
Experimenting with using Replit to build a postal address validator

In some spare time, and with some spare cash for the fees, I’ve been experimenting with AI-enabled code generators recently.

Here are some things I learnt from using Replit to build two tools: a UK postal address validator and a French postal address validator.

The two address validators

Both of the experimental tools I built are publicly available – and are on github, here and here – but do be aware that they are experiments. They are not guaranteed to be either reliably useful or legal. I’ve not looked at the code and have particular concerns over how the UK version handles copyright.

With those reservations in mind, here are some thoughts I had after using Replit to build those two tools:

  • the experience of building the tools was pretty easy, and at times astonishing
  • but Replit didn’t give me confidence that the tools would be reliable
  • it neither checked nor encouraged me to check whether what I was doing was legal
  • and it was a heck of a lot easier to build an address validator for France than for the UK.

The experience of building the two apps was pretty easy, and at times astonishing

I am *not* a software engineer but I can do some coding, understand software engineering practices, and have some experience working with public sector data like addresses. Within that context the tool development experience was pretty straightforward and at times astonishing to my tiny mind. 

I could both use the chatbot-like interface to tweak small things – like content on the front-end – and large things – like working out how to download and make use of specific address data sources that it was pointed at. That latter bit felt astonishing.

Replit taking on the task of integrating data from Datadaptive 

At one point I needed to go and create an API key, but didn’t feel the need to go and look under the hood at the code or database.

But Replit didn’t give me confidence that the tools would be reliable

At the start of the build process for the UK address validator Replit said it had built something that worked, but it obviously did not. It took three cycles of me saying things like “that doesn’t work, look harder” before the tool started working and even then it had not done what it had been asked to do.

Replit had been told to build an address validator, but instead it built a postcode validator

After that initial confusion Replit reported that it was running more tests, but it never showed the results. I had to specifically ask it to show test results before it reported that it was loading a testing skill and running some tests.

Despite this, Replit had been happy to publish the app with no indication, either to myself or to potential end users, that it might not be reliable.

Replit might give the feeling, but not the reality, of reliability. They should try to fix that.

And Replit neither checked nor encouraged me to check whether what I was doing was legal

Replit struggled to understand or communicate data protection risks, which are important in France, or copyright risks, which are important in the UK.

It happily built functionality to collect and republish the addresses that people validated using the tool without telling me this should be made clearly visible to users. Eek. Don’t test it with your home address!

When I suggested Replit should use some public sector data containing addresses that was released under the UK’s OGL (Open Government Licence) it told me that using the data was fully permitted. This misses that the UK OGL contains exemptions both for personal data and for third party rights. 

This is an incomplete summary of the legal position. The UK OGL has a set of exemptions that are important to understand.

There have been multiple cases in the UK where people have been threatened with legal action over infringements of copyright when using address data. Replit had even suggested I use a service – https://getaddress.io – that recently closed because it lost a court case over third party rights in address data. Silly Replit.

To look deeper into the copyright complications, I pointed Replit at the UK Land Registry’s Price Paid data. This data is published under the UK OGL with an explicit warning that the Royal Mail and Ordnance Survey reserve some rights.

This time Replit communicated the restrictions but suggested they could be worked around by showing the residential property price when validating the address. I don’t think the courts would agree with this interpretation.  

The purpose of the tool was to validate addresses, not to provide residential property price information.

After a bit of prompting I got Replit to start communicating the various data protection and copyright risks to potential users of my experiment, but it did leave me wondering.

  • How many other Replit users are happily producing apps that unhappily break the law with risks to themselves and other people? 
  • Whether, as well as the law potentially needing to become more machine-readable, these new coding tools need to get better at communicating legal requirements and risks to the people who use them?
  • And should governments play a role in making that happen?

After all, the increased ease of using this wave of coding tools seems likely to increase the number of people who produce software, whether it be in tools like Replit or in real-time when using an AI agent. I suspect it will be increasingly important that AI-generated software and the humans that are responsible for it follow the law. 

It was a lot easier to build a French address validator than a UK one

Finally, there was just one more thought. One that is likely to be obvious for anyone who works with UK geospatial data.

It took me several hours of to and fro to produce a useful looking UK address validator that did not completely rely on expensive licences and that could communicate to users the legal requirements that came with reusing the data. And I already knew quite a bit about how to do that.

It took me just 10 minutes to do the same for France, and that came with considerably less risk.

This is partly because the French government has already put in the operational and technical work to build an open address database and provide an API that tools like Replit can use. But it is also because the French government has put in the legal and financial work to ensure that they could provide this data for free and under an open licence which is more permissive than the UK’s OGL. The French government – and others – have done this for many other public sector datasets too.
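The open address database referred to here is France’s Base Adresse Nationale, which exposes a free public search API at api-adresse.data.gouv.fr. As a minimal sketch of the kind of validator Replit could assemble on top of it (the `label` and `score` properties are part of the API’s documented GeoJSON response, but treat the details, including the 0.7 score threshold, as illustrative assumptions rather than the code my experiment actually used):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# The Base Adresse Nationale search endpoint (public, no API key required).
BAN_SEARCH_URL = "https://api-adresse.data.gouv.fr/search/"


def best_match(response: dict, min_score: float = 0.7):
    """Pick the top candidate from a BAN GeoJSON response.

    Each feature's properties carry a normalised `label` and a relevance
    `score`; return the label of the first feature above the threshold,
    or None if nothing scores highly enough.
    """
    for feature in response.get("features", []):
        props = feature.get("properties", {})
        if props.get("score", 0.0) >= min_score:
            return props.get("label")
    return None


def validate_address(query: str, min_score: float = 0.7):
    """Look up a free-text address against the BAN search API."""
    url = BAN_SEARCH_URL + "?" + urlencode({"q": query, "limit": 1})
    with urlopen(url) as resp:
        return best_match(json.load(resp), min_score)
```

Because the data is open and the API is free, that is roughly all a French validator needs; there is no UK equivalent that a tool like Replit can reach for as easily.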

If we are moving to a world where AI-enabled coding tools, like Replit, are more widely used then the work that countries like France have done could prove invaluable in helping many more people produce software tools that work, are reliable and are legally safe to use. The UK has some catching up to do.

Robots terms of service

In 2023 one of the AI debates was about when information and data on the web can be used to train AI models.

In late December we saw another billion-dollar court case as the New York Times alleged that Microsoft and OpenAI had unlawfully used news articles to create AI models.

In 2024 and beyond, then, as well as the debate about how information can be used to build AI, I expect we’re going to see more debate about how services can be used by AI.

If we peer into the future, perhaps we need terms of service for robots?

AI services will connect services from multiple existing organisations in new ways

As Sarah Gold puts it, “when applied to technical infrastructure, LLMs become a kind of connective tissue… [they] will connect different systems – at scale. They will execute complex and multi-part tasks, across different departments and organisations”.

From a consumer perspective this will manifest as different kinds of services, ranging from learned services that are deliberately designed for particular tasks, like moving home or arranging a holiday, to more general-purpose AI agents that can help with a range of tasks.

The technology to enable these kinds of services is getting ever closer to working at scale, but services are not only made of technology.

A concept of a learned service that helps a family move home, by Projects by IF.

Service providers will have relationships with both users and AI providers

From the perspective of existing service providers this new wave of AI services will look like another relationship in addition to the existing relationship with service users. 

With AI agents there are important relationships between users, service providers, and the organisations that provide AI services. The new relationship between the AI service provider and its service users is also very important, but this post focuses on the relationship with existing service providers. Picture by me with assistance from DALL-E.

These kinds of three-way relationships obviously already exist. Many people use travel agents to help arrange holidays. Supermarkets bring together food from multiple suppliers and make it available in one place. My sisters and I help my elderly mother use various services.

But AI has the potential to create new arrangements at speed, at scale, and without pre-existing contracts. To provide a simple example, an AI service could ring a series of hotels to make bookings for a train trip across Europe.

Many service providers will not be happy with AI services using their services

But just as existing service providers have not been happy with AI companies using information, many service providers will not be happy with AI services using their services.

Some of this discomfort will be from a simple fear of competition, but in other cases it will be because of other fears such as:

  • consumers being dissatisfied because a service does not meet their expectations, perhaps because an AI service generated an incorrect description of a hotel
  • the risk of regulatory action, perhaps because the AI service does not collect identity information in a way that meets local requirements
  • the creation of degrading work for humans, for example through a large number of AI service providers using computers to make repeated phone calls for information
  • whether the existing service provider and the AI service provider are receiving fair shares of the value created by the combined service

Robots terms of service

Some of these fears can, and will, be overcome by existing mechanisms.

Liability laws are being updated. AI services that take the mickey will be sued. Some AI and service providers will negotiate new contracts that create new rules for payment of commission, or for how workers should be treated. This will all need to happen across a large number of sectors, industries, and geographies.

But I also wonder if we need to look at some other existing concepts, like terms of service: those often lengthy bits of legal text that humans agree to when using a service.

Picture by me with assistance from DALL-E.

If we are heading to a future where new three-way relationships between humans, service providers, and AI-powered services can – and probably will – be created at speed, at scale, and without pre-existing contracts then, perhaps, service providers will need new terms of service that describe how AI robots can use their services?
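To make the idea concrete, one could imagine such terms as a machine-readable file modelled on robots.txt. This is a purely hypothetical sketch – no such standard or format exists today, and every field name below is invented – but it shows how simple the mechanics could be:

```python
# A purely hypothetical "robots terms of service" format, loosely
# modelled on robots.txt. Neither the format nor these field names
# exist today; they are invented for illustration only.
HYPOTHETICAL_TERMS = """\
Agent-Class: ai-assistant
Allow: /availability
Disallow: /booking
Commission: 0.05
Rate-Limit: 10/minute
"""


def parse_terms(text: str) -> dict:
    """Parse the hypothetical 'Key: value' format into a dict of lists,
    so a directive like Allow can appear more than once."""
    terms: dict = {}
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip blank lines and comments
        key, _, value = line.partition(":")
        terms.setdefault(key.strip(), []).append(value.strip())
    return terms
```

An AI agent could fetch such a file before touching a service, just as well-behaved crawlers consult robots.txt today – though, as with robots.txt, compliance would be voluntary unless backed by contract or law.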
