In 2023 one of the AI debates was about when information and data on the web can be used to train AI models.
In late December we saw another billion-dollar court case, as the New York Times alleged that Microsoft and OpenAI had unlawfully used its news articles to create AI models.
In 2024 and beyond, alongside the debate about how information can be used to train AI, I expect we’re going to see more debate about how services can be used by AI.
If we peer into the future, perhaps we need terms of service for robots?
AI services will connect services from multiple existing organisations in new ways
As Sarah Gold puts it “when applied to technical infrastructure, LLMs become a kind of connective tissue…[they] will connect different systems – at scale. They will execute complex and multi-part tasks, across different departments and organisations”.
From a consumer perspective this will manifest as different kinds of services, ranging from learned services that are deliberately designed for particular tasks, like moving home or arranging a holiday, to more general-purpose AI agents that can help with a range of tasks.
The technology to enable these kinds of services is getting ever closer to working at scale, but services are not only made of technology.
Service providers will have relationships with both users and AI providers
From the perspective of existing service providers this new wave of AI services will look like another relationship in addition to the existing relationship with service users.
These kinds of three-way relationships obviously already exist. Many people use travel agents to help arrange holidays. Supermarkets bring together food from multiple suppliers and make it available in one place. My sisters and I help my elderly mother use various services.
But AI has the potential to create new arrangements at speed, at scale, and without pre-existing contracts. To provide a simple example, an AI service could ring a series of hotels to make bookings for a train trip across Europe.
Many service providers will not be happy with AI services using their services
But just as rights holders have not been happy with AI companies using their information, many service providers will not be happy with AI services using their services.
Some of this discomfort will be from a simple fear of competition, but in other cases it will be because of other fears such as:
- consumers being dissatisfied because a service does not meet their expectations, perhaps because an AI service generated an incorrect description of a hotel
- risk of regulatory action, perhaps because the AI service does not collect identity information in a way that meets local requirements
- degrading work being created for humans, for example through a large number of AI service providers using computers to make repeated phone calls for information
- whether the existing service provider and AI service provider are receiving fair shares of the value created by the combined service
Robots terms of service
Some of these fears can, and will, be overcome by existing mechanisms.
Liability laws are being updated. AI services that take the mickey will be sued. Some AI and service providers will negotiate new contracts that create new rules for payment of commission, or for how workers should be treated. This will all need to happen across a large number of sectors, industries, geographies.
But I also wonder if we need to look at some other existing concepts, like terms of service: the often lengthy legal text that we humans agree to when we use a service.
If we are heading to a future where new three-way relationships between humans, service providers, and AI-powered services can – and probably will – be created at speed, at scale, and without pre-existing contracts then, perhaps, service providers will need new terms of service that describe how AI robots can use their services?
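To make the idea concrete, here is one way such terms might look: a machine-readable file, loosely modelled on robots.txt, that a service provider could publish for AI agents to read before using the service. Everything below is hypothetical – the filename, the directives, and their meanings are invented for illustration, not an existing standard.

```
# Hypothetical /ai-terms.txt – all directives invented for illustration
agent: *
allow: /search, /availability
disallow: /booking              # bookings require a negotiated contract
identity: required              # agents must disclose who they act for
rate-limit: 60/hour             # cap on automated calls to the service
commission: negotiable          # revenue share to be agreed before use
contact: partnerships@example.com
```

A file like this would not resolve questions of liability or fair value on its own, but it could give AI services a starting point for discovering what a provider permits, in the same way robots.txt does for web crawlers.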