It needs to be more disruptive if it is going to help deliver on Government Ministers’ hopes that data will be the backbone of the UK’s recovery from Coronavirus. It needs to do more to encourage the UK’s geospatial institutions to adapt to and thrive in the 21st century.
This blogpost contains an idea to help.
Understand the address mess and then start cleaning it up with a new government service.
UK address data is a mess
UK address data is a mess. The data is low quality. It is hard to understand what can legally be done or how to fix things.
The problems that affect citizens, particularly with new build properties, are well known. Mismatching and missing addresses have caused problems during the Coronavirus pandemic.
And governance is a mess. The Geospatial Commission, GeoPlace, Scottish Improvement Service, Northern Ireland Land & Property Service, local authorities, Ofcom, Land Registry, the VOA, Ordnance Survey and the privatised Royal Mail are all involved. Organisations find themselves talking with several of these institutions to get things done.
Other countries faced similar issues and are cleaning them up. While there are lots of dedicated people in the UK, we are not making fast enough progress in doing the same.
UPRNs will not clean up the address mess
Government has many levers to try and clean up this mess.
The website does have one nice feature: the text box for searching addresses. A user starts typing an address and it helps them find the address and its UPRN. It is a neat bit of work by GeoPlace and Aligned Assets.
Turning that neat address search functionality into a common component that any service designer can easily reuse, let’s call it GOV.UK FindMyAddress, would be a place to start cleaning up the address mess.
GeoPlace and Aligned Assets should be working with local authorities, the Government Digital Service and the Digital Land team at MHCLG to do this.
The search field would become an API designed to improve the flow and data quality from a user entering an address to a UPRN being passed to back-end services.
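As a sketch of how such an API might behave (the function names, fields and matching logic here are all hypothetical illustrations, not an existing GOV.UK or GeoPlace service):

```python
# Hypothetical sketch: an address-search API that maps a user's partial
# address to candidate addresses and their UPRNs. The data and matching
# logic are illustrative stand-ins for a real national address register.
from dataclasses import dataclass

@dataclass
class AddressMatch:
    uprn: int      # Unique Property Reference Number
    address: str   # human-readable single-line address

# A tiny in-memory stand-in for the national address register.
ADDRESS_REGISTER = [
    AddressMatch(100023336956, "10 Downing Street, London, SW1A 2AA"),
    AddressMatch(10090373837, "11 Downing Street, London, SW1A 2AB"),
]

def find_my_address(query: str, limit: int = 10) -> list[AddressMatch]:
    """Return candidate addresses matching a partial user query.

    A real service would use fuzzy matching and postcode indexing;
    this sketch just does a case-insensitive substring match.
    """
    q = query.lower()
    return [m for m in ADDRESS_REGISTER if q in m.address.lower()][:limit]

# A front-end component would call this as the user types, then pass
# the chosen UPRN (not the free-text address) to back-end services.
matches = find_my_address("downing")
print([m.uprn for m in matches])
```

The key design point is the last step: the back-end service receives a UPRN rather than free text, which is where the data quality improvement comes from.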
The code should be open source. Many people will help maintain it, while anyone can fork the code and do their own thing. Openness supports competition as well as collaboration.
The API would also help to discover errors or omissions in the data. It can feed those back into GeoPlace's and local authorities' address management processes.
Collaborative maintenance will gradually improve both data quality and the service. Win-win.
Other things that a service team should explore
A service team would also explore business models and governance.
Following England’s abject attempts at getting a contact tracing app up and running, we need to try something different as we start to exit lockdown. Hopefully we will see more focus on getting manual contact tracing working in local places, including pubs.
Meanwhile there are expectations that more pubs will have apps that allow people to order a drink without going to a crowded bar. The Prime Minister has said that only table service will be allowed. Some civil servants have apparently termed this ‘appy hour’**.
There are three different needs here – the ability to order a drink for a table, the ability to log contact information for who was in what pub at what time, and the ability to provide contact information to contact tracing teams – but I expect many places will combine them into one ‘thing’. It reduces the effort for pubs and the millions of people that use them every week.
It is good that England is (finally!) getting a grasp on the basics of social distancing and contact tracing, but now we need to learn how to do it well enough. Unfortunately, I fear that, as with the contact tracing app, we will end up with another adversarial and national fight over how to do this.
I fear lots of discussion of national tech solutions that could work for the entire country with little understanding of the needs of different people and pubs, the relationships between pubs and their customers, or thought about what data and apps pubs already collect and use.
Yesterday I thought I’d take a walk round our town, Whitley Bay – home of the famous Spanish City – to look at the many closed pubs. Here are some pictures and notes to help other people understand the context in this one English town.
It might be useful if your interest is in contact tracing, building an app, pub app policy, data protection, or simply picking a pub to go to in Whitley Bay.
Part of the local Blackrose chain. Lots of TVs for sport. Always busy with mostly male daytime drinkers.
Their window has a new poster up advertising an app. There was no link for me to try it.
I expect that they will get a licence to use one of the many pub apps that will emerge on the market. I expect that many of the customers will simply shout when they want a drink.
I also expect that lots of their customers will either refuse to give contact information or provide false names, probably the names of ex-Newcastle players.
Despite that the pub’s staff will know most people’s first names, jobs and who they drink with. Good pub staff know things like that. That could make for some interesting conversations if a local contact tracing team came knocking on the door.
The Fat Ox is part of the Craft Union chain which has over 300 pubs. It has a couple of TVs for sport. There is pub-rock, ska, and soft rock on the weekends.
Their windows did not have a new poster up advertising an app.
Despite their larger size I would expect the Craft Union chain to take the same app as the Blackrose chain, while the Fat Ox will have the same challenges with customers, false data and contact tracing as The Victoria.
Independent pub running out of an old shop. Brews its own beer and stocks other microbrews.
Designed for middle-aged drinkers – a large proportion of whom will be members of Camra, the UK’s campaign for real ale. I am not a member of Camra but can occasionally be seen here, quietly reading a book in the corner.
The pub has a sign saying “we have no wi-fi, talk to each other”. Mobile phone signals work perfectly well and no one stops you if you use your phone. If I remember correctly the till is manual. It’s a quiet pub.
I’d expect people to use hand signals to order beer, the staff to have a decent idea of who was in the pub on which night, and for there to be a manual book for people to sign in and out. Just like they have a manual book for tracking accidents.
Most of the names will be accurate, if a little hard to read.
Image: the author’s mockup of a contact tracing book based on the type of accident books that exist in pubs, restaurants and museums up and down the country. As you might be able to tell, Art was my worst subject at school.
Sister pub of the Split Chimp in Newcastle. A real ale pub with good beer and, if you’re lucky, excellent pork pies. It has the best pub music in Whitley Bay***.
It has younger customers than the Dog and Rabbit. Because of its location – it is part of the revamped Spanish City on the seafront – its customers also include both dog walkers and people who are visiting the seaside.
They run their till on an iPad. I expect that whatever software they use will be upgraded to support drink ordering apps and contact tracing.
Most of their customers will give them accurate data, because the app will be designed to encourage people to provide it with accurate data to feed the software supplier’s machines.
Beefeater is a big chain of pub restaurants owned by the large Whitbread organisation. My wife is confident that this pub does the best value for money steaks in Whitley Bay. If you time it right then you can get a great table to watch football matches. It is also on the seafront and connected to a hotel so gets a lot of visitors.
I’m not aware of them having an app. I expect Whitbread have got a team of staff and suppliers working to develop something and will have it ready to launch when the pubs reopen. I doubt the app will be as good as the Wetherspoons one.
Most of their customers will give it accurate data, because the app will be designed to encourage people to provide it with accurate data to feed Whitbread’s marketing machine.
The Rockcliffe is a classic British backstreet boozer. Once a week there is a meat raffle. Scampi (flavour) fries and bacon (flavour) rashers are always available. It always seems to be full of people who’ve stopped off while taking the dog for a walk, people playing backgammon, and people having a quick drink after work before going home to eat dinner.
I’d expect them to try to take the same app as The Fat Ox and The Victoria, but their customers will end up using the same manual methods as The Dog and Rabbit.
What can we learn?
These are not all of the pubs in Whitley Bay. Far from it. I didn’t even mention Berties Club (with its famous karaoke), the Nord Bottle Shop (which has a backroom to drink a bottle or two in), or the Whitley Bay Brewing Company. But hopefully this gives people a taste of the variety that exists in most small towns.
The pubs are a mix of large chains, small chains and independents. That will affect the type of apps that they try to build or buy. The customers in the pubs vary and have differing relationships with the pub and the staff that work there. Those dynamics will affect how customers behave when they use apps, and whether or not staff try to persuade them to use them as they are designed.
Within this single town there is a range of different contextual relationships, problems and opportunities. That will affect contact tracing and efforts to contain the pandemic.
So, if you’re in a national conversation about pubs, apps and contact tracing and thinking about a single national solution then I’d suggest that you instead start thinking about how to shape and regulate a market with many local solutions.
* It is not clear from these briefings whether the idea is only for England, or for all four UK nations. After all, public health is a devolved matter. I expect it is only England.
This post is based on desk research, conversations with various people in national and local organisations, and a talk I gave at an OpenDataSavesLives meeting. For more Coronavirus stuff that I’ve worked on see the Ada Lovelace Institute’s “Exit Through The App Store“.
Coronavirus is a pandemic. For a couple of centuries we have known that data is one of the most powerful tools in a pandemic. The UK prides itself on being a world leading nation in the use of digital, technology and data. Yet in England, the largest of the UK’s four nations, we are struggling to get data to local places so that they can use it to help save lives.
The role of local places in a pandemic
In England local authorities are responsible for public health in their area. They also play a vital role across many services including housing, business support, health and social care. They work with a range of partners to do this: hospitals, doctors, care providers, police forces, charities, businesses and citizens (through both existing and new structures).
At the moment England can see the end of the first wave of the pandemic and is starting to relax lockdown measures. The focus has shifted to what is called test, trace, and isolate. Widespread testing to understand where the disease is, contact tracing to track down who else might have it, and isolation to contain new outbreaks of the disease.
These are tasks where national decisions and health research play a role, but a similarly important role is played by local places.
Having good data about the spread of the virus in local places might help a community group to tailor hygiene advice to meet language needs, a business organisation to distribute hand sanitisers to shops, care homes to take extra precautions, public health officials and statisticians to produce local predictive models, or a local authority to manage a local lockdown.
Local organisations are often the most appropriate organisations to do this because their staff know their places and the people who live there. They are trusted, or not, in different ways than central government is.
Data and information about the pandemic
But to take these decisions they need information.
Some of this information will come from these organisations’ connections with their places – a community organiser might hear of an outbreak because a friend is affected by it, or people might see complaints about shop hygiene on a local social media page.
Other bits of information need to come from data, for example the number of people tested in an area and how many were positive, or the number of contacts traced and whether there is a difference between demographic groups.
Local places are struggling to get access to this data, but it does exist.
The national government has set up national programmes like the Covid-19 data store, NHS Test + Trace, the NHS symptom tracking service and Project OASIS – which brings together data from various symptom tracking apps. As an aside this seems to be an exceptionally English approach, most other nations of a similar size seem to have built on existing regional and local structures.
All of these national programmes use data, for example to improve operational performance, to inform national decision makers, to support medical researchers, and to inform national media debate.
But the data they collect and steward is not getting to local places and those local places need it too.
Charities collecting and publishing data about social care because of government failure. Local academics being told that their research needs to conform with national health needs. Regions exploring whether to launch their own symptom tracking services. Businesses offering data services that may be of lower quality than that which the national government already holds. Local officials and community groups struggling to find out who to speak with to even start a conversation about data access.
In May there were reports that an interim operational review by a cross-government team highlighted the problem of data access. Tom Riordan, the Chief Executive of Leeds City Council, was given a role in the national Test + Trace programme after that review. His role covers more than data access but, as a result, some progress on it seems to be happening.
Despite this the national programmes still lack urgency and there are now concerns that the government will supply local places with dashboards that it and its national partners design, rather than giving local places access to data so that they can use it to design and operate whatever decision making tools they need.
Meanwhile the public complaints will continue and the opportunity to make decisions that could save lives will be lost.
Accessing and using data in trustworthy ways
When data access is provided then it will need to be used in trustworthy ways.
Local public sector organisations have had the legal power to use personal health data since COPI (Control of Patient Information) notices were issued back on 1 April 2020. The notices were passed to support this kind of use.
Other organisations, such as charities or businesses, can use open data which is aggregated to a safe level.
For these organisations, daily publication of symptom, testing and contact tracing data at the level of LSOAs (Lower layer Super Output Areas) is likely to strike the right balance between data protection and usefulness for public health. It is hard to be certain without access to the data.
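A minimal sketch of what that daily publication step might look like, assuming event-level records carrying an LSOA code and applying simple small-number suppression (the threshold of 5 and the record format are illustrative choices, not official disclosure-control guidance):

```python
# Hypothetical sketch: aggregate event-level test results to LSOA level,
# suppressing small counts before publication as open data.
from collections import Counter

SUPPRESSION_THRESHOLD = 5  # illustrative; real thresholds come from disclosure-control policy

def aggregate_by_lsoa(records: list[dict]) -> dict:
    """Count positive tests per LSOA, replacing counts below the
    threshold with a suppressed marker so that individuals cannot be
    singled out in sparsely affected areas."""
    counts = Counter(r["lsoa"] for r in records if r["result"] == "positive")
    return {
        lsoa: (n if n >= SUPPRESSION_THRESHOLD else "<5")
        for lsoa, n in counts.items()
    }

# Illustrative event-level records (LSOA codes are made up).
records = [
    {"lsoa": "E01008162", "result": "positive"},
    {"lsoa": "E01008162", "result": "negative"},
    {"lsoa": "E01008163", "result": "positive"},
] + [{"lsoa": "E01008164", "result": "positive"}] * 7

print(aggregate_by_lsoa(records))
# → {'E01008162': '<5', 'E01008163': '<5', 'E01008164': 7}
```

Something this small is the whole point: the hard part is not the aggregation code but agreeing who runs it, on what data, and how often it publishes.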
If the national programmes do not have the expertise to navigate these issues then they could get help from the Office for National Statistics, which can both work through how to publish the data and help to communicate how this data for local operational decision making has different characteristics to statistical data.
The power of networks
When the data is available then it can start to rapidly be put to use.
Some local authorities are already working with their communities to prototype what they can do when, or if, the data arrives.
Networks like these can help get the data used in building tools for local places, evaluate the outcomes to discover what works and what does not, and share their learnings across the nation.
But they need the data
There are lessons to be learnt here, and not just about public health programmes in a pandemic.
If the UK wants to level up across the country it will need to do a lot more work on devolving data governance and learning how to get both local places and citizens represented in decision making about data. Perhaps the plan for the UK’s recovery after the pandemic or the national data strategy will tackle that particular challenge.
But there are also immediate steps that need to be taken.
We urgently need to get data out of these national programmes and to local places. It will help save lives.
This blogpost was originally published in 2016 when I worked for the Open Data Institute. I have learnt things and would write my parts differently now, for example framing as an “open and trustworthy” data future and with a better understanding of distribution of power, human rights, rule of law, and equitable sharing of benefits. I am republishing here as the post was one of many that were not copied onto the current Open Data Institute site. That makes it hard to find, and I want to be able to easily refer to it in the future. The content is available under a CC-BY-SA licence attributable to the Open Data Institute.
This [was] the fourth in a blog series discussing how the future affects data infrastructure. The first describes why we are considering three potential data futures, the second explores the locked-down future, and the third explores the paid future.
By Peter Wells and Anna Scott
What will our ‘data future’ look like? Well, there are three possible directions in which it could turn: a locked-down future where data collection and use is tightly restricted, a paid future where data is licensed for money, and an open future where data is made as open as possible.
At the Open Data Institute, we like to describe data as roads: roads help us navigate to a location, data helps us navigate to a decision. While the locked-down future has missing roads and locked gates, and the paid future is dominated by toll booths, the open future has roads that we can all use. Both the paid and locked-down futures are more limited in their use of data than the open future. Where they create less value and only some can benefit, the open future creates a virtuous circle from which everyone can benefit.
In the open future governments, businesses and civil society use and publish as much data openly – for anyone to access, use and share – as possible. Open data will be maximised, while privacy is respected.
We choose openness for economic, environmental and social good
The choices we make with data are extremely important. The open future will only emerge as governments, businesses and people embrace it and create an environment that encourages the open future to flourish.
To some extent this open future for data is an evolution of openness in society: open government, open web, open source, open standards, open innovation and open culture. Within each of these movements, some people choose paid and closed models, while others choose openness. We believe maximising openness is the best choice because it brings the biggest benefits across society.
The open future and the Data Spectrum
In the picture above the red line is the locked-down future, the blue line is the paid future and the green line shows the open future on the Data Spectrum, running from closed to shared to open data.
In the open future, we can expect the closed part of the spectrum to contain more data than in the paid future. In the paid future, the lure of money leads to us sharing data that should have been kept private or could have been opened to benefit all of society. People feel that everything can be shared if the price is right and sacrifice privacy for cash.
In the open future, we have a better understanding of what should be closed and what should be open. The ‘shared’ layer is significantly smaller. It is reduced to those occasions when we share data for service needs – such as through open APIs – or for research.
The rest of the data is open, for anyone to access, use and share.
Some of our data infrastructure will have been built by organisations working together to solve common problems. Collaborative maintenance models – like that used by OpenStreetMap – are likely to exist in great numbers, and be used for other core data assets such as addresses. Responsibilities, costs and benefits are shared in these parts of the infrastructure. The organisations work in the open and the data produced by their collaboration is made open. Their culture has changed to be one of openness, and this matches what their communities expect.
In other parts of our data infrastructure, data will be shared between organisations.
Data intermediaries, or aggregators, will add value by combining datasets or offering additional services. They are likely to use freemium models, with an open data feed that anyone can use and a premium feed for customers who need high-volume usage or early access to data.
There will also be organisations that share personal data, facilitated by institutions of trust who certify sharing according to agreed principles and open standards. Both the organisations and institutions of trust will be transparent in how they make decisions. Where it does not damage privacy, they will publish aggregated data about the personal data that is being shared.
Releasing aggregated data for the benefit of everyone and being transparent about how and when data is shared can increase innovation, improve trust and help make consent meaningful. There will be the necessary expertise and data skills in the organisations that deliver services, maintain and regulate data as well as increased data literacy amongst the people who choose whether or not to use those services.
The single-minded focus on data sharing – whether it be personal data or other data assets – that we saw in the paid future is like building a road network that consists mainly of toll roads. In the open future, data is like the road infrastructure that we have now: most roads, no matter who maintains them, are free for everyone to use.
This will not be easy. Realising our open future will require us to build and maintain a data infrastructure that is as reliable and open as possible, and that maximises value by bringing together privacy and openness. By doing this we support transparency and accountability; we grow our economies and we receive better services.
Data helps us build this future but it is humans that choose the direction.
Peter Wells is an Associate and Anna Scott is Writer / Editor at the ODI. Follow @peterkwells and @annadscott on Twitter.
I used to lead the Open Data Institute’s work on data institutions. The team both piloted data trusts and explained that a range of approaches existed – including things like data representatives and data cooperatives – that can change how decisions are made about data. Hopefully to make those decisions more trustworthy. There are many other people working on data institutions in the UK, in Europe and around the world. I’m often surprised by how many.
Over the last couple of weeks I have been talking with people about data institutions. Many of the conversations surface similar implicit assumptions.
There can be only one
In many of the conversations people assumed that there could be only one data institution within a particular context. They had not thought about whether and when there might be multiple.
Some data institutions will exist to steward data for which you might want there to be only a single source of truth (I know, I do love a bit of epistemology and discussions about the nature of ‘truth’, but that would be an unnecessary diversion in this blogpost) – for example the list of Prime Ministers of a country, the list of websites that exist, or who you are married to.
Many others will steward data or have a purpose where there might be multiple things doing roughly similar jobs but, perhaps, with different methodologies or priorities. Maybe one has a purpose of “for the benefit of the people of Newcastle”, another has “for the economic benefit of the people of Newcastle” and a third has “for the benefit of the businesses of Newcastle”. A single word can make a big difference.
Sometimes there should be only one data institution but multiple will exist. That’s life. We live in a wonderfully imperfect world.
Being open to the need to work with other people and other institutions is a better starting assumption than there being only one. Institutions might compete with each other, cooperate with each other, or both, but do expect it to happen.
Rip it up and start again
Another assumption was about the need for something new.
The way we steward data at the moment is not working, therefore we must need a new institution to fix the problem, right? Maybe…
Sometimes we need to fix things that are not working, or at least try to make them better. An existing institution might provide vital services, it might contain valuable knowledge, or it might do things that – shock! horror! – are only loosely related to data. Creating a new institution might break existing and important things.
I do not know of a good methodology to help people decide when to try a revolution and when to try evolution, but do make sure that it is a conscious decision.
You forgot government
Many people thought that they needed a new type of data institution – like a data trust or data cooperative – when actually they might just need to improve a simple, old-school democratic institution like a bit of government.
I am very conscious that I live in the UK, a high-income country with an old and (relatively…) stable democracy. Not everyone does. I’ve worked a lot internationally, but mostly in similar countries. In these countries we have many institutions that are already legally responsible and democratically accountable for stewarding data for a particular purpose.
There will be institutions responsible for land registries, local places, criminal justice systems, welfare payments and – in a country with a national health system like the UK – health and social care. Perhaps, rather than working around those government institutions you need to use democratic processes to change their behaviour to make them more useful and trustworthy.
Some people seemed to forget the government and implicitly assumed that they needed to take responsibility into a new institution that they would build and run.
Sometimes we do need to take responsibility away from the government, but at other times we need to add new responsibilities to government or just make existing bits of government work a bit better.
Again, make it a conscious decision.
Building institutions takes time
Building institutions takes time. Not just your time, but other people’s too. It will take even longer if you do not think about why you are doing it and do not surface and challenge assumptions about what any new institutional arrangements should look like.
Assumptions like whether there will be multiple institutions, whether there should be something new, whether the institution should be part of the government, what approach you need, or even whether that approach is suitable for your particular context.
Making those assumptions explicit and challenging them is likely to help you move a bit faster and be a bit more effective at actually making people’s lives a bit better.
An experiment in writing fiction about a sociotechnical system.
Fred and Gabriel knew that the virus was under control, but they were still worried.
The DNA test on their newborn child, Ariel, had shown that Ariel might easily be infected by the virus. The result was red. Fred and Gabriel’s own DNA tests had been taken years ago, when the tests had first been invented. Fred was amber and Gabriel was green.
Those scores were ok but red spelt danger.
The test was designed to predict how likely it was that someone would catch the virus. The simple scores of red, amber and green were designed to be easily understandable. The real test results were more complex.
Everyone was susceptible to the virus, particularly if there was a large number of infected people in a group. Scientists had found that people who were easily infected shared certain patterns in their DNA. The tests were designed to spot those patterns. It was important to know who was susceptible because people became infectious before any symptoms were visible. To stop the spread of the virus it was necessary to reduce the chance of the first infection.
Reducing the spread of the virus was a priority for everyone. When the virus had first appeared it had killed many people and caused panic in many, many more. The virus was under control but people needed to be confident that there would be no more major outbreaks. A systemic response had been required.
The maps and the rules
The system was designed to minimise the chance that people who could be easily infected with the virus could mix with each other. That would reduce the chance of a single infection rapidly spreading.
The DNA tests were part of this system. Everyone needed to be tested. The results were recorded and made available for everyone to see.
People were wary about other individuals whose results showed danger but to reduce the chance of inadvertent mixing there were maps and rules. The rules said that spaces like towns, hospitals, supermarkets, and offices could only have a maximum percentage of reds and a maximum percentage of ambers.
Anyone could look at maps that showed both the maximum and the current percentage of red, amber and green in each place. The maps helped people know if the rules were being adhered to.
Fred thought the maps were beautiful.
To make the maps and rules work it was necessary to know where individuals were. There was a network of cameras for this.
The tracking cameras were originally deployed by the government’s centre for data modelling. The centre made sure the population was happy by measuring happiness and recommending ways to improve it. Their early models suffered as the data quality was poor. The solution was to collect and share higher quality data in larger volumes. The people who worked in the centre used the images of people captured by the cameras to estimate the levels of happiness in different parts of the country.
The original happiness tracking system was repurposed for the virus through a software update.
Originally the system had identified people through mobile phones, glasses and watches but people found it too easy to swap these devices with each other so the system now used other methods. Face masks had been popular when the virus first appeared but were now banned as the best way to stop an outbreak relied on identifying people. As well as faces, the system looked at other attributes like the shape of people’s bodies, how they walked, and how they gestured while they spoke.
At first it had been expensive and slow to do these checks as it required expert people recognisers. Other experts watched the people recognisers to learn enough that they could design algorithms to make the process faster. Over time the people who worked at the camera manufacturers had made it even easier by optimising the camera hardware to meet the needs of the algorithms.
Gabriel worked in one of the organisations around the country that designed, installed, maintained and updated the cameras, and the network that connected the cameras together. The job was as important as those maintaining other bits of vital infrastructure like electricity power stations, roads, and water networks.
The images from the cameras were linked to individuals and test results. The beautiful maps updated in time with people moving around.
The government had given police officers, immigration officials, nurses, teachers, landlords and employers the responsibility to make sure the rules were enforced. The tests, the rules, the maps, the cameras and the people were all part of the system.
If a maximum percentage was breached then it was someone’s fault. That person risked a fine, jail or losing their job. But if they kept the mix under control then there were rewards – perhaps a promotion or more simply praise from the people who had been kept safe. You could spot one of the responsible people by looking for people staring at a map with moving dots of green, amber and red with an occasional burst of movement to get someone out of a room before someone else entered through another door.
The rules would affect Ariel, Fred and Gabriel. They would affect where Ariel could go to school or, many years in the future, where to get a job or who they could fall in love with. They would affect where the family could live and go on holiday. They would even affect which park to play in on which day and which other families to play with. The family would need to stare at maps too.
The rules would affect Fred and Gabriel in other ways. The system knew that they were Ariel’s parents and shared bits of their child’s DNA. If Ariel’s score was red then this might mean that Fred and Gabriel were more susceptible than their tests had originally shown.
The test results were not perfect. They were just a prediction. More data could improve the prediction. Because of Ariel’s red score the system might change Gabriel’s green to an amber, while Fred might become a red.
Breaking the system
Fred suggested retesting Ariel. There were a range of test providers. As the government said “every market is better when it is competitive!”. A different provider might give a different result. But Gabriel was not sure if this was true. Gabriel had heard that nowadays the different test providers were just different brands. The test was the test. That made it both effective and efficient.
Perhaps there was another way. If there was a new family member whose test result was green then that could bring down the score for the other family members.
You could pay people to manipulate the DNA of an unborn baby. It was said that this DNA manipulation would generate a better test result, with only a small chance of harming the baby. You could even improve other things at the same time – perhaps a bit more height and better hair.
The system was based on data. Data came from humans – whether it be the baby humans, the humans who created the tests, the humans running the cameras and maps, or the humans who manipulated DNA to manipulate the tests. To break the system humans could feed it false data. But there was a chance of harm to a baby.
The virus was under control
It was complicated trying to live a life under the system. But Fred knew the virus was under control.
The last outbreak had been when Fred was still a child, fifteen long years ago. Despite that, the system still tested and monitored for the virus. There were many organisations working to make sure the system worked as well as it did. Gabriel worked for one. The job put food on the family’s table.
Those organisations were spreading to more and more countries around the world. They exported the system to the world and, in return, brought taxes and jobs back to their home country.
The system had been built for a purpose, reducing the spread of the virus, but the system had proved useful for lots more things. The virus evolved so the test needed to evolve too. The scores of red, amber and green sounded very simple but outside of a small group of people no one really knew how the scores were calculated.
Fred and Gabriel stared at the system.
They started talking to other people who wanted something different. To begin with they might only be able to meet in little groups but that would change. They could make the maps more beautiful with more colours. Lots of new colours catching light everywhere.
The UK is having a general election on December 12th. Over the next week political parties will put out their manifestos. Those manifestos will contain lots of commitments about what the parties will do if they are elected.
First, a bit of context. Technology is always changing but it has changed a lot in the last few decades with the proliferation of computers, the internet, the web, and data. These technologies have changed things for governments.
Some citizens now have higher expectations from public services. They expect public services to behave like those they get from Google, Amazon or whichever service is hot this year, *checks notes*, such as ByteDance’s TikTok. Technology is enabling things that some may think should be public services — like accurate mapping data on smartphones, or being able to have a video call with a doctor.
Using new technology to help deliver public services that work for everyone is a tough job that, despite good work by the Government Digital Service, government still has not cracked.
New technology has also enabled new businesses, markets and types of services to emerge. Things like smartphones, social media, cloud computing, online retailers, online advertising, and the “sharing economy”. The world is now more interconnected. Someone in Wales can rapidly build an online service and start selling it to people in India, and vice versa. Meanwhile because the technologies have also been adopted by existing companies they affect government’s role in existing markets.
Technological waves of change like this are not new — I recommend reading some history about the after-effects of the invention of ocean sailing, printing, electricity, or television — but governments have been particularly slow to adapt to this wave of technological change.
Why? Perhaps because the technologies have changed things globally. Perhaps because of the type of governments that we have had. Perhaps because of lobbying by businesses. Who knows. Future historians will be better placed to assess this.
Anyway, my suggestions are not about the details of each of these areas. Instead they are about how to increase the rate of adaptation for the next government. About how to get more radical change.
Political parties should start with themselves. They need to be open about how they are using data and online advertising and publish data about their candidates to help voters make more informed decisions. Political parties should not use micro-targeted advertising during the election, and should challenge their opposition to follow their lead. Where necessary they should err on the side of caution when using advertising tools. After all, much targeted advertising is already likely to be illegal under existing legislation. Doing these things will help politicians learn how to responsibly use technology while competing for power. That will help them use technology responsibly if they get in to power.
Whoever gets into power should then ban targeted political advertising until it is shown to be reasonably safe. To understand the effects researchers will need access to data held by the big technology platforms like Facebook, Twitter, Google and Apple. Organisations in the USA have faced challenges when trying to do this with Facebook but approaches like the ONS ‘five safes’ and the Ministry of Justice data lab show that parts of the public sector have the necessary skills to design ways to do it. Government should use models like this to give accredited researchers access to data held by the platforms to inform future policy decisions and, perhaps, when to relax the ban for certain kinds of ads.
Develop technology literacy in more of the public sector
Unfortunately too many people still do not get it. In my own meetings with governments I am often surprised, and sometimes horrified, by whole teams of people with limited technology literacy making significant decisions about technology. (Similarly, I am often surprised, and sometimes horrified, by teams of technology experts making significant decisions that impact on policy or operations with no real experience in those areas.)
Not every public sector worker needs to be a technology expert, and it is certainly not true that everyone needs to know how to code, but it is necessary to have technology literacy in many more parts of government. More public sector workers need to understand both the benefits and the limitations of new technology and the techniques that people, like me, use to build it.
This is one of the most important things to focus on. Different skills are needed by different roles, but an underlying element of technology literacy is useful for everyone.
To start providing this technology literacy I would recommend vocally demonstrating that technology experience is as valued as other skill sets, encouraging more technology experts to join teams that lack that experience, and seconding non-technology staff into technology teams. In both cases people can then listen to and learn from each other.
An independent inquiry into technology regulation
Finally, regulation. Technological change needs changes to regulators and can lead to the need for new ones. There are a growing number of known gaps in technology regulation. Some of these gaps affect public services, like the police. Others affect public spaces, like facial recognition. Some affect new services like social media. Others existing ones, like insurance. In some cases it is not clear if regulators are appropriately enforcing existing rules, like equalities and data protection legislation, while there will be a large number of gaps that people simply haven’t spotted yet.
Previous governments have set in motion various initiatives such as considering the need for a new social media regulator, a national data strategy, and a Centre for Data Ethics and Innovation (CDEI), but these initiatives are not adequate. They are controlled and appointed by the current politicians, operate within current civil service structures, and are mostly taking place in London. The changes brought about by technology are too fundamental for this approach to work. The UK needs something more strategic, more radical, more independent, and more citizen-facing.
An independent inquiry into technology regulation should be set up. It should have representatives from around the UK; with different political views; with experience from the public sector, private sector and civil society; and from both citizens that love modern technology and from the groups that are most at risk of discrimination. It should look across the whole technology landscape, have the power to call witnesses, and be empowered to make a series of recommendations for changes to legislation and regulation to help set the UK on a better path for the next decade.
Inquiries like this can happen faster than you think. The recent German Data Ethics Commission took just 12 months to come up with a set of excellent recommendations. Setting a similar timescale for an inquiry in the UK will allow the next Parliament and the next Government to focus on delivery.
It is necessary and possible for the UK to adapt to technology faster
Politicians and their teams can learn how to use technology more responsibly by tackling the fear around technology and politics; mixing up teams in the public sector can help staff learn from each other; and an independent inquiry into technology regulation can help set the UK on a better path to the future.
The UK needs to adapt to technology faster. For the good of everyone in the UK, but particularly those who are being disadvantaged by irresponsible use of technology, can we do it? Please?
Hi, I’m Peter. I currently work at the ODI (Open Data Institute) where I am Director of Public Policy. I will start with my usual warning, particularly for an audience where English is not the first language. Sometimes I speak too quietly and too fast and I often make bad jokes and obscure references. I’m bad like that. This is my last public talk for the ODI so I am even more likely to do that than normal. Please tell me off if you cannot follow what I am saying. I will stop and get better.
About the ODI and about me
The ODI is a not-for-profit that works with businesses and governments to help build an open and trustworthy ecosystem. The ODI believes in a world where data works for everyone. As simple to describe, and as hard to achieve, as that.
In that world data improves the lives of every person, not necessarily every business or every government. Some businesses and governments are deliberately building new monopolies or causing harm to people. Sometimes it is not possible to fix that behaviour by working with organisations, instead it needs other ways to change behaviour. I will talk about those later.
At the ODI I have been heading up the public policy function — I’ve been responsible for the ODI’s views on the role of data in our societies.
I am a technologist by background and I somehow stumbled into the world of public policy a few years ago. One of the things I have been focussed on in that time is making sure that public policy is informed by and tested in practical research and delivery (and vice versa, that delivery work aligns with policy thinking). Data, technology and people are always changing. A strong link between practice and policy helps make stuff useful.
I am here to talk about practical data ethics. I would like to start by talking about how we create value from data; why we need to change the behaviour of people and organisations that collect, share and use data; and finally to talk about some possible interventions to change behaviour — including practical data ethics.
Creating value from data
Value is created from data when people make decisions.
To maximise the decisions that can be made we need to create tools that meet the needs of different decision makers — for example a mapping app to help me find the building that we are in today, a bit of sales and customer analysis to help a business decide whether to invest in a new product, or a research project to help a government decide whether and where to build a new road.
To create this range of tools we need to make data as open as possible.
This needs stewards — the people who decide who can get access to data — to make it accessible in ways that creators can use. There are a number of reasons why they might do this but it is (hopefully!) always driven by the need to use the data to tackle a problem by making a decision.
The problems with data
Unfortunately there has been a rush to collect data, open up data, share data, or make more decisions using data without thinking about whether or not we should.
This is an ethics event so I am going to start by talking about harms. Rather than organisations making data work for people, they make it work against them.
Harm to groups of people is not always caused by personal data. The excellent book Group Privacy contains many examples. One that sticks in my head is from the South Sudanese Civil War. The Harvard Humanitarian Initiative published analysis created from satellite imagery to help people find and get aid to refugees. Unfortunately terrible human beings used the same analysis to find and attack those same refugees. The tools that the team had available had helped them think about mitigating the risk to individuals from the release of personal data, but not the threats to groups of people created by non-personal data.
And as a final example there has been damage to our democracies. The use of data in political advertising, to spread misinformation, or most famously in the Facebook/Cambridge Analytica debacle. Personally I do not think that the data collected by Cambridge Analytica had much effect, I reckon they sold snake oil, but the fear of it having had an effect is damage in and of itself.
Left unchecked these harms will lead us to a data wasteland where organisations do not collect or use data, people withdraw consent and give misleading data, and as a result we will get poor conclusions when we try to make decisions based on data. It reduces the social and economic value that data could create.
But there is another type of harm. Where people and organisations collect data but use it only for their own purposes. They don’t make data work for everyone. They just make it work for themselves.
This is data hoarding. It is the attitude that “data is oil and I must control it”. Data is collected and used within a single organisation for too narrow a purpose.
A simple example comes from Google. In recent years Google have encouraged people to crowdsource data about wheelchair accessibility in cities so that it is easier for people in wheelchairs to move around. But the data is only available in Google Maps. The people who contributed the data would surely have wanted it made more widely available so that people in wheelchairs who used Apple Maps could find their way around, or that the data was made available to civil society and city authorities who might have been able to use it to improve wheelchair accessibility in cities. Instead the data is hoarded by Google to create a competitive advantage and bring in more customers.
There are vast amounts of data locked up in data monopolies like Google, Facebook, Apple, and legacy organisations like big multinational corporates or national mapping agencies.
This leads to lost opportunities for innovation. Innovation that might have created better outcomes for people. As a result lots of people are looking at data as a competition issue at the moment.
The challenge is finding a path between the data wasteland and data hoarding. If we make data too open and available then it causes harm, if we do not make it open enough then we lose benefits and concentrate power in monopolies.
In doing that we need to recognise that different societies will make different decisions about data. Just like they make different decisions about other forms of infrastructure. People’s needs and social norms vary.
As long as we stay within democratic norms and respect fundamental human rights then we should accept those differences. Many of my examples today are from high-income countries but personally I am excited to see what new futures emerge from the rest of the world. That would be a different talk though.
Anyway, moving to a better data future will require constant monitoring and intervening by a range of people and organisations. The ODI is one of the organisations doing that monitoring and intervening. The strategy for how and when we do it is on the website.
It is essential to think about the ecosystem around data and to think about multiple points of intervention. To create a world where data works for everyone many forms of intervention are needed. I am going to touch on some before getting to practical data ethics.
Many people start by thinking that better choices by citizens and consumers can change the world. Consumer power is the answer. Consumers will pick services from organisations that cause less harm and create more benefits.
Many people say that consumers are happy with the current situation — why else would they be using these organisations and services? Unfortunately work in the US by the academics Nora A Draper and Joseph Turow on digital resignation and the trade-off fallacy, and our own recent piece of work on how people in the UK feel about data about us, shows that most people do care and want a different future but that they feel unable to get there.
One of the things that is lacking is choice for consumers. The previously mentioned work on digital competition, and things like interoperability and data portability, will help but it will take time. It is not going to reduce some of the harms we can all see right now.
Regulators can intervene. In the UK the Open Banking movement designed a framework which was adopted by the UK’s banking regulator. It tackled competition issues, by giving bank customers more control over data about them, and had measures to protect against harms. Rather than open banking being solely down to consumer choice a regulator approves who bank customers can share data with. I helped a bit both with the framework and the persuasion to get it adopted. The process has taken at least four years and is just starting to see changes that benefit people.
Another necessary point of intervention is legislation. This is essential and can radically change the behaviour of businesses and governments. But again legislation takes time. That is a feature, not a bug, of democracy. Democracy comes with debate and compromise. GDPR took six years from the first legislative proposal until it came into force.
For more immediate change there is existing legislation that could be used — for example anti-discrimination legislation and worker’s rights — but that legislation is likely to need updating as, like any legislation, we will learn that there are gaps and changes to be made.
But these new institutions are in a research and development stage. We have to be realistic that it will take more time to determine if they are useful, where they are useful, and how to build and regulate them.
Practical data ethics
There are many other possible points of intervention but one important and often overlooked one is the people within the organisations that collect, share and use data. Which brings me (finally!) to practical data ethics.
In the USA there have been growing protests by tech workers against the decisions made by their employers; in the UK, research by DotEveryone found that “significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society.” Meanwhile consumers and citizens are saying that they do care and do want more ethical technology, and organisations respond to that. The need to retain both workers and customers creates a need to change.
We should never forget that, as my friend Ellen Broad put it in her book, decisions are made by humans. Humans decide to fund or stop projects and to buy technology; they make design and development decisions, and they decide whether and how to evaluate the outcomes.
These decisions are influenced by consumers, governments and regulators but they are also influenced by other things such as professional codes, training courses and organisational methodologies.
This does not mean that principles are useless, within an organisation they can demonstrate values and help create space for challenge, but we need to look at other techniques to make them more useful at the systemic level where the ODI is looking to intervene.
When Ellen Broad and Amanda Smith looked at this for the ODI a few years ago, they concluded that the most useful thing for the ODI to do was something a bit more practical and a bit more like the tools that people already use.
In the two years since then various other people — like Fiona, Anna and Caley — have worked with me to iterate it and helped turn it into what you can see today. Not all of those people work for the ODI. We have been iterating it based on feedback from our own users and audience too.
The canvas does not give easy answers; it asks questions. It encourages people to take responsibility for coming up with their own answers in their own contexts. The questions are inspired by the problems we and others see.
It prompts people to think about their existing ethical and legislative context — perhaps they are already covered by health ethics or anti-discrimination legislation, or one of the many sets of AI and data ethics principles — and the limitations of data.
The canvas prompts people to think of both possible positive and negative effects, but it encourages them to think more deeply about which groups of people win and lose.
The canvas is designed to be used by multi-disciplinary teams of people, not just individuals. We have seen it used by groups including lawyers, developers, programme managers, user researchers, policy analysts, designers and product managers. It encourages people in organisations to create space and time for debate, and then to make and act on decisions.
The canvas also encourages transparency and openness. That way people outside an organisation can see how it plans to use data, what benefits and risks are expected, and what mitigation plans are in place. It encourages people in organisations to listen to people who they might affect.
But is it having any effect?
I have used it in public training, private workshops and conversations with a range of organisations. I have seen it broaden people’s minds about the range of ethical issues that they should consider before making a decision. I have seen senior people in organisations try it in a few projects then go on to implement it in their standard project governance.
I have also seen individuals sneak it into a few projects within a large organisation with the goal of proving its value before talking more with their bosses. You normally don’t need permission to try a new methodology. Give it a go in your own organisations.
It is hard to track usage of something that is openly published on the web but I know from our own research and surveys that hundreds of people in public, private and third sector organisations at local, national, and global levels are using it because of that decision to make it openly available.
Those people work in multiple sectors like academia, civil society, public service, health, finance, engineering. Some are in large corporates, some in small startups. People tell me that some organisations have stopped projects because of questions raised by the canvas. Others say that they have redesigned products and projects. Brilliant. It is causing some decisions to be made.
I can only share those stories vaguely, because I respect the confidence and privacy of those people.
One organisation, the UK Cooperative Group, have talked most about their use of the canvas. It forms part of their standard product development model. Because the canvas has an open licence they could adapt it to suit their own needs. Perfect. I hope some of the many, many others will share their stories too. I think it will be less scary than they might think.
I am always wary of over-confidence. At a place like the ODI we get listened to and the canvas could actually be making things worse. Is the effect overall positive and how big is it? Only time and more detailed evaluation will tell. But from my own checks I am reasonably confident that it is helping.
Obviously this approach to practical data ethics is only one type of intervention. Accountability — through organisational processes, professional codes, regulation and legislation — is still very much needed. But practical data ethics can create some practical change now. If we can get people to be more open with their tales it should also inform policymakers on where the biggest problems are and what regulation and legislation is needed.
Building a better future for people with data will take quite a while. There are some obvious problems, some of which have obvious answers, but there are also less obvious problems and no easy answers for all of them. We all have to keep monitoring and intervening at multiple points in the system.
We need to stay optimistic and believe that it is possible. I believe being optimistic is a political act that makes it more possible that we will build a world where data works for everyone.
Anyway, I have rambled on too long. It is time for less talking from stage and more talking with each other. Grab me if you want to chat or email me on email@example.com if you do not get a chance.
Following the story back to the original Washington Post article the idea seems to be that a $40m-$60m research project would encourage individuals to consent to the use of personal data by a new research organisation called HARPA. The personal data would come from a range of sources, including Apple Watches, Fitbits, Amazon Echo and Google Home. At HARPA a team would analyse the data to come up with a model that would “identify risk factors when it comes to mental health that could indicate violent behavior”. The story says that HARPA will need “real-time data analytics” to stop the mass shootings.
Why is it terrible?
Here are just a few of the reasons why it is a terrible idea:
the project will not generate a good model. As Emma Fridel is quoted as saying in the Gizmodo article “literally any risk factor identified for mass shooters will result in millions of false positives”. Improving the model will require the collection of ever more data about ever more people (from x% accurate, to x+1% accurate, to x+1.1% accurate, etcetera, while people’s behaviour continues to change). Even then it will inevitably face what Julia Powles and Helen Nissenbaum call the seductive diversion of solving bias
the consent model is naive. Individual consent is a model that is already being challenged on the grounds of both whether individuals can ever make truly informed decisions given the growing number of use cases where data is used, and how the decisions of individuals impact on the rights of groups of people. For example, data sources like Amazon Echo and Google Home do not only collect data on the single individual who controls the account for the device but also on every individual who goes into the physical place where these devices are collecting data
to deliver “real-time data analytics” will require data about the behaviour of individuals to be collected on a massive scale. Will every individual with suspected mental health issues have data about them captured and analysed? How do we identify that group of people? Perhaps just capture the data for every person in the USA?
even with the best will and capability in the world this massive collection, sharing and use of data will create a whole host of risks and unintended consequences whether it happens in a liberal democracy or an authoritarian state
even if an organisation could make this project work in a safe way then I would not support such mass surveillance based on my own moral values and fears of how people and societies will react to feeling like they are constantly being watched, it leads us to the data wasteland or worse
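The base-rate problem behind the first point can be sketched with a few lines of arithmetic. The numbers below are hypothetical, chosen only to illustrate why predicting very rare events flags enormous numbers of innocent people even when a model sounds highly “accurate”:

```python
# Illustrative base-rate arithmetic (hypothetical numbers, not from the article).
# When the behaviour being predicted is extremely rare, even a 99%-accurate
# model mislabels millions of people.

population = 250_000_000      # rough number of adults being screened
actual_cases = 10             # actual would-be attackers in a given year (tiny)
sensitivity = 0.99            # model flags 99% of true cases
specificity = 0.99            # model correctly clears 99% of non-cases

caught = actual_cases * sensitivity
false_positives = (population - actual_cases) * (1 - specificity)

print(f"correctly flagged: {caught:.0f}")
print(f"falsely flagged:   {false_positives:,.0f}")  # roughly 2.5 million people
```

Under these assumptions the model flags around 2.5 million innocent people to catch a handful of real cases, which is the point Fridel is making: no realistic improvement in accuracy escapes the arithmetic when the base rate is this low.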
This organisation was founded by Bob Wright after his wife, Suzanne, died from pancreatic cancer. One of the main goals of the organisation is to create HARPA, which would be based on the defense research and innovation agency, DARPA, but instead focus on public health issues.
It is clear Bob Wright and the foundation are well-connected in Washington and savvy enough to connect research proposals to political topics, like the mass shootings that are sadly so prevalent in the USA. There are tales that Bob Wright and the USA President, Donald Trump, know each other personally.
Unfortunately Donald Trump has the dangerous mix of embracing conflicts of interest, latching onto ideas for political gain, and wielding a lot of power.
That this particular HARPA proposal is not published openly, either by the USA government or by the Suzanne Wright Foundation, reflects badly on both of them. It makes it hard to scrutinise. It is hard to tell if these organisations really think that this idea is useful, or if they are simply using it for short-term political gains, to head off the risk of measures to reduce access to guns and bullets, or simply to create momentum for the creation of HARPA. But, it is clear that some people are concerned enough about this proposal to leak it to the press.
That should worry both the people and the organisations who might be harmed by such a terrible idea. This type of mass collection of data might seem fanciful in many countries but the USA is already seeing Amazon’s Ring service encouraging people to share data from security cameras looking out from their homes with organisations like the police.
Neither Amazon Ring’s data sharing nor the Suzanne Wright Foundation’s research plan for mass shootings is likely to be effective in reducing crime, but both will be effective at wasting money and risking unintended consequences and harm for many people. This is a shame, as a government agency with both policy and delivery capability, focussed on working out how to improve public health using modern technology, techniques and data, could actually be useful.
If we want to enjoy the benefits of modern technology then the real challenge is how do we stop such terrible ideas much earlier, and well before they become horrible, horrible reality.
17% of the UK’s population — about 8 million adults — would struggle in a cashless society. To meet the needs of everyone it is essential both that public services give people the opportunity to pay in cash and that government help private and third sector services to take cash payments. Government can play a role in helping make this happen by broadening the scope of its payments platform and team, GOV.UK Pay, to support cash.
The need for access to cash
Cash use has declined in recent years. It has become ever easier for most of us to buy things using other methods — for example credit and debit cards, direct debits, or online payment services like PayPal or Apple Pay.
In 2017 direct debits overtook cash as a form of payment. These other payment methods are more convenient both for the people making payments and for the people taking them — there is no pesky cash to count and send to the bank at the end of the day. Some call for a rapid transition to a “cashless society” in which cash would not be used at all. Left unchecked, it seems likely that it will become ever harder to use cash, as shops, buses, taxis, pubs and even public services favour these new payment methods to save money.
The Ceeney Review found that 17% of the UK’s population — about 8 million adults — would struggle in a cashless society. The reasons are complex: lack of access to the internet (particularly in rural areas), people without bank accounts, physical and mental health, financial difficulties, and fear that the computers that run the other payment methods will break.
The review found that 51% of consumers felt it would be a good idea to change the law so that all shops and services had to accept cash.
The factor with the strongest correlation to use of cash was not old age, but poverty. While a cashless society might be convenient for many it would be a struggle for some of the most impoverished people in our society.
Meanwhile the public also had a range of concerns about a cashless society including the needs of those who have to use cash but also including other concerns like a loss of privacy and the loss of the ability to choose how to pay.
That does not mean that a cashless society is necessarily the wrong vision for the future, it means that any transition needs to happen over a period of time, that governments need to provide support for those impacted, and that in the intervening period we need to preserve access to cash.
“we recommend that essential government services and monopoly and utility services should be required, through their regulators, to ensure that consumers wishing to pay by cash can do so, either directly or through a partner”
GOV.UK Pay is a better experience for many citizens, and for the people building public services that need to take payments, but it does not handle cash. It continues the same trend we see in the private sector: making it easier to handle online payments while neglecting the needs of people who need to pay in cash.
It provides no direct benefit for the millions of people who can’t (or won’t) use either online payments or online services. As well as the Ceeney Review’s finding of 8 million adults who would struggle in a cashless society, the Financial Conduct Authority reports that there are 1.3 million UK adults without a bank account. Unless a friend helps, they have no way to pay money to a public service that cannot take cash.
“but not with users who typically rely on cash payments”
That was a bit silly. You need to pick an appropriate audience to test with.
Other government services (DWP and the Insolvency Service) also did usability research with the actual target audience. Those services felt cash was a valuable payment option.
Unfortunately cash and the needs of the people who use it were not prioritised. Back in 2015 the focus was on online payments and the people who use them.
Government has a strong moral, and often a legal, responsibility to make public services work for everyone. GDS have always said that they want to benefit everyone and have an emphasis on accessibility. The Ceeney review, and government’s positive response to it, provide good reasons to revisit the strategy for GOV.UK Pay.
Broadening the scope of GOV.UK Pay to support cash
To deliver on its commitment to safeguard the future of cash government will have to make a range of interventions. Some of those will include making sure that the public sector can handle cash payments. I expect that the current GOV.UK Pay team will be able to provide a lot of help in meeting that objective while still delivering on their historic focus of better online payments.
But the benefits will not only be felt by people using public services. By making it easier for people to pay government in cash government can start to change the cash payments system for the better.
Perhaps government’s payment experts will discover that, to keep good coverage of places to pay in cash:
the government will need to make it easier to pay for any public service in the local authority offices that are in town centres across the country
they should provide support to make it easier for shops to offer cash payment services like Paypoint
they can develop and share good practice for how to handle cash payments
there are ways to share good practice across the organisations that process cash payments to help make face-to-face payment services better
or the many, many other things that will emerge from some good, open-minded research into the needs of people who use cash
These things will benefit people paying cash to private and third sector organisations too. That is good. Government’s responsibility goes beyond what we traditionally think of as public services that need payment — things like paying our council tax, buying a fishing licence, paying for a car parking space, or getting a passport.
Governments have a responsibility to the whole of society. Governments should be investing in public goods that benefit everyone. Access to cash will make it easier for more people to buy food, travel around and enjoy their lives. Government should make it easier for people to use new online payment methods, but it also needs to preserve access to cash for the people who need it.
Broadening the scope of GOV.UK Pay to support cash will help government do what it said it would do when it responded to the Ceeney review, and make things a little bit better for everyone.