
The virus was under control

An experiment in writing fiction about a sociotechnical system.

Fred and Gabriel knew that the virus was under control, but they were still worried. 

The DNA test on their newborn child, Ariel, had shown that Ariel might easily be infected by the virus. The result was red. Fred and Gabriel’s own DNA tests had been taken years ago, when the tests had first been invented. Fred was amber and Gabriel was green.

Those scores were ok but red spelt danger.

The tests

The test was designed to predict how likely it was that someone would catch the virus. The simple scores of red, amber and green were designed to be easily understandable. The real test results were more complex. 

Everyone was susceptible to the virus, particularly if there was a large number of infected people in a group. Scientists had found that people who were easily infected shared certain patterns in their DNA. The tests were designed to spot those patterns. It was important to know who was susceptible because people became infectious before any symptoms were visible. To stop the spread of the virus it was necessary to reduce the chance of the first infection.

Reducing the spread of the virus was a priority for everyone. When the virus had first appeared it had killed many people and caused panic in many, many more. The virus was under control but people needed to be confident that there would be no more major outbreaks. A systemic response had been required.

The maps and the rules

The system was designed to minimise the chance that people who could be easily infected with the virus could mix with each other. That would reduce the chance of a single infection rapidly spreading.

The DNA tests were part of this system. Everyone needed to be tested. The results were recorded and made available for everyone to see.

People were wary about other individuals whose results showed danger but to reduce the chance of inadvertent mixing there were maps and rules. The rules said that spaces like towns, hospitals, supermarkets, and offices could only have a maximum percentage of reds and a maximum percentage of ambers.

Anyone could look at maps that showed both the maximum and the current percentage of red, amber and green in each place. The maps helped people know if the rules were being adhered to.

Fred thought the maps were beautiful. 

The cameras

To make the maps and rules work it was necessary to know where individuals were. There was a network of cameras for this. 

The tracking cameras were originally deployed by the government’s centre for data modelling. The centre made sure the population was happy by measuring happiness and recommending ways to improve it. Their early models suffered as the data quality was poor. The solution was to collect and share higher quality data in larger volumes. The people who worked in the centre used the images of people captured by the cameras to estimate the levels of happiness in different parts of the country.

The original happiness tracking system was repurposed for the virus through a software update.

Originally the system had identified people through mobile phones, glasses and watches but people found it too easy to swap these devices with each other so the system now used other methods. Face masks had been popular when the virus first appeared but were now banned as the best way to stop an outbreak relied on identifying people. As well as faces, the system looked at other attributes like the shape of people’s bodies, how they walked, and how they gestured while they spoke.

At first it had been expensive and slow to do these checks as it required expert people recognisers. Other experts watched the people recognisers to learn enough that they could design algorithms to make the process faster. Over time the people who worked at the camera manufacturers had made it even easier by optimising the camera hardware to meet the needs of the algorithms. 

Gabriel worked in one of the organisations around the country that designed, installed, maintained and updated the cameras, and the network that connected the cameras together. The job was as important as those maintaining other bits of vital infrastructure like electricity power stations, roads, and water networks.

The images from the cameras were linked to individuals and test results. The beautiful maps updated in time with people moving around.

The people

The government had given police officers, immigration officials, nurses, teachers, landlords and employers the responsibility to make sure the rules were enforced. The tests, the rules, the maps, the cameras and the people were all part of the system.

If a maximum percentage was breached then it was someone’s fault. That person risked a fine, jail or losing their job. But if they kept the mix under control then there were rewards – perhaps a promotion or more simply praise from the people who had been kept safe. You could spot one of the responsible people by looking for people staring at a map with moving dots of green, amber and red with an occasional burst of movement to get someone out of a room before someone else entered through another door.

The rules would affect Ariel, Fred and Gabriel. They would affect where Ariel could go to school or, many years in the future, where they could get a job and who they could fall in love with. They would affect where the family could live and go on holiday. They would even affect which park to play in on which day and which other families to play with. The family would need to stare at maps too.

The rules would affect Fred and Gabriel in other ways. The system knew that they were Ariel’s parents and shared bits of their child’s DNA. If Ariel’s score was red then this might mean that Fred and Gabriel were more susceptible than their tests had originally shown.

The test results were not perfect. They were just a prediction. More data could improve the prediction. Because of Ariel’s red score the system might change Gabriel’s green to an amber, while Fred might become a red.

Breaking the system

Fred suggested retesting Ariel. There were a range of test providers. As the government said, “every market is better when it is competitive!” A different provider might give a different result. But Gabriel was not sure if this was true. Gabriel had heard that nowadays the different test providers were just different brands. The test was the test. That made it both effective and efficient.

Perhaps there was another way. If there was a new family member whose test result was green then that could bring down the score for the other family members.

You could pay people to manipulate the DNA of an unborn baby. It was said that this DNA manipulation would generate a better test result, with only a small chance of harming the baby. You could even improve other things at the same time – perhaps a bit more height and better hair. 

The system was based on data. Data came from humans – whether it be the baby humans, the humans who created the tests, the humans running the cameras and maps, or the humans who manipulated DNA to manipulate the tests. To break the system humans could feed it false data. But there was a chance of harm to a baby. 

The virus was under control

It was complicated trying to live a life under the system. But Fred knew the virus was under control.

The last outbreak had been when Fred was still a child, fifteen long years ago. Despite that, the system still tested and monitored for the virus. There were many organisations working to make sure the system worked as well as it did. Gabriel worked for one. The job put food on the family’s table.

Those organisations were spreading to more and more countries around the world. The organisations exported the system to the world and, in return, brought taxes and jobs back to their home country.

The system had been built for a purpose, reducing the spread of the virus, but the system had proved useful for lots more things. The virus evolved so the test needed to evolve too. The scores of red, amber and green sounded very simple but outside of a small group of people no one really knew how the scores were calculated. 

Fred and Gabriel stared at the system.

They started talking to other people who wanted something different. To begin with they might only be able to meet in little groups but that would change. They could make the maps more beautiful with more colours. Lots of new colours catching light everywhere.

Three policy ideas to help the UK adapt faster to the internet

The UK is having a general election on December 12th. Over the next week political parties will put out their manifestos. Those manifestos will contain lots of commitments about what the parties will do if they are elected.

When I looked at the manifestos for the last general election in 2017 I was disappointed at their lack of recognition of the changes the world was going through because of technology. To help this time, here are three simple tech policy ideas for any party. They’re focussed on helping the UK adapt to the current wave of technology change. They are a bit late for the manifestos, but they still might be useful.

A bit of context

First, a bit of context. Technology is always changing but it has changed a lot in the last few decades with the proliferation of computers, the internet, the web, and data. These technologies have changed things for governments.

Some citizens now have higher expectations from public services. They expect public services to behave like those they get from Google, Amazon or whichever service is hot this year, *checks notes*, such as ByteDance’s TikTok. Technology is enabling things that some may think should be public services — like accurate mapping data on smartphones, or being able to have a video call with a doctor.

Other citizens now have more fear. Perhaps because they are excluded from those services through a lack of skills or access to the internet, or perhaps because they are at risk of being discriminated against as technology is used to perpetuate, or accentuate, existing societal biases.

Using new technology to help deliver public services that work for everyone is a tough job that, despite good work by the Government Digital Service, government still has not cracked.

Image from For Everyone via the Web Foundation

New technology has also enabled new businesses, markets and types of services to emerge. Things like smartphones, social media, cloud computing, online retailers, online advertising, and the “sharing economy”. The world is now more interconnected. Someone in Wales can rapidly build an online service and start selling it to people in India, and vice versa. Meanwhile because the technologies have also been adopted by existing companies they affect government’s role in existing markets.

Technological waves of change like this are not new — I recommend reading some history about the after-effects of the invention of ocean sailing, printing, electricity, or television — but governments have been particularly slow to adapt to this wave of technological change.

Why? Perhaps because the technologies have changed things globally. Perhaps because of the type of governments that we have had. Perhaps because of lobbying by businesses. Who knows. Future historians will be better placed to assess this.

Anyway, my suggestions are not about the details of each of these areas. Instead they are about how to increase the rate of adaptation for the next government. About how to get more radical change.

Tackle the fear around technology and politics

There is a lot of fear about what technology means for politics. Misuse of data by companies and political organisations. Highly targeted advertising reducing accountability. Foreign governments interfering in elections. This fear is exacerbating a pre-existing low level of trust in and disengagement from UK democracy.

Political parties should start with themselves. They need to be open about how they are using data and online advertising, and publish data about their candidates to help voters make more informed decisions. Political parties should not use micro-targeted advertising during the election, and should challenge their opposition to follow their lead. Where necessary they should err on the side of caution when using advertising tools. After all, much targeted advertising is already likely to be illegal under existing legislation. Doing these things will help politicians learn how to responsibly use technology while competing for power. That will help them use technology responsibly if they get into power.

Whoever gets into power should then ban targeted political advertising until it is shown to be reasonably safe. To understand the effects researchers will need access to data held by the big technology platforms like Facebook, Twitter, Google and Apple. Organisations in the USA have faced challenges when trying to do this with Facebook but approaches like the ONS ‘five safes’ and the Ministry of Justice data lab show that parts of the public sector have the necessary skills to design ways to do it. Government should use models like this to give accredited researchers access to data held by the platforms to inform future policy decisions and, perhaps, when to relax the ban for certain kinds of ads.

Develop technology literacy in more of the public sector

To implement a party’s manifesto commitments — whether it be implementing municipal socialism, moving to a zero carbon society, (re)creating an independent Scotland, agreeing new trade deals (if Brexit actually happens), free broadband, a charter of digital rights, or implementing an industrial strategy and increasing R&D — public sector staff need to understand how technology affects their work and technology experts need to understand the public sector.

Sometimes a horrified face emerges from behind my polite face. I apologise to everyone who has seen it.

Unfortunately too many people still do not get it. In my own meetings with governments I am often surprised, and sometimes horrified, by whole teams of people with limited technology literacy making significant decisions about technology. (Similarly, I am often surprised, and sometimes horrified, by teams of technology experts making significant decisions that impact on policy or operations with no real experience in those areas.)

Not every public sector worker needs to be a technology expert, and it is certainly not true that everyone needs to know how to code, but it is necessary to have technology literacy in many more parts of government. More public sector workers need to understand both the benefits and the limitations of new technology and the techniques that people, like me, use to build it.

This is one of the most important things to focus on. Different skills are needed by different roles, but an underlying element of technology literacy is useful for everyone.

To start providing this technology literacy I would recommend vocally demonstrating that technology experience is as valued as other skill sets, encouraging more technology experts to join teams that lack that experience, and seconding non-technology staff into technology teams. In both cases people can then listen to and learn from each other.

An independent inquiry into technology regulation

Finally, regulation. Technological change needs changes to regulators and can lead to the need for new ones. There are a growing number of known gaps in technology regulation. Some of these gaps affect public services, like the police. Others affect public spaces, like facial recognition. Some affect new services like social media. Others existing ones, like insurance. In some cases it is not clear if regulators are appropriately enforcing existing rules, like equalities and data protection legislation, while there will be a large number of gaps that people simply haven’t spotted yet.

Previous governments have set in train various initiatives such as considering the need for a new social media regulator, a national data strategy, and a Centre for Data Ethics and Innovation (CDEI), but these initiatives are not adequate. They are controlled and appointed by the current politicians, operate within current civil service structures, and are mostly taking place in London. The changes brought about by technology are too fundamental for this approach to work. The UK needs something more strategic, more radical, more independent, and more citizen-facing.

An independent inquiry into technology regulation should be set up. It should have representatives from around the UK; with different political views; with experience from the public sector, private sector and civil society; and from both citizens that love modern technology and from the groups that are most at risk of discrimination. It should look across the whole technology landscape, have the power to call witnesses, and be empowered to make a series of recommendations for changes to legislation and regulation to help set the UK on a better path for the next decade.

Inquiries like this can happen faster than you think. The recent German Data Ethics Commission took just 12 months to come up with a set of excellent recommendations. Setting a similar timescale for an inquiry in the UK will allow the next Parliament and the next Government to focus on delivery.

It is necessary and possible for the UK to adapt to technology faster

Politicians and their teams can learn how to use technology more responsibly by tackling the fear around technology and politics; mixing up teams in the public sector can help staff learn from each other; and an independent inquiry into technology regulation can help set the UK on a better path to the future.

The UK needs to adapt to technology faster. For the good of everyone in the UK, but particularly those who are being disadvantaged by irresponsible use of technology, can we do it? Please?

“Practical data ethics” talk at 2019 European Data Ethics Forum

Slides from talk

Hi, I’m Peter. I currently work at the ODI (Open Data Institute) where I am Director of Public Policy. I will start with my usual warning, particularly for an audience where English is not the first language. Sometimes I speak too quietly and too fast and I often make bad jokes and obscure references. I’m bad like that. This is my last public talk for the ODI so I am even more likely to do that than normal. Please tell me off if you cannot follow what I am saying. I will stop and get better.

About the ODI and about me


The ODI is a not-for-profit that works with businesses and governments to help build an open and trustworthy data ecosystem. The ODI believes in a world where data works for everyone. As simple to describe, and as hard to achieve, as that.

In that world data improves the lives of every person, not necessarily every business or every government. Some businesses and governments are deliberately building new monopolies or causing harm to people. Sometimes it is not possible to fix that behaviour by working with organisations, instead it needs other ways to change behaviour. I will talk about those later.

At the ODI I have been heading up the public policy function — I’ve been responsible for the ODI’s views on the role of data in our societies.

I am a technologist by background and I somehow stumbled into the world of public policy a few years ago. One of the things I have been focussed on in that time is making sure that public policy is informed by and tested in practical research and delivery (and vice versa, that delivery work aligns with policy thinking). Data, technology and people are always changing. A strong link between practice and policy helps make stuff useful.

I am here to talk about practical data ethics. I would like to start by talking about how we create value from data; why we need to change the behaviour of people and organisations that collect, share and use data; and finally to talk about some possible interventions to change behaviour — including practical data ethics.

Creating value from data

Value is created from data when people make decisions.


To maximise the decisions that can be made we need to create tools that meet the needs of different decision makers — for example a mapping app to help me find the building that we are in today, a bit of sales and customer analysis to help a business decide whether to invest in a new product, or a research project to help a government decide whether and where to build a new road.

To create this range of tools we need to make data as open as possible.

This needs stewards — the people who decide who can get access to data — to make it accessible in ways that the people creating those tools can use. There are a number of reasons why they might do this but it is (hopefully!) always driven by the need to use the data to tackle a problem by making a decision.

The problems with data

Unfortunately there has been a rush to collect data, open up data, share data, or make more decisions using data without thinking about whether or not we should.

Go, Gromit, go! https://www.youtube.com/watch?v=fwJHNw9jU_U

This is an ethics event so I am going to start by talking about harms. Rather than organisations making data work for people, they make it work against them.

Harm to individuals. People in the USA have been sent to prison based on decisions by a judge influenced by algorithms that could not be inspected or challenged. The algorithm was meant to reduce human mistakes and bias. Subsequent research has shown that the algorithm was “no more accurate or fair than predictions made by people with little or no criminal justice expertise”. It was probably less accurate than the person it replaced. It certainly wasn’t as accountable.

Harm to groups of people. These are often groups of people that are already disadvantaged.

The UK Government launched a new online service to check passport photos. It did this knowing that the service was more likely to fail to work for people with darker skin. To put it another way, the service was known to work better for white people than black people. Is that ethical? Should it be legal?

Meanwhile when the UK Government transferred the EU’s General Data Protection Regulation (GDPR) into UK legislation it put in place an exemption that reduced the protections in cases where government was using the data to enforce immigration controls. This follows the recent Windrush deportation scandal, which was partly built on unrealistic expectations of data availability and quality, and happened during the ongoing Brexit negotiations which could lead to 3 million EU citizens being at risk of deportation from the UK. A recent court case found that the data protection exemption was legal. But was it ethical?

Harm to groups of people is not always caused by personal data. The excellent book Group Privacy contains many examples. One that sticks in my head is from the South Sudanese Civil War. The Harvard Humanitarian Initiative published analysis created from satellite imagery to help people find and get aid to refugees. Unfortunately terrible human beings used the same analysis to find and attack those same refugees. The tools that the team had available had helped them think about mitigating the risk to individuals from the release of personal data, but not the threats to groups of people created by non-personal data.

And as a final example there has been damage to our democracies. The use of data in political advertising, to spread misinformation, or most famously in the Facebook/Cambridge Analytica debacle. Personally I do not think that the data collected by Cambridge Analytica had much effect, I reckon they sold snake oil, but the fear of it having had an effect is damage in and of itself.


Left unchecked these harms will lead us to a data wasteland where organisations do not collect or use data, people withdraw consent and give misleading data, and as a result we will get poor conclusions when we try to make decisions based on data. It reduces the social and economic value that data could create.

But there is another type of harm. Where people and organisations collect data but use it only for their own purposes. They don’t make data work for everyone. They just make it work for themselves.

This is data hoarding. It is the attitude that “data is oil and I must control it”. Data is collected and used within a single organisation for too narrow a purpose.

A simple example comes from Google. In recent years Google have encouraged people to crowdsource data about wheelchair accessibility in cities so that it is easier for people in wheelchairs to move around. But the data is only available in Google Maps. The people who contributed the data would surely have wanted it made more widely available so that people in wheelchairs who used Apple Maps could find their way around, or that the data was made available to civil society and city authorities who might have been able to use it to improve wheelchair accessibility in cities. Instead the data is hoarded by Google to create a competitive advantage and bring in more customers.


There are vast amounts of data locked up in data monopolies like Google, Facebook, Apple, and legacy organisations like big multinational corporates or national mapping agencies.

This leads to lost opportunities for innovation. Innovation that might have created better outcomes for people. As a result lots of people are looking at data as a competition issue at the moment.

It also leads to lost opportunities for understanding and tackling major societal challenges like understanding the impact of the internet and web on our democracies, how to cope with ageing populations or increasing urbanisation, or how to prevent or reduce the impact of climate change. We need to be careful of vital data infrastructure becoming over-reliant on private sector firms, and of the excessive data collection caused by some business models, but just imagine the data held by governments and businesses that could be made safely available to help with these problems.

The challenge is finding a path between the data wasteland and data hoarding. If we make data too open and available then it causes harm; if we do not make it open enough then we lose benefits and concentrate power in monopolies.


We need to move from a world where people are rushing to collect, share and use data to one where societies have more strategic decision making about data. Where data is as well maintained and useful as other forms of infrastructure like road, rail and energy. Where there is better legislation, rules, guidelines, and professionalism.

In doing that we need to recognise that different societies will make different decisions about data. Just like they make different decisions about other forms of infrastructure. People’s needs and social norms vary.

As long as we stay within democratic norms and respect fundamental human rights then we should accept those differences. Many of my examples today are from high-income countries but personally I am excited to see what new futures emerge from the rest of the world. That would be a different talk though.

Anyway, moving to a better data future will require constant monitoring and intervening by a range of people and organisations. The ODI is one of the organisations doing that monitoring and intervening. The strategy for how and when we do it is on the website.

Possible interventions

It is essential to think about the ecosystem around data and to think about multiple points of intervention. To create a world where data works for everyone many forms of intervention are needed. I am going to touch on some before getting to practical data ethics.

Many people start by thinking that better choices by citizens and consumers can change the world. Consumer power is the answer. Consumers will pick services from organisations that cause less harm and create more benefits.

Many people say that consumers are happy with the current situation — why else would they be using these organisations and services? Unfortunately work in the US by the academics Nora A Draper and Joseph Turow on digital resignation and the trade-off fallacy, and our own recent piece of work on how people in the UK feel about data about us, show that most people do care and want a different future but that they feel unable to get there.

One of the things that is lacking is choice for consumers. The previously mentioned work on digital competition, and things like interoperability and data portability, will help but it will take time. It is not going to reduce some of the harms we can all see right now.

Regulators can intervene. In the UK the Open Banking movement designed a framework which was adopted by the UK’s banking regulator. It tackled competition issues, by giving bank customers more control over data about them, and had measures to protect against harms. Rather than open banking being solely down to consumer choice a regulator approves who bank customers can share data with. I helped a bit both with the framework and the persuasion to get it adopted. The process has taken at least four years and is just starting to see changes that benefit people.

Another necessary point of intervention is legislation. This is essential and can radically change the behaviour of businesses and governments. But again legislation takes time. That is a feature, not a bug, of democracy. Democracy comes with debate and compromise. GDPR took six years from the first legislative proposal until it came into force.


For more immediate change there is existing legislation that could be used — for example anti-discrimination legislation and workers’ rights — but that legislation is likely to need updating as, like any legislation, we will learn that there are gaps and changes to be made.

More recently people are proposing the creation of new institutions, like data cooperatives and data trusts, to create more collective power and collaborative decision making. I led a team doing both policy thinking and practical experimentation on these institutions. There are other people doing similar work in other countries.

But these new institutions are in a research and development stage. We have to be realistic that it will take more time to determine if they are useful, where they are useful, and how to build and regulate them.

Practical data ethics

There are many other possible points of intervention but one important and often overlooked one is the people within the organisations that collect, share and use data. Which brings me (finally!) to practical data ethics.

In the USA there have been growing protests by tech workers against the decisions made by their employers, and in the UK research by DotEveryone found that “significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society.” Meanwhile consumers and citizens are saying that they do care and do want more ethical technology, and organisations respond to that. The need to retain both workers and customers creates a need to change.


We should never forget that, as my friend Ellen Broad put it in her book, decisions are made by humans. Humans decide to fund or stop projects and to buy technology; they make design and development decisions; and they decide whether and how to evaluate the outcomes.

These decisions are influenced by consumers, governments and regulators but they are also influenced by other things such as professional codes, training courses and organisational methodologies.

Many people think the best way to intervene here is to define ethical principles. But when we look out at the world we can see that many, many principles have been created in the last few years. Are they having any impact? A recent study into the US Association for Computing Machinery’s code said no. Meanwhile, how will people know which principles to apply in a given organisation, sector, or country? Who gets to define the principles? What right do they have? Who holds people accountable to them?

This does not mean that principles are useless; within an organisation they can demonstrate values and help create space for challenge. But we need to look at other techniques to make them more useful at the systemic level where the ODI is looking to intervene.

Ellen Broad and Amanda Smith looked at this for the ODI a few years ago. They came to the conclusion that the most useful thing for the ODI to do was something a bit more practical and a bit more like the tools that people already use.

So, inspired by the business model canvas, we worked together to create a Data Ethics Canvas.


In the two years since then various other people — like Fiona, Anna and Caley — have worked with me to iterate it and helped turn it into what you can see today. Not all of those people work for the ODI. We have been iterating it based on feedback from our own users and audience too.

The canvas does not give easy answers; it asks questions. It encourages people to take responsibility for coming up with their own answers in their own contexts. The questions are inspired by the problems we and others see.

the black and white pictures are from the print-at-home canvas

It prompts people to think about their existing ethical and legislative context — perhaps they are already covered by health ethics or anti-discrimination legislation, or one of the many sets of AI and data ethics principles — and the limitations of data.

(By the way some principles, like those created by dataethics.eu, ask questions too)

The canvas prompts people to think of both possible positive and negative effects, but it encourages them to think more deeply about which groups of people win and lose.

The canvas is designed to be used by multi-disciplinary teams of people, not just individuals. We have seen it used by groups including lawyers, developers, programme managers, user researchers, policy analysts, designers and product managers. It encourages people in organisations to create space and time for debate, and then to make and act on decisions.

The canvas also encourages transparency and openness. That way people outside an organisation can see how it plans to use data, what benefits and risks are expected, and what mitigation plans are in place. It encourages people in organisations to listen to people who they might affect.

But is it having any effect?

I have used it in public training, private workshops and conversations with a range of organisations. I have seen it broaden people’s minds about the range of ethical issues that they should consider before making a decision. I have seen senior people in organisations try it in a few projects then go on to implement it in their standard project governance.

I have also seen individuals sneak it into a few projects within a large organisation with the goal of proving its value before talking more with their bosses. You normally don’t need permission to try a new methodology. Give it a go in your own organisations.

As well as providing paid training we openly publish a print-at-home version of the canvas and a detailed user guide on the ODI’s website so that anyone can use or remix it. The open licence on the canvas means that anyone has the ODI’s permission to do that.

It is hard to track usage of something that is openly published on the web but I know from our own research and surveys that hundreds of people in public, private and third sector organisations at local, national, and global levels are using it because of that decision to make it openly available.


Those people work in multiple sectors like academia, civil society, public service, health, finance and engineering. Some are in large corporates, some in small startups. People tell me that some organisations have stopped projects because of questions raised by the canvas. Others say that they have redesigned products and projects. Brilliant. It is causing some decisions to be made.

I can only share those stories vaguely, because I respect the confidence and privacy of those people.

One organisation, the UK Cooperative Group, have talked most about their use of the canvas. It forms part of their standard product development model. Because the canvas has an open licence they could adapt it to suit their own needs. Perfect. I hope some of the many, many others will share their stories too. I think it will be less scary than they might think.

I am always wary of over-confidence. At a place like the ODI we get listened to and the canvas could actually be making things worse. Is the effect overall positive and how big is it? Only time and more detailed evaluation will tell. But from my own checks I am reasonably confident that it is helping.


There are other people building similar tools that are useful in different contexts. I got my old team to use DotEveryone’s consequence scanning kit to look at data trusts — for that level of institutional change DotEveryone’s tool was a more useful approach. The team at the UK Digital Catapult have published a very useful paper categorising some of the tools that are available.

Obviously this approach to practical data ethics is only one type of intervention. Accountability — through organisational processes, professional codes, regulation and legislation — is still very much needed. But practical data ethics can create some practical change now. If we can get people to be more open with their tales it should also inform policymakers on where the biggest problems are and what regulation and legislation is needed.


Building a better future for people with data will take quite a while. There are some obvious problems, some of which have obvious answers, but there are also less obvious problems and no easy answers for all of them. We all have to keep monitoring and intervening at multiple points in the system.

We need to stay optimistic and believe that it is possible. I believe being optimistic is a political act that makes it more possible that we will build a world where data works for everyone.

Anyway, I have rambled on too long. It is time for less talking from stage and more talking with each other. Grab me if you want to chat or email me on peterkwells@gmail.com if you do not get a chance.

Fitbit, mass shootings and terrible ideas

My social media timeline is full of a Gizmodo story about a plan to use personal data, including data from Fitbits, to stop mass shootings in the USA. It is a terrible idea but it is interesting to think about some of the ways in which it is terrible, where it came from, and what’s next.

What is it?

Following the story back to the original Washington Post article the idea seems to be that a $40m-$60m research project would encourage individuals to consent to the use of personal data by a new research organisation called HARPA. The personal data would come from a range of sources, including Apple Watches, Fitbits, Amazon Echo and Google Home. At HARPA a team would analyse the data to come up with a model that would “identify risk factors when it comes to mental health that could indicate violent behavior”. The story says that HARPA will need “real-time data analytics” to stop the mass shootings.

Why is it terrible?

Here are just a few of the reasons why it is a terrible idea:

  • the project assumes that targeting the behaviour of individuals is the way to reduce mass shootings, rather than other interventions like reducing the availability of guns and bullets
  • the project is based on a false idea that the behaviour of individuals who commit mass shootings in the US is primarily linked to mental health. As a recent US National Council for Behavioral Health report shows, the real reasons are far more complex
  • the project will not generate a good model. As Emma Fridel is quoted as saying in the Gizmodo article, “literally any risk factor identified for mass shooters will result in millions of false positives” (see the sketch after this list). Improving the model will require the collection of ever more data about ever more people (from x% accurate, to x+1% accurate, to x+1.1% accurate, etcetera, while people’s behaviour continues to change). Even then it will inevitably face what Julia Powles and Helen Nissenbaum call the seductive diversion of solving bias
  • the consent model is naive. Individual consent is a model that is already being challenged on the grounds of both whether individuals can ever make truly informed decisions given the growing number of use cases where data is used, and how the decisions of individuals impact on the rights of groups of people. For example, data sources like Amazon Echo and Google Home do not only collect data on the single individual who controls the account for the device but also on every individual who goes into the physical place where these devices are collecting data
  • to deliver “real-time data analytics” will require data about the behaviour of individuals to be collected on a massive scale. Will every individual with suspected mental health issues have data about them captured and analysed? How do we identify that group of people? Perhaps just capture the data for every person in the USA?
  • even with the best will and capability in the world this massive collection, sharing and use of data will create a whole host of risks and unintended consequences whether it happens in a liberal democracy or an authoritarian state
  • even if an organisation could make this project work in a safe way then I would not support such mass surveillance based on my own moral values and fears of how people and societies will react to feeling like they are constantly being watched, it leads us to the data wasteland or worse
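
To make the false positive point concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical assumption chosen for illustration, not a figure from the article:

```python
# Base-rate sketch: even an implausibly accurate screening model
# produces millions of false positives when the behaviour being
# predicted is extremely rare. All numbers are hypothetical.

population = 250_000_000   # roughly the number of US adults
actual_cases = 100         # a generous guess at true future shooters

sensitivity = 0.99         # fraction of true cases the model flags
specificity = 0.99         # fraction of non-cases correctly cleared

true_flags = sensitivity * actual_cases
false_flags = (1 - specificity) * (population - actual_cases)
precision = true_flags / (true_flags + false_flags)

print(f"People wrongly flagged: {false_flags:,.0f}")               # ~2.5 million
print(f"Chance a flagged person is a real case: {precision:.4%}")  # ~0.004%
```

Even granting the model 99% sensitivity and 99% specificity, far beyond anything plausible, roughly 2.5 million innocent people get flagged and a flag is almost never right. That is the arithmetic behind Fridel’s quote.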

Why is the idea in the news?

The story is based on an idea pushed by an organisation called the Suzanne Wright Foundation.

This organisation was founded by Bob Wright after his wife, Suzanne, died from pancreatic cancer. One of the main goals of the organisation is to create HARPA, which would be based on the defense research and innovation agency, DARPA, but instead focus on public health issues.

It is clear Bob Wright and the foundation are well-connected in Washington and savvy enough to connect research proposals to political topics, like the mass shootings that are sadly so prevalent in the USA. There are tales that Bob Wright and the USA President, Donald Trump, know each other personally.

Unfortunately Donald Trump has the dangerous mix of embracing conflicts of interest, latching onto ideas for political gain, and wielding a lot of power.

What’s next?

That this particular HARPA proposal is not published openly, either by the USA government or by the Suzanne Wright Foundation, reflects badly on both of them. It makes it hard to scrutinise. It is hard to tell if these organisations really think that this idea is useful, or if they are simply using it for short-term political gains, to head off the risk of measures to reduce access to guns and bullets, or simply to create momentum for the creation of HARPA. But, it is clear that some people are concerned enough about this proposal to leak it to the press.

That should worry both the people and the organisations who might be harmed by such a terrible idea. This type of mass collection of data might seem fanciful in many countries but the USA is already seeing Amazon’s Ring service encouraging people to share data from security cameras looking out from their homes with organisations like the police.

Neither Amazon Ring’s data sharing nor the Suzanne Wright Foundation’s research plan for mass shootings is likely to be effective in reducing crime, but both will be effective at wasting money and risking unintended consequences and harm for many people. This is a shame as a government agency with both policy and delivery capability that was focussed on working out how to improve public health using modern technology, techniques and data could actually be useful.

If we want to enjoy the benefits of modern technology then the real challenge is how we stop such terrible ideas much earlier, well before they become horrible, horrible reality.

Let’s make GOV.UK Pay support cash

17% of the UK’s population — about 8 million adults — would struggle in a cashless society. To meet the needs of everyone it is essential both that public services give people the opportunity to pay in cash and that government help private and third sector services to take cash payments. Government can play a role in helping make this happen by broadening the scope of its payments platform and team, GOV.UK Pay, to support cash.

The need for access to cash

Cash use has declined in recent years. It has become ever easier for most of us to buy things using other methods — for example credit and debit cards, direct debits, or through online payment services like Paypal or Apple Pay.

In 2017 direct debits overtook cash as a form of payment. These other payment methods are more convenient both for the people making payments and for people taking them — there is no pesky cash to count and send to the bank at the end of the day. Some call for a rapid transition to a “cashless society” where cash would not be used. Left unchecked it seems likely that it will become ever harder to use cash as shops, buses, taxis, pubs and even public services favour these new payment methods as it saves them money.

image from the Ceeney Review

In 2018 an independent review of access to cash, the “Ceeney Review”, was set up in the UK. It published its final report in March 2019.

The review said that 17% of the UK’s population — about 8 million adults — would struggle in a cashless society. The reasons are complex. The report talks about multiple reasons including lack of access to the internet (particularly in rural areas), people without bank accounts, physical and mental health, financial difficulties, or fear that the computers that run the other payment methods will break.

The review found that 51% of consumers felt it would be a good idea to change the law so that all shops and services had to accept cash.

The factor with the strongest correlation to use of cash was not old age, but poverty. While a cashless society might be convenient for many it would be a struggle for some of the most impoverished people in our society.

image from the Ceeney Review

Meanwhile the public also had a range of concerns about a cashless society: the needs of those who have to use cash, but also a loss of privacy and the loss of the ability to choose how to pay.

That does not mean that a cashless society is necessarily the wrong vision for the future. It means that any transition needs to happen over a period of time, that governments need to provide support for those impacted, and that in the intervening period we need to preserve access to cash.

The Ceeney Review’s final report said that:

we recommend that essential government services and monopoly and utility services should be required, through their regulators, to ensure that consumers wishing to pay by cash can do so, either directly or through a partner

In response to this the UK government said that it would:

safeguard the future of cash and ensure its availability for years to come

That sounds sensible.

GOV.UK Pay

The Government Digital Service (GDS), was set up back in 2011 to help implement the then UK government’s digital by default strategy. In July 2015 GDS announced that it would make payments [to government] more convenient and effective. This led to the announcement later in the same year of GOV.UK Pay — a free and secure online payment service for government and other public sector organisations.

This service is now part of the Government Transformation Strategy and a component in what people call government-as-a-platform. It is used by a growing number of public services in central and local government.

GOV.UK Pay is a better experience for many citizens, and for the people building public services that need to take payments, but it does not handle cash. It continues the same trend as we see in the private sector. Making it easier to handle online payments while neglecting the needs of people who need to pay in cash.
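
For a sense of how the platform works today, creating a payment is roughly one REST call followed by redirecting the citizen to a hosted card payment page. Here is a minimal sketch of that flow; it follows the shape of the public GOV.UK Pay API, but the key, amount, reference and URLs are all made up:

```python
# Hypothetical sketch of the GOV.UK Pay create-payment call.
# All values are illustrative; see the service's public API docs.
import requests

API_KEY = "api_test_123"  # made-up test key

response = requests.post(
    "https://publicapi.payments.service.gov.uk/v1/payments",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "amount": 14500,  # in pence, e.g. a £145.00 council tax bill
        "reference": "COUNCIL-TAX-12345",
        "description": "Council tax payment",
        "return_url": "https://service.example.gov.uk/done",
    },
)
payment = response.json()

# The citizen is then redirected to a hosted card payment page.
# No branch of this flow ends with coins handed over a counter.
print(payment["_links"]["next_url"]["href"])
```

Every step of that flow assumes a card and a browser. There is no equivalent path that ends at a till or a PayPoint counter, which is the gap this post argues GOV.UK Pay should help close.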

It provides no direct benefit for the millions of people who can’t (or won’t) use either online payments or online services. As well as the Ceeney Review’s finding of 8 million adults who would struggle in a cashless society, the Financial Conduct Authority reports that there are 1.3 million UK adults without a bank account. Unless a friend helps they have no way to pay money to a public service that cannot take cash.

In a response to a Freedom of Information request GDS has said that in 2015 they undertook user research on a prototype that integrated GOV.UK Pay with a cash payment service

“but not with users who typically rely on cash payments”

That was a bit silly. You need to pick an appropriate audience to test with.

Other government services (DWP and Insolvency Service) also did usability research with the actual target audience. These services felt it was a valuable payment option.

Unfortunately cash and the needs of the people who use it were not prioritised. Back in 2015 the focus was on online payments and the people who use them.

Government has a strong moral, and often a legal, responsibility to make public services work for everyone. GDS have always said that they want to benefit everyone and have an emphasis on accessibility. The Ceeney Review, and government’s positive response to it, provide good reasons to revisit the strategy for GOV.UK Pay.

Broadening the scope of GOV.UK Pay to support cash

To deliver on its commitment to safeguard the future of cash government will have to make a range of interventions. Some of those will include making sure that the public sector can handle cash payments. I expect that the current GOV.UK Pay team will be able to provide a lot of help in meeting that objective while still delivering on their historic focus of better online payments.

But the benefits will not only be felt by people using public services. By making it easier for people to pay government in cash government can start to change the cash payments system for the better.

Perhaps government’s payment experts will discover that, to get continued good coverage of places to pay in cash:

  • the government will need to make it easier to pay for any public service in the local authority offices that are in town centres across the country
  • they should provide support to make it easier for shops to offer cash payment services like PayPoint
  • they can develop and share good practice for how to handle cash payments
  • there are ways to share good practice across the organisations that process cash payments to help make face-to-face payment services better
  • or the many many other things that will emerge with some good open-minded research into the needs of people who use cash

These things will provide benefits to people paying cash to private and third sector organisations too. That is good. Government’s responsibility goes beyond what we traditionally think of as public services that need payment — things like paying our council tax, buying a fishing licence, paying for a car parking space, or getting a passport.

Governments have a responsibility to the whole of society. Governments should be investing in public goods that benefit everyone. Access to cash will make it easier for more people to buy food, travel around and enjoy their lives. Government should make it easier for people to use new online payment methods, but it also needs to preserve access to cash for the people who need it.

Broadening the scope of GOV.UK Pay to support cash will help government do what it said it would do when it responded to the Ceeney review, and make things a little bit better for everyone.

AI and the Committee on Standards in Public Life

The UK has a Committee on Standards in Public Life (CSPL). It advises the Prime Minister on ethical standards across the whole of public life in England (yes, only England — ethics must be a devolved matter).

A picture of some people by L S Lowry (via Flickr)

The committee is currently investigating Artificial Intelligence and whether the existing frameworks and regulations are sufficient to ensure that high standards of conduct are upheld as technologically assisted decision-making is adopted more widely across the public sector.

Big topic. After all AI is a range of techniques that uses people, mathematics, software and data to make guesses at the answer to things. It can help, and hinder, with lots of the huge array of things that the public sector does.

I represented the Open Data Institute (ODI) on a roundtable for this investigation. A couple of people have asked me what the roundtable was like and what I said. Here’s a quick blogpost.

Preparing for a roundtable

The ODI team get invited to lots of roundtables and events. We decide which ones to do and who does them based on a range of criteria. The invitation for this one went to our CEO, Jeni Tennison, who passed it to me to do. My goal was to help the committee, learn from what other attendees were saying, and test some of our ideas in front of this audience.

We did our usual preparation by sharing the questions around the team in the office and telling our network that we were going along to hear what advice they gave us. That technique provides a lot of input. It also helps me represent the ODI and the ODI’s network, rather than simply myself and my own views.

I summarised it down to a few key points to try and make, and then tried not to over-prepare. Over-preparation is the worst sin: it makes me sound even duller than normal.

Rounding a table

The roundtable itself was at Imperial College in London.

The setup was more informal and the committee was more friendly and asked more insightful questions than most similar things I’ve done. That was good. My background is technical and private sector — I previously spent 20 years working with telecoms operators building products, systems and networks — so I always worry that I’ll misunderstand or miscommunicate particular words or phrases. That would damage both me and the organisation I represent.

Anyway, I managed to get over versions of some of the things that we’d prepared and/or that we regularly discuss in the office and that were relevant to how the roundtable took shape:

  • that there is little transparency over use of AI in the public sector and of how the UK government’s Data Ethics Framework is being used. I know that there is good and bad work being done, but mostly because I know some of the people doing it. How are the general public meant to know?
  • that we need to focus more on the people who design, build and buy AI services. Exploring what responsibility and accountability they should have and how we give them the space, time and money to design those services so that they support democracy, openness, transparency and accountability as well as being efficient and easy to use
  • that the current focus on ethical principles and AI principles does not seem to be having a useful effect. That instead we need to couple those top-down interventions with more bottom-up practical tools (like the framework or ODI’s Data Ethics Canvas) and more research into how the people designing, building or buying AI systems make decisions and what will influence them to comply with the law and think about the ethical implications of their actions
  • that control, distribution of benefits and harms, rights and responsibilities about AI models would be a useful area to explore
  • that eliminating bias is the wrong goal. Bias exists in our society, some of that bias becomes encoded in data and technology. AI relies on the past to predict the future, but the past might not reflect the present let alone the world we want. We should build systems that take us towards the future we want, and that can adapt as things change
  • that in a world which is increasingly online-first, and where we risk the state disappearing behind a smartphone screen and automated decisions, the principles of public life should be updated to put the need for humanity front and centre

I also learnt a lot from other attendees, with some interesting things for me and the team back in the office to chew over.

After the roundtable

A couple of weeks after the roundtable I was sent the transcript to review. The committee will publish that transcript openly — which is good and healthy. Attendees get to see the transcript first so they can suggest corrections to simple grammatical errors or transcription problems. That’s why I’m not commenting on or sharing what other people said.

It is important to review the transcript. There are sometimes errors. For example, in this transcript I was recorded as saying that my boss, Jeni, was “whiter than me” rather than “wiser than me”. I have no idea how I’d measure the former but I certainly know that she’s the latter. Some of the words and thoughts in this blogpost come from Jeni and others in the team like Olivier, Miranda, Renate, Jack &c &c &c.

Reading the transcript also helps me understand the difference between the clarity of my speech and the clarity of my writing. I’ve left most of my spoken errors in place. Just like the state, we can’t only communicate in words and pictures sent through a computer. Most of us need to get better at speaking with humans.

The data wasteland is polluted

Part of the ODI’s theory of change

At the Open Data Institute we use a theory of change. It is one of the tools that we use internally to help us make decisions and externally to explain to people what we do and how we do it.

Our theory of change describes the farmland, oilfield and wasteland futures and helps us try to steer between the extremes of the oilfield and wasteland futures to get to the farmland.

The wasteland future emerges when there are unaddressed fears arising from legitimate concerns — such as who has access to data and how it might be used.

We frequently talk through the theory of change to explain what we do and how we do it. We try to provide pauses in the conversation to get other people to give their opinions. It helps people to think and learn for themselves. It helps us learn too. We hear what other people think happens in the wasteland future. How they think people and organisations will react to their fears being unaddressed.

Most of the people we talk with think that the wasteland future has a lack of data. They realise that with a lack of trust many people and organisations will reduce how much data they share. They imagine people refusing to use services because they don’t trust them, and organisations similarly refusing to share data because they fear being punished. They think the data stops flowing.

A smaller group of people realise the wasteland is more complex and weird. People’s behaviour will change in many different ways. Humans are fun like that.

Some people might post inaccurate data. Perhaps you will post fake claims of jogging exploits to social media if it is the only way to get a fair life insurance deal. Other people will hide in the data. Maybe we will give our children common names so they are hard to identify or so they appear to be from an ethnic group that is not discriminated against.

Similarly, businesses will feel the need to create fake data. Organisations that fear their supply chain data is being captured and used unfairly by their competitors might start to create ever more complex corporate structures to hide the data. Obviously, hiding the data in this way will also make it harder for regulators and civil society to know if a business is acting fairly.

I’m sure that even if you hadn’t thought of them at first you can now think of many more things that happen in the wasteland future.

You can see some of this future now. There are already people and organisations hiding in the flows of data. Some of those people need and deserve help to hide because they have a genuine fear of harm, perhaps due to their political beliefs, ethnicity or sexuality. Equally, there are others trying to evade fair scrutiny, for example tax dodgers and other criminals, and organisations providing services to help them do so. But if we increasingly fear harm then more people will want and need these services and, inevitably, they will become ever cheaper and used by more of us.

As this behaviour becomes widespread we will see data that is massively biased and misleading. People and organisations that use data-enabled services to tackle global challenges such as global warming, to price a life insurance premium in a way that doesn’t unfairly discriminate, or to decide whether or not to take a job will struggle. That would not be good for any of us.

Navigating a route between the wasteland future and a different future, where we get more economic and social value from data, will not be easy. There will always be some people who need to pollute and hide in data to protect themselves from harm; we need to allow that to happen. Understanding and addressing people’s fears is not only a technical challenge, it is also a social and political one. To retain trust we need businesses and governments to adapt to people’s ever-changing expectations in a range of cultural contexts.

An increasing fear of how data is used will not simply stop people using services or sharing data; it will change people’s behaviour in a range of ways. If that happens we can expect data to be increasingly poor quality, biased and misleading. And that pollution will make data less useful for helping people, communities and organisations make decisions that hold the potential to improve all of our lives. Some of that potential is false — the use of data it requires is too scary and people do not want or need it — but that is why it is important to understand, and address where we can, people’s concerns if societies are to navigate towards the farmland.

You can read more about the ODI’s strategy and theory of change on our site.

Putting Blackpool FC on the blockchain

Blackpool Football Club, its fans and community have been treated horrendously over the last few years. The owners, the Oyston family, have run the club into the ground, abusing and taking legal action against fans in the process, while both the football authorities and Blackpool’s town councillors sat by and watched.

Last year there was light at the end of the tunnel when Justice Marcus Smith ruled that the Oystons had “illegitimately stripped” the club in a manner which involved “fundamental breaches” of their duties as directors. The ruling came after a legal case brought by Valeri Belokon, a major investor in the club. To compensate Belokon the Oystons need to raise money, so have put the club up for sale.

Seven patient months later, a company called vSport is reported to have bid for Blackpool FC, and is quoted as saying that it expects to complete the purchase by the end of the month.

vSport say that they are the “world’s first non-profit, open-source and blockchain empowered platform which is completely dedicated to the Sports Industry”.

Sounds impressively futuristic but what on earth does that mean? And should fans and journalists be welcoming the news, or undertaking a bit more scrutiny?

What on earth is blockchain?

Blockchain is a new technology that is generating a lot of interest. Many people believe that it will change the world.

I was part of a team that looked at blockchain two years ago. Our first assessment was that it was useful, but not for everything. We then wrote a longer report, which looked at the promises and risks. Our simplest definition was:

Blockchains provide a way to store information so that many people can see it, keep a copy of it, and add to it. Once added, it is very difficult to remove information. This can reinforce trust in a blockchain’s content.
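
To make the “very difficult to remove” part of that definition concrete, here is a minimal sketch in Python. It is a toy of my own, not any real blockchain implementation: each block commits to the hash of the block before it, so changing old data breaks every later link, and anyone holding a copy of the chain can spot the tampering.

    import hashlib
    import json

    def block_hash(data, prev_hash):
        # Hash the block's contents together with the previous block's hash
        payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def add_block(chain, data):
        # Each new block commits to the hash of the block before it
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        chain.append({"data": data, "prev": prev_hash,
                      "hash": block_hash(data, prev_hash)})

    def verify(chain):
        # Recompute every hash; any edit to earlier data breaks the links
        prev_hash = "0" * 64
        for block in chain:
            if block["prev"] != prev_hash:
                return False
            if block["hash"] != block_hash(block["data"], block["prev"]):
                return False
            prev_hash = block["hash"]
        return True

    chain = []
    add_block(chain, "Alice pays Bob 5")
    add_block(chain, "Bob pays Carol 2")
    print(verify(chain))                     # True
    chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
    print(verify(chain))                     # False: the change is visible

Real blockchains add consensus rules, networking and incentives on top, but this hash-linking is the core of why it is hard to quietly remove or change information, and why that can reinforce trust in the content.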

This type of data storage can support lots of new business and organisational models. Bitcoin is the most famous new model associated with blockchain; indeed, blockchain was invented as part of the development of Bitcoin.

Bitcoin was originally intended as a way for people to send payments to each other without an intermediary. Unfortunately, Bitcoin is currently most famous for being notoriously hard to spend and use, generating a few (Bitcoin) billionaires, losing some people a lot of money, and using as much energy as Ireland or Denmark. Bitcoin doesn’t seem a great thing.

from “Blockchain for 2018 and Beyond: A (growing) list of blockchain use cases”

There are many, many other proposed uses of blockchain. I won’t list them all. They seem to exist in every sector.

I still haven’t seen one working at scale. And I do spend time looking, because while I still think there may be some good in blockchain (making it really hard to change data, and making it easier for more people to see when it is changed, must be useful for something! Perhaps our project on national archives will help find it?) there is also a lot of hype.

It is good to see more people gradually seeing through the hype. We need to get past it to see where, or if, blockchain can be used for positive purposes. Where there is hype there is danger. Not just of lost money — some people will always lose some money while experimenting with new technology — but of more unintended harmful impacts, such as Bitcoin’s impact on the environment, or a direct and immediate impact on individuals.

As societies we need to experiment with new technology to see where it could be useful, but we need to be wary of harmful impacts and who could be affected. In the age of a global internet and world wide web, harm can happen at great scale and speed.

What on earth is vSport?

vSport say that they are the “world’s first non-profit, open-source and blockchain empowered platform which is completely dedicated to the Sports Industry”. It was founded by Bai Qiang and the Dutch ex-footballer Wesley Sneijder. vSport is based in China.

The two previously founded a company called Sport8. The English-language version of Sport8’s website has not been updated since 2015 although the Chinese-language version seems to have been more recently updated.


There are reports that Sport8 signed a deal with Borussia Dortmund in 2016, although the Borussia Dortmund website has no mention of it. Bai Qiang previously produced a 3D music video in 2012 and worked as a vice-president at the large speech recognition firm iFlytek.

Play vSport’s roulette! Could this be offering the chance to win rare Brett Ormerod memorabilia soon?

vSport, like many other companies, has raised some initial funds from investors, put out some blogposts and a whitepaper, and is trying to show its potential so that it can bring in more funds. When I signed up on the website I got the chance to play a boring roulette game. I don’t think that game will bring in many funds or players in Blackpool.

The whitepaper has lots of big words and claims but little technical information or detail on how the capabilities will be built and adopted across the many claimed scenarios. I couldn’t find any source code to review, either to check if the roulette game was fair or to help form an opinion on whether the larger technical claims were credible.

A list of applications from the vSport website

The list of applications on the website and in the whitepaper is long and varied. In the time it would take to build a business across the sports sector, most of these ideas would be out of date. If I were advising them then, as for most other blockchain companies, I would recommend a little more focus and a lot less hype; that would make long-term success more likely.

from the vSport whitepaper

Some of the applications are a little strange and will have harmful impacts. For example, a section on data sharing talks about personal data like people’s names, sports activities and achievements being put into the blockchain. It says that this could be used in marketing and in making decisions about teenagers at school. Don’t put personal data in a blockchain. Sometimes people need protecting and data about them needs changing or removing. The very same factors that make it hard to change data in a blockchain, also make it hard to protect the people that the data is about.

Most of the applications don’t require a blockchain; they could be built with existing, less experimental technologies. And most of them are about creating financial value for vSport, football clubs and celebrities. But one of the promises of blockchain is that it can widen the number of people involved in decision-making.

from the vSport whitepaper

If vSport follow that model (and the whitepaper hints at this) then football fans could influence its direction and get it to build applications that they want. Perhaps Blackpool fans could vote to finally build a training ground?

Unfortunately the limited information about the foundation shows a simple organisation chart with no detail of who is in which box, how decisions are made, and how they can be appealed.

vSport looks like the very early stage of a classic top-down business. Lots of promises, few products and in need of customers to both develop the products and prove that it can deliver what it promises. I worry that it wants to buy Blackpool football club either for marketing or to test its new technology on the club and fans.

vSport needs more scrutiny

Blackpool fans have had a terrible time. Many, like me, haven’t been to see their football club in years. Any escape route from the Oystons might seem a good one but vSport doesn’t seem the right next destination.

Being either a marketing vehicle for vSport or a testing ground for its technology doesn’t seem like something Blackpool FC, its fans, or its community need. Fans, journalists and local councillors (who’ve hopefully learnt a lesson from their failure to get to grips with the Oystons) need to ask more and better questions of any potential new investor. Any investor that fails to talk with fans before bidding should immediately raise alarm bells.

Many fans were happy to start a new fan-owned club if the Oystons failed to leave. We can ask more questions of vSport, or wait and see if Belokon can use his court case to get ownership from the Oystons, but we should also continue to be prepared to start a new club rather than accepting the first rescue ship that comes along.

The bumpy road to economic and social value

I moved to Newcastle in the North East of England last year. It’s a great place, but one of the things that first struck me about the town was the roads. There’s a motorway right through the town centre. It makes me think of tech and data, and the need to broaden the debate.

Roads for prosperity

Aerial view of the construction of the Central Motorway and Swan House roundabout, estimated to be in 1971. Image via The Evening Chronicle

When we were looking for a place to live we stopped in a few hotels near the town centre. They were on both sides of the motorway.

One side is full of shops, restaurants, cinemas, theatres and bars. The other is full of newly built university accommodation. There are some rather strange skywalks connecting the two (a bit scary when it’s late and you’re tipsy…).

The motorway was opened in 1973 and was controversial at the time. Unsurprising when, as Professor Mark Tewdwr-Jones of Newcastle University says, “school playing fields and houses were…demolished”.

Glasgow motorways, courtesy of Google Maps and their various data suppliers

It was built following the Traffic in Towns report by Professor Sir Colin Douglas Buchanan. The report focussed on the growth in road traffic by cars, and the potential economic benefits that could be gained by supporting it.

Traffic in Towns was later followed by a 1989 government white paper, Roads for Prosperity, that followed the same tracks. Both reports gave more emphasis to increasing road use and cars than to reducing environmental impact or to other transport options, such as mass public transit or walking. They were design standards for urban transport. Their priority was economic growth.

Urban planners in other UK cities, like Birmingham and Glasgow, followed the same reports and the standards they set. Existing communities were again displaced or affected by roads that were built. A similar story happened in countries and cities across the world. Sometimes earlier, sometimes later.

New York City in the 1920s, Beijing in the 2000s

From the 1920s Robert Moses rebuilt New York City to favour car users as part of larger urban transformation plans. He constructed highways, bridges and parkways that cut through the city and surrounding regions to get cars to where they wanted to be. Debate over the impact of these decisions on communities, and whether Robert Moses’ politics and racism played a part in his decisions and the type of road uses he favoured, continues to this day.

Robert Moses had set the standard, and other people followed his lead. Urban planners across the USA built roads that favoured car users and impacted existing communities living in or near their path.

Beijing smog via a post by Marco Rinaldi

Many decades after Robert Moses, and as part of its preparation for the 2008 Olympics, Beijing refurbished 200 miles of roads and built two additional ring roads.

I was there in 2003 and remember standing in a hutong neighbourhood due for demolition. A resident showed me the straight lines on the map indicating where new roads were being built, and the lanes, streets and houses underneath that were either being demolished or left with greater air and noise pollution.

Someone had decided that the potential benefits of the new roads were greater than the current needs of the people who lived in Beijing. This wasn’t just about the Olympics. As part of the transition from the communist system under Mao Zedong to the market socialist / state capitalist society of current China, similar infrastructure changes were happening elsewhere across the country.

People push back

In each of these cases central authorities had decided that the potential economic gains outweighed the negative impact on people and communities without involving them in the process. People protested at the time but over the years the push back became more effective. It ended up changing the way we plan.

Anyone who followed the environmental protests in the UK in the 1990s will remember Swampy. (image copyright Reuters, I think).

In the UK there were growing protests against road developments during the 1980s and 1990s, with calls for integrated transport solutions that considered different types of users (car, bus, rail, freight, bicycle and pedestrian) and reduced the impact on the environment.

Gradually, UK urban and road planning guidelines were changed to include the need for public consultation and the consideration of societal impacts like air quality, noise and other environmental issues. We now consider more viewpoints and needs before a decision is made.

In parts of the USA change happened earlier. Jane Jacobs was one of the most famous figures amongst the groups in New York City arguing against Robert Moses’ plan to redevelop Greenwich Village in the 1950s and 1960s. She was part of the Joint Committee to Stop the Lower Manhattan Expressway, which fought the expressway, the ‘slum’ clearances it proposed and the decrease in air quality it was forecast to generate. The Committee eventually won. Jane Jacobs formalised her thinking on urban planning in the book The Death and Life of Great American Cities. It argued for a new standard for urban design, one which shifted the emphasis towards the people who lived in the city.

A nail house in Hongkou, picture by Drew Bates. CC-BY-2.0.

In China, the most visible protests against the new roads and urban transformation were ‘nail houses’: stubborn holdouts against the change. This became possible due to the strengthening of private ownership rights in the post-Mao era. In some cases the holdouts are people who don’t believe the public interest in the development outweighs their own interests; in others they are speculative investors looking to profit from the public investment.

The parallels to tech and data

I work in the world of data policy at the Open Data Institute. We’re based in the UK but work globally.

I believe data, and large parts of what we call the technology or digital sector, are becoming infrastructure, just like roads became infrastructure in the past. This means that we need to think strategically and for the long-term. The effects of the decisions that we make today will persist.

A clip from one of the boss’s talks on the challenges of strengthening data infrastructure.

One of the things I’ve been doing over the last few years is reading about the history of technology-driven change. Things like the wireless, telephone, radio and roads. The web and internet have helped us communicate over a larger scale and at much faster speeds than previously, but we are still humans. We can learn from our history and the stages technology goes through as, or if…, it gets adopted. Perhaps by learning more historical lessons we can go through those stages faster and make better decisions than before.

An important part of this process is how we moved from infrastructure decisions made solely by technocrats, whether in companies or in governments, to decisions made with society and through our democratic processes. Unfortunately, technology and data are currently stuck in the world of the technocrats, with very little public involvement. We have more progress to make, otherwise the protests and bumps on the roads will get bigger.

We need to broaden the conversation, and open things up

We need to have broader conversations about technology.

This will be particularly important with data. Most data is about people, and multiple people at that. Our DNA reveals information about our parents, family and even our distant relatives. Utility bills reveal who we live with. Health records contain information about medical professionals as well as ourselves. Data is about us, our families, communities and society.

When we learn how to design services for multiple people we will have to think about their different interests and rights, and how they might compete with each other.

Yet, most internet services, and much current data regulation, are designed for individuals, particularly those who are currently online. That’s part of why technology can feel uncomfortable for many. It doesn’t match much of our societies. Rather than reflecting the richness and variety of communities and societies around the world tech is bringing in the political beliefs and cultural values of the people who built it.

As the French government showed with the Digital Republic Bill, and UK organisations like DotEveryone and the Carnegie Trust are exploring, engaging the public in decisions about technology is complicated but possible. We need more politicians and large technology companies around the world to embrace this approach.

We need to have broader and more open conversations that allow the public to both take part in and influence the outcomes of the current debates about technology. We need to go beyond technology experts to include a range of other experts and the people, businesses and communities who could be beneficially or negatively impacted by a decision. They will have different opinions, and different societies will choose to give those opinions different weights, but learning from the range of views and how they develop during a debate will help us make better decisions.

As societies learnt when we were building roads, the debate can’t be left to technocrats solely focussed on economic gains; it needs to be opened up to the public so that we can also debate societal values.

You don’t control your Facebook posts, the reasons why are more complex than you might think

An advert from Facebook UK appeared in my feed (embedded video: https://www.facebook.com/FacebookUK/videos/1635229329867267/).

It told me that my “photos and posts” belong to me and that “[Facebook] won’t use them without [my] permission”.

The same advert has appeared in the feeds of friends and work colleagues based in the UK. It seems to be part of a campaign. Perhaps the campaign is related to the European Union’s imminent General Data Protection Regulation and the growing public awareness that there is a debate around data, how it is used, and whether to trust those uses.

There is a similar message in Facebook’s terms and conditions saying:

“You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings”.

Both messages are simplistic, at best. I don’t fully own or control the content I post on Facebook. It doesn’t only belong to or affect me. By over-simplifying its messaging Facebook, like many other organisations, is missing the chance to help explain how its services work and help us all make better decisions when sharing content.

Social media content is more complex than you might think

This will sound counter-intuitive to many. I mean, shouldn’t I have control over my data on Facebook? It’s about me! I created it!!

Don’t be silly. Data ‘ownership’ is not as straightforward as it sounds. Most of my content on Facebook is not only about me. It is about other people too.

These people are not my friends. They are from a film called Peter’s Friends. But it shows some people in a picture they may regret in later life.

My list of friends is a list of relationships with other people. People tag someone in a post saying that they went to a restaurant or pub with them, or share a picture or comment about a group of friends.

Most of us will think about our friends’ feelings when sharing content about them on social media, but we don’t always know what will be important to them. The rules aren’t written down. Many of us will have had the experience of sharing something and then having a friend say “hi, do you mind deleting that post because of X…”.

Sometimes we listen to those objections and sometimes we don’t. Our friends might not be able to delete our Facebook content without our consent, but their views are part of the complex set of things we think about when posting. They can unfriend us in real life as well as on social media.

Adverse impact on other people

Beyond affecting a personal relationship there are many types of adverse impact that a Facebook post might have. Affecting copyright owners is one. Copyright has many, many flaws but it is one of the ways societies help creators benefit from their work.

A picture by a famous artist, Mr and Mrs Clark and Percy. Image used under fair use. Copyright David Hockney.

If I did own all the content I posted on Facebook then presumably I could post a picture created by someone else and start to make money off it by selling things. Money that could have gone to the artist.

I could, but I shouldn’t.

Both Facebook and I recognise that we need to abide by copyright legislation and that governments help enforce it. A copyright holder can complain directly to Facebook, or through the relevant national or international rules. The content is not mine to own, control and use how I wish. If I breach copyright in a way that unfairly impacts creators then fewer nice things get created. That would be bad.

Germany recently passed a new law stating that social media platforms have to take down hate speech within 1–7 days or face large fines.

Going deeper into adverse impacts, it could be that someone on Facebook posts something with the intent of causing harm.

To give just a few examples the content might libel someone, use hate speech, endorse terrorism, or use a sexual image of someone without their consent.

Facebook is a global service, and the legislation and definitions of those things will change from country to country, but in many countries those things would be illegal. A poster would lose control of the content, and perhaps even their liberty, as democratic governments use the powers given to them by people to stop the content from being seen and shared.

Facebook has its own moderation rules and tools that allow Facebook’s moderators to intervene proactively or for people to report content and get it removed. Again, that removal can happen without the poster’s consent. The poster is not in control.

Not all of the adverse impacts that moderation rules try to prevent are illegal and intentional. Others are unethical, or against social norms for a particular community or society. Moderation exists because the adverse impact from my posts might damage the health and goals of a community.

Both sassy socialist memes, with 1 million followers, and sassy libertarian memes, with 200 followers, are real Facebook groups.

Moderation is not only done by Facebook and governments. Many community groups within Facebook have their own moderators and policies. Group moderators can also remove content without a poster’s consent.

Perhaps the moderators of sassy socialist memes or sassy libertarian memes will remove content I post in their groups if my content just ain’t sassy enough. The local Facebook group for the town I live in, like many other local Facebook groups, certainly has a fierce response to excessive advertising or outsiders criticising the town.

Other people can benefit from content

Shifting to a more positive, and less sassy, note: people should also be aware of the other people who can benefit from content they post. As the Financial Times recently noted, “an explosion of [trustworthy data, such as that posted on Facebook] would give us the capability to understand our world in far more detail than ever before”. Facebook already shares some of the data you post so that other people can benefit; I think it should do more.

OpenStreetMap’s data is freely available as open data and used by governments, businesses, communities and individuals all over the world.

For example, Facebook users help maintain data about things like cafes, restaurants and leisure centres. We don’t only need this type of data in Facebook, we need it in many other parts of our lives, so Facebook have been exploring how to share data with the community-maintained OpenStreetMap. That will help everyone using the thousands of services that use OpenStreetMap. The Facebook users are not in control of this flow of data but they, and many other people, will benefit.

In other sectors, rather than downloading data, I can give a third party that I trust the right to access it

In other contexts, Facebook users might want to share content that they post with a third party that they trust.

The EU’s General Data Protection Regulation strengthens this desire into a right, although it is a right with limitations.

I might decide to do this so that it benefits my local community (for example, helping local government understand feelings on a particular topic), to help deliver another service I want to receive (for example, asking my friends if they want to join me on a new photo-sharing service), or to help me learn things about my own behaviour and habits.

Unfortunately, despite Facebook telling me that I can control how data is shared, I can’t easily share that data with third parties.

Facebook allows people to download data they post, but it is not in a standard format and I can’t simply give another organisation that I trust the right to access it to the same extent that, say, the UK banking sector is starting to do.

The UK’s banking sector is expecting to see increased competition and new services as a result of making it easier for people to share data. Perhaps social media firms and the people who use their services would benefit from a similar collaborative effort to determine how to safely share data, which mostly includes other people, without creating adverse impacts.
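
To make that idea concrete, here is a minimal sketch of what scoped, revocable delegated access could look like. Everything in it is hypothetical (the PhotoPlatform class, the "posts:read" scope, the in-memory token store); real schemes such as OAuth or the Open Banking APIs are far richer. But the shape is similar: instead of handing over a downloaded file, the person issues a grant that a trusted third party presents when it reads their data.

    from dataclasses import dataclass
    import secrets
    import time

    @dataclass
    class Grant:
        scope: str          # what the token may read, e.g. "posts:read"
        expires_at: float   # grants are time-limited
        revoked: bool = False

    class PhotoPlatform:
        # Hypothetical platform holding a person's posts
        def __init__(self):
            self.posts = ["beach photo", "pub quiz team"]
            self.grants = {}

        def issue_grant(self, scope, ttl_seconds=3600):
            # The person consents; the platform mints a scoped token
            token = secrets.token_urlsafe(16)
            self.grants[token] = Grant(scope, time.time() + ttl_seconds)
            return token

        def revoke(self, token):
            # The person changes their mind; access stops immediately
            self.grants[token].revoked = True

        def read_posts(self, token):
            # A third party presents the token to read, within its scope
            grant = self.grants.get(token)
            if (grant is None or grant.revoked
                    or time.time() > grant.expires_at
                    or grant.scope != "posts:read"):
                raise PermissionError("no valid consent for this access")
            return list(self.posts)

    platform = PhotoPlatform()
    token = platform.issue_grant("posts:read")
    print(platform.read_posts(token))  # a trusted third party reads the data
    platform.revoke(token)             # revoking ends the data flow

The design point is that access becomes a relationship the person can see, scope and end, rather than a copy of the data they have lost track of.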

It is good that Facebook is starting to share data to create benefits outside of their own service. They should do more of it by sharing carefully anonymised data openly, more sensitive data in secure conditions with researchers working for the public good, and by giving people ways to safely share data that they post with third parties that they trust.

Explaining this stuff is hard, but it is necessary

This stuff is complex and can be hard to explain in an accessible way, but it is necessary to understand the complexity before trying to make it simple.

Like many other types of content and data, Facebook posts and photos can be about more than one person. The content can create adverse impacts for those other people but it can also create benefits too. Because of this, users are not fully in control of the content they post, and they certainly don’t own it in the same way that we might own a house or car. Instead civil society, governments and service providers need to work together to design ways to help give people more control and to maximise the social and economic benefits, while minimising the adverse impacts.

Over-simplifying this necessary complexity risks us slipping into a world where individuals instead fully control the data that they create. That is the world that Facebook’s ad is describing to many people. How silly. That world would reduce the benefits and increase the risk of harms.

We don’t need more lengthy and unreadable terms and conditions but as the debate over data grows it would be helpful if major service providers like Facebook took greater responsibility in helping to create a more informed debate and helping people to make better decisions.
