One of the societal problems that AI decision-making was meant to solve was bias. After all, aren’t computers less likely to have inherent views on, for example, race, gender, and sexuality?
Well, that was true back in the days when, as a general rule, computers could only do what we told them. The rollout of machine learning, driven by the explosion of Big Data and the arrival of affordable computers with enough processing power to handle it, has changed all that.
In the old days, the term “garbage in, garbage out” concisely summed up the importance of high-quality data. When you give computers the wrong information to work with, the results they come up with are unlikely to be helpful.
Back then, this was mostly a problem for computer programmers and analysts. Today, when computers are routinely making decisions about whether we are invited to job interviews, eligible for a mortgage, or a candidate for surveillance by law enforcement and security services, it’s a problem for everybody.
In possibly the highest-profile example of getting this wrong so far, a study found that an AI algorithm used by parole authorities in the US to predict the likelihood of criminals reoffending was biased against black people.
Exactly how this came about is unknown – the workings of the proprietary algorithms have not been made available for independent auditing. But the ProPublica study found that the system overestimated the likelihood of black offenders going on to commit further crimes after completing their sentence, while underestimating the likelihood of white offenders doing the same.
Biased AI systems are likely to become an increasingly widespread problem as artificial intelligence moves out of the data science labs and into the real world. The “democratisation of AI” undoubtedly has the potential to do a lot of good, by putting intelligent, self-learning software in the hands of us all.
But there’s also a very real danger that, without proper training on data evaluation and spotting the potential for bias in data, vulnerable groups in society could be hurt or have their rights infringed by biased AI.
It’s possible AI may be the solution to, as well as the cause of, this problem. Researchers at IBM are working on automated bias-detection algorithms, which are trained to mimic the anti-bias processes humans use when making decisions, to mitigate our own inbuilt biases.
This includes evaluating the consistency with which we (or machines) make decisions. If there is a difference between the solutions chosen for two different problems, despite the fundamentals of each situation being similar, then there may be bias for or against some of the non-fundamental variables. In human terms, this could emerge as racism, xenophobia, sexism or ageism.
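The consistency test described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not IBM’s actual algorithm): take a case, flip only a non-fundamental, protected attribute, and flag the model if its decision changes.

```python
# A counterfactual consistency check: a decision should not change when
# only a protected, non-fundamental attribute is altered.

def is_consistent(model, case, protected_field, alternative_value):
    """Return True if the model's decision is unchanged when only the
    protected attribute is swapped for an alternative value."""
    counterfactual = dict(case)
    counterfactual[protected_field] = alternative_value
    return model(case) == model(counterfactual)

# A toy "model" that (wrongly) lets a protected attribute leak into its
# decision alongside a legitimate, fundamental variable.
def biased_model(applicant):
    score = applicant["years_experience"] * 10
    if applicant["gender"] == "male":   # the ingrained bias
        score += 15
    return score >= 50                  # decision: invite to interview?

applicant = {"years_experience": 4, "gender": "male"}
print(is_consistent(biased_model, applicant, "gender", "female"))  # False
```

Here the two cases share identical fundamentals (years of experience), yet the decision flips with gender, so the check flags an inconsistency.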
While this is interesting and vital work, the potential for bias to derail drives for equality and fairness runs deeper, to levels which may not be so easy to fix with algorithms.
I spoke to Dr. Rumman Chowdhury, Accenture’s lead for responsible AI, who outlined that there may be situations where data and algorithms are clean, but societal biases may still throw a spanner in the works.
She said, "With societal bias, you can have perfect data and a perfect model, but we have an imperfect world."
“Think about the use of AI in hiring … you use all of your historical data to train a model on who should be hired and why. Then you parse their resume or look at people’s faces while they’re interviewing.

“But you’re assuming that the only reason people are hired and promoted is pure meritocracy, and we actually know that not to be true.

“So, in this case, there’s nothing wrong with the data, and there’s nothing wrong with the model. What’s wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn’t something you can fix with an algorithm.”
In very simplified terms, an algorithm might pick a white, middle-aged man to fill a vacancy based on the fact that other white, middle-aged men were previously hired for the same position, and subsequently promoted. This would overlook the fact that the reason he was hired, and promoted, was more down to the fact that he is a white, middle-aged man than that he was good at the job.
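The simplified scenario above can be made concrete with a toy model. This is a hypothetical sketch: the candidates’ skill scores are identical across groups, the data accurately records past decisions, and the learning rule is “correct” – yet a model that learns who gets hired from historical outcomes simply reproduces the bias those outcomes contain.

```python
from collections import defaultdict

# Past hiring records: (group, skill score, hired?). Skill is identical
# across groups; only the historical decisions differ.
history = [
    ("group_a", 7, True),  ("group_a", 7, True),  ("group_a", 6, True),
    ("group_b", 7, False), ("group_b", 7, False), ("group_b", 6, False),
]

def train(records):
    """'Learn' each group's historical hire rate - a stand-in for any
    model that lets group membership act as a predictive feature."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, _skill, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group):
    # Recommend an interview if the group's past hire rate is high.
    return model[group] > 0.5

model = train(history)
# Two equally skilled candidates get opposite recommendations.
print(predict(model, "group_a"), predict(model, "group_b"))  # True False
```

Nothing in the data is “wrong” – it faithfully records what happened – which is exactly Chowdhury’s point: the inequality lives in the outcomes the model was trained to imitate.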
Chowdhury lists three specific steps which organisations can take to minimise the risk of perpetuating societal biases.
The first is to look at the algorithms themselves and ensure that nothing about the way they are coded perpetuates bias. This is particularly necessary when AI is constantly making predictions which are out of step with reality (as seems to be the case with the US parole example mentioned above).
Second is to consider ways in which AI itself can help to mitigate the risk of biased data – IBM’s bias-detection algorithms could play a part here.
Thirdly, we must “make sure our house is in order – we can’t expect an AI algorithm that has been trained on data that comes from society to be better than society – unless we’ve explicitly designed it to be.”
This leads on to the discussion of the regulation of AI: who will be responsible for setting the parameters within which AI operates, teaching machines which data is valid to learn from, and identifying where inbuilt societal biases could limit its ability to make decisions that are both valuable and ethical?
Tech leaders including Google, Facebook and Apple jointly formed the Partnership on AI in 2016 to encourage research on the ethics of AI, including issues of bias. Part of the partnership’s work involves informing legislators, but this “top-down” approach may not produce solutions to every problem, and may even stifle innovation.
Chowdhury says, “What we don’t want … is every AI project at a company having to be judged by some governance group, that’s not going to make projects go forward. I call that model ‘police patrol,’ where we have the police going around trying to stop and arrest criminals. That doesn’t create a good culture for ethical behaviour.”
Neither should the burden of regulation and enforcement be put solely on the front line – the data scientists themselves, argues Chowdhury.
“Yes, the data scientist plays a role, the AI researcher plays a role, but at a corporation, there are many moving parts. We put a lot of responsibility on the data scientist … but they shouldn’t shoulder all of it.”
Basically, if society is at a stage where we are ready to democratise AI, by making it available to all, then we need to be ready to democratise the oversight and regulation of AI ethics.
Chowdhury refers to this concept as the Fire Warden model. "Think about how if there's a fire in your building right now, everyone knows what to do – you all meet outside at a pre-arranged location, someone will raise the alarm – you won't put out the fire, but you've been educated on how to respond.
“That’s what I want to see in the governance of AI systems, everybody has a role to play, everyone’s roles are a bit different, but everyone understands how to raise ethical issues.”
Crucially, this will only work if there is faith that someone will put out the fire – no one would bother calling the fire brigade if they believed it lacked the ability or motivation to do its job. Some top-down regulation will undoubtedly be a necessary part of tackling the issue of AI bias.
But building a culture of reporting and accountability throughout an organisation means there will be a far greater chance of spotting and halting bias in data, algorithms or systems before it is perpetuated and becomes harmful.
Bernard Marr is a bestselling author, keynote speaker, and advisor to companies and governments. He has worked with and advised many of the world's best-known organisations. LinkedIn has recently ranked Bernard as one of the top 10 Business Influencers in the world (in fact, No 5 - just behind Bill Gates and Richard Branson). He writes on the topics of intelligent business performance for various publications including Forbes, HuffPost, and LinkedIn Pulse. His blogs and SlideShare presentation have millions of readers.