
Doing the right thing: the ethics of AI


Artificial intelligence is set to have an enormous impact on our lives. It promises business-boosting improvements in efficiency, accuracy, and speed. In 2020, intelligent systems based on reliable data are helping humans make better, more informed decisions in all industries. AI is all around us, and we’re already taking its benefits for granted. Ours is an adaptable species, meaning AI is being adopted quickly – but the ethical considerations and regulation need to keep pace with the technology.

When 18-year-old Brisha Borden was arrested in 2014 for taking an unlocked bike and riding it down the street in Fort Lauderdale, a computer program used in the county jail assessed her as being at high risk of committing a future crime. A year before, 41-year-old Vernon Prater, a criminal with previous convictions, had been picked up for shoplifting US$86.35 worth of tools from a store. The same algorithm had rated Prater as being at low risk.

Two years later, while Borden had not been charged with any new crimes, Prater was back in prison. The one crucial difference between the two cases, according to the non-profit investigative journalism organization ProPublica, which analyzed the algorithm’s statistics: Borden was black, and Prater white.1 Although a person’s features, such as skin color or gender, may not be explicitly disclosed, algorithms make correlations based on other characteristics, such as school or place of residence, as well as on historical data that may already contain human bias. But who’s responsible for inaccurate forecasting or prejudice in a case like this?

Where’s the accountability?

Take a moment to sit back, and questions spring to mind. If decisions are being taken by artificial intelligence (AI) machines – or by humans basing their decisions on the data produced by algorithms – then who is responsible for the outcome? The algorithm, the human programmer, or the decision maker? When more than a third of high-school leavers’ exam results in England were downgraded in 2020, was the AI software developer responsible for their shattered dreams, or the exam regulator, or the members of government who decided to use AI to predict students’ grades?2

If AI systems don’t have legal personalities, how can we define accountability in the future? And when systems need to make decisions that have ethical dimensions, whose moral compass are they using? AI developers, integrators, operators, regulators, and data providers must work together to resolve such questions around accountability and moral responsibility and find a responsible way of working with AI.

Predictive analytics vs. the moral compass

Smart AI technologies are prediction machines, using algorithms to analyze big data and learning over time to achieve better results. AI can analyze vast numbers of documents in a very short time; it can recognize and follow patterns (which may already contain human bias), anticipate behavior, and forecast probability. Predictive analytics are already helping to diagnose health issues and suggest treatments; algorithms can make predictions about people who have committed crimes, like Brisha Borden and Vernon Prater; they can recommend whether or not an applicant should receive credit or an insurance policy. Faced with the probability of an accident, an autonomous car can “decide” how to react to protect lives, probably faster than a human driver could.

But how does an algorithm assess the value of a life, using which principles and whose ethics? AI isn’t human; it mimics human behavior. At present, using AI without proper ethical standards in place to protect against the violation of human dignity and rights carries the risk of projecting human bias, including racial or gender discrimination, into automated decision-making.3 If we use machine learning and algorithms, we must apply our human empathy and human ethics to reach a final decision.4 In the UK exams example above, where the predictive algorithm gave weight to schools’ past performance, in many cases to the detriment of socially disadvantaged students, the results were ultimately scrapped. Instead, schoolteachers’ professional assessment of students’ previous performance and overall achievement was adopted – based on human knowledge, expertise, empathy, and a moral compass that has been developing for millions of years.

Against the background of increasingly intelligent, self-learning systems, as well as the rapid advancement of AI, the need for values, rules, and regulation is becoming urgent.5 At present, the most secure protection against bias, invasion of privacy, and other ethical issues is tight human control over AI systems.

“Ongoing research in AI is helping to develop methods for assessing and improving the fairness of AI systems. However, there are many complex sources of unfairness – some societal and some technical,” says Dirk Wacker, Director Technology and Innovation at G+D Corporate Technology Office. “It is not possible to fully remove bias from decision making, neither for AI nor for humans. The goal has to be to avoid bias-related suffering as much as possible and to agree on the trade-offs between fairness and efficiency.”

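To make “assessing fairness” concrete, here is a minimal sketch in Python of two widely used group-fairness metrics – the demographic parity difference and the disparate impact ratio – applied to the outputs of a hypothetical binary decision system. The data, names, and thresholds are illustrative assumptions, not G+D code or practice; real audits rely on richer toolkits and domain judgment.

```python
# Minimal sketch: two common group-fairness metrics for a binary decision system.
# All data and names here are hypothetical illustrations.

def selection_rate(decisions):
    """Fraction of cases receiving the favorable outcome (e.g., rated 'low risk')."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups (0.0 = parity)."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity).
    A common heuristic (the "four-fifths rule") treats ratios below 0.8 as a red flag."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decisions (1 = favorable outcome) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.750
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

print(f"Demographic parity difference: {demographic_parity_difference(group_a, group_b):.3f}")
print(f"Disparate impact ratio:        {disparate_impact_ratio(group_a, group_b):.3f}")
```

Even a toy example like this hints at the trade-off Wacker describes: closing the gap between the two groups by adjusting decision thresholds usually costs predictive accuracy elsewhere, so fairness cannot simply be “switched on” after the fact.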

Extending the human skill set

Perhaps the most visible manifestation of AI at present is in industry. According to a PricewaterhouseCoopers survey, 37% of workers in the US are already worried about losing their jobs to advancing automation.6

And not without reason. Research from 2017 on future job losses owing to automation shows that the transportation, storage, and manufacturing industries, along with financial services, will be hit hardest by automation-driven job displacement.7 As well as threatening the jobs of industrial workers who perform manual tasks, AI is set to put many office administration workers out of work over the coming decades. AI can draft contracts; it can analyze X-rays and scans, develop software, create graphics, write texts, collect and report news, and edit videos.

It’s true that AI can do all this. But it also opens up opportunities for new, more specialized jobs. Lower-skilled workers could move into more highly skilled roles, as human decision-making will remain crucial. A report by Dun & Bradstreet found that only 8% of 100 surveyed business executives said they were cutting jobs because of AI, while 40% were adding jobs as a result of AI adoption.8 The World Economic Forum’s 2018 “The Future of Jobs Report” even estimated that 133 million new jobs could emerge by 2022 as the division of labor between humans and machines shifts.9


Dr. Jutta Häusler, Head of Corporate Human Resources at G+D, says, “Rather than viewing new technology as competition for jobs, we should see it as complementary – a means of benefiting both processes and people. By eliminating repetitive, routine work, AI allows people to focus on the skilled part of manual work as well as on creative and meaningful tasks. In this way, AI can make work more inspiring, by giving people a clearer view of the purpose of what they do.”

AI will certainly help machines to support us humans, boosting our efficiency, reducing our errors, and extending our skill sets – helping us find greater job fulfillment and creating new jobs for lower-skilled workers. Looked at this way, AI becomes just another means of increasing our individual potential and benefiting society as a whole, just as the wheel helped agricultural workers, the microchip increased connectivity, and the medical scan allowed doctors to make more informed diagnoses.

Ethical concerns about how data is used

AI applications in fields like biometrics must strike a balance between security and convenience

In the field of biometrics, too, AI offers enormous potential. We’ve already seen advances in systems that use biometric indicators for human identity authentication, such as those developed by companies like Veridos for passports and visas, ID cards, and drivers’ licenses. AI-powered presentation attack detection and liveness detection algorithms also enable more sophisticated fraud prevention, which is particularly helpful for products focused on self-service onboarding, automated verification, and remote identification scenarios. Veridos is also embracing the potential of AI in facial recognition to protect personal identities, thereby delivering both security and convenience.
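As a rough illustration of how such a check can be structured, the following Python sketch combines a face-match score with a presentation attack detection (liveness) score in a remote-onboarding decision. The function names, thresholds, and scores are hypothetical assumptions, not Veridos APIs; production systems follow standards such as ISO/IEC 30107 and use far more sophisticated models.

```python
# Simplified sketch of a remote-onboarding decision that combines identity
# matching with presentation attack detection. Names and thresholds are
# illustrative assumptions, not a real product API.

from dataclasses import dataclass

@dataclass
class VerificationResult:
    match_score: float     # similarity between live capture and ID photo, 0..1
    liveness_score: float  # confidence that the capture shows a live person, 0..1

def verify(result: VerificationResult,
           match_threshold: float = 0.90,
           liveness_threshold: float = 0.95) -> str:
    # Reject likely spoofs (printed photos, screen replays, masks) first,
    # so that a strong face match cannot override a failed liveness check.
    if result.liveness_score < liveness_threshold:
        return "rejected: possible presentation attack"
    if result.match_score < match_threshold:
        return "rejected: face does not match document"
    return "accepted"

print(verify(VerificationResult(match_score=0.97, liveness_score=0.99)))  # accepted
print(verify(VerificationResult(match_score=0.97, liveness_score=0.40)))  # presentation-attack reject
```

The ordering of the two checks reflects the security-versus-convenience balance mentioned above: tightening either threshold reduces fraud but increases the number of legitimate users who are turned away.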

“At Veridos we offer identity solutions with biometric technologies that are trustworthy and at the same time respectful of citizens’ privacy rights,” says Dr. Silke Bargstädt-Franke, Head of Product Management at Veridos. “Our privacy standard is the EU standard, and we follow it strictly in our solution designs.”

In Berlin, a young company called brighter AI is using artificial intelligence to anonymize the data of private individuals captured by video and CCTV, without weakening its usefulness. G+D has invested in brighter AI “as a solution component for creating a balance between public security and data protection for its customers,” says Michael Hochholzer at G+D Ventures GmbH in Munich. While biometric technology itself is not designed to invade our privacy, the way data is stored and possibly linked to other information about us – in other words, the application of the data – inevitably raises ethical concerns.
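To give a flavor of what video anonymization involves, here is a deliberately simple Python sketch using OpenCV that blurs detected faces frame by frame. Blurring is a far cruder technique than brighter AI’s approach, which replaces faces with synthetic ones so that the footage keeps its analytic value; the file name and parameters below are illustrative assumptions.

```python
# Simple video anonymization sketch: blur every detected face in each frame.
# A crude stand-in for synthetic-face replacement; parameters are illustrative.

import cv2  # pip install opencv-python

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymize_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, width, height) boxes around faces.
    for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                                       minNeighbors=5):
        face_region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(face_region, (51, 51), 0)
    return frame

capture = cv2.VideoCapture("street_scene.mp4")  # hypothetical CCTV clip
while True:
    ok, frame = capture.read()
    if not ok:
        break
    cv2.imshow("anonymized", anonymize_frame(frame))
    if cv2.waitKey(1) == 27:  # press Esc to stop
        break
capture.release()
cv2.destroyAllWindows()
```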

Setting standards for a safe future

Many companies, including G+D, are now starting to define their own position on the ethics of AI. An ethical framework for future regulation is important if we are to continue the tradition of creating reliable solutions based on security, confidence, and trust, as well as on ethical and legal standards. Microsoft, for example, has said it won’t sell facial recognition technology to US police departments until legislation grounded in human rights is in place to regulate the technology.10 IBM and Amazon have both placed limits on their sales of facial recognition technology. As Microsoft’s president Brad Smith said at a recent Washington Post press event, “We need Congress to act, not just tech companies alone.”

In a world where important decisions – such as whether a person receives credit, an insurance policy, or a new lung – may in the future be taken by machines that mimic human intelligence but lack human empathy, we need to ensure that uncertainty and fear don’t cast a shadow over the potential and opportunities offered by AI and machine learning. As the Future of Humanity Institute at the University of Oxford writes, “It is critical that global governance institutions are put in place to steer these transformations in beneficial directions.”11 In the end, the direction we take is in our own, very human, hands.

Further reading


Is AI good for humans? In a world where decisions are increasingly taken by algorithms, what are the challenges and opportunities of this new technology? Find out in our animated infographic.

  1. "Machine bias," ProPublica, 2016

  2. "A-level results," Guardian, August 2020

  3. "Ethics guidelines for trustworthy AI," European Commission, 2019

  4. "Towards a humanistic approach," UNESCO, 2020

  5. "Vertrauenswürdiger Einsatz von Künstlicher Intelligenz," Fraunhofer IAIS, 2019

  6. "Automation and job loss statistics in 2020 – the robots are coming," Fortunly, 2020

  7. "Will robots really steal our jobs?" PwC economics, 2018

  8. "Artifical intelligence is creating jobs, Dun & Bradstreet survey finds," AI World Conference and Expo, 2019

  9. "The Future of Jobs Report 2018," World Economic Forum, 2018

  10. "Microsoft says it won't sell facial recognition technology to US police departments," CNN Business, 2020

  11. "Standards for AI governance," Future of Humanity Institute, University of Oxford, 2019

Published: 30/09/2020

