Predictive analytics vs. the moral compass
Smart AI technologies are prediction machines: they use algorithms to analyze big data and learn over time to achieve better results. AI can analyze vast numbers of documents in a very short time; it can recognize and follow patterns (which may already contain human bias), anticipate behavior, and forecast probabilities. Predictive analytics is already helping to diagnose health issues and suggest treatments; algorithms can predict the likelihood that people who have committed crimes, such as Brisha Borden and Vernon Prater, will reoffend; and they can recommend whether an applicant should be granted credit or an insurance policy. Faced with the probability of an accident, an autonomous car can probably “decide” how to react to protect lives more quickly than a human driver can.
But how does an algorithm assess the value of a life, using which principles and whose ethics? AI isn’t human; it mimics human behavior. At present, using AI without proper ethical standards in place to protect against violations of human dignity and rights carries the risk of projecting human bias, including racial or gender discrimination, into automated decision making.3 If we use machine learning and algorithms, we must apply our human empathy and human ethics to reach a final decision.4 In the earlier example, where the predictive algorithm weighted schools’ past performance, in many cases to the detriment of socially disadvantaged students, the results were ultimately scrapped. Instead, schoolteachers’ professional assessment of students’ previous performance and overall achievement was adopted, based on human knowledge, expertise, empathy, and a moral compass that has been developing for millions of years.
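To see how such weighting can skew individual outcomes, consider the toy sketch below. It is an illustration only, not the actual grading algorithm: the function name, the blending weight, and the numbers are all invented. It simply shows that when a prediction leans heavily on a school’s historical average, a strong student from a historically low-performing school is pulled down regardless of their own record.

```python
# Illustrative sketch only - not the real grading model.
# Blends an individual's score with their school's historical average.

def predicted_grade(student_score: float, school_historical_avg: float,
                    school_weight: float = 0.6) -> float:
    """Return a prediction that mixes the student's own score with the
    school's past average. A high school_weight means the school's history
    dominates the outcome, whatever the individual achieved."""
    return (1 - school_weight) * student_score + school_weight * school_historical_avg

# Two equally strong students (score 90) at different schools:
print(predicted_grade(90, school_historical_avg=88))  # ~88.8 - barely affected
print(predicted_grade(90, school_historical_avg=60))  # ~72.0 - heavily penalized
```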
Against the background of increasingly intelligent, self-learning systems and the rapid advancement of AI, the need for values, rules, and regulation is becoming urgent.5 At present, the strongest protection against bias, invasion of privacy, and other ethical issues is tight human control over AI systems.
“Ongoing research in AI is helping to develop methods for assessing and improving the fairness of AI systems. However, there are many complex sources of unfairness – some societal and some technical,” says Dirk Wacker, Director Technology and Innovation at G+D Corporate Technology Office. “It is not possible to fully remove bias from decision making, neither for AI nor for humans,” he adds. “The goal has to be to avoid bias-related suffering as much as possible and to agree on the trade-offs between fairness and efficiency.”
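As a deliberately simplified illustration of what “assessing fairness” can mean in practice, the sketch below computes one widely used metric, the demographic parity difference, on invented data. It is not a method endorsed in the source; the function, the group labels, and the toy credit decisions are assumptions made purely for illustration.

```python
# Illustrative sketch only: one common fairness check on invented data.

from typing import Sequence

def demographic_parity_difference(decisions: Sequence[int],
                                  groups: Sequence[str]) -> float:
    """Gap between the highest and lowest positive-decision rate across groups.

    decisions: 1 = favourable outcome (e.g. credit granted), 0 = unfavourable.
    groups:    group membership label for each decision.
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Toy example: eight automated credit decisions across two groups.
decisions = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5 - a large gap
```

A value near zero would indicate similar approval rates across groups; a large gap, as here, is the kind of signal that fairness assessments flag for human review, which is where the trade-offs Wacker describes have to be weighed.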