On 29 May 2020, the Australian government was forced to repay more than A$720 million (over €430 million at the time) to low-income Australians whom it had wrongly accused of having been overpaid welfare benefits.
Four years earlier, in what was to become known as the “Robodebt” scandal, an AI system set up by Services Australia had used an algorithm to cross-reference the fortnightly income individuals reported against an estimate obtained by averaging their annual tax-office income data evenly across fortnights. Where the algorithm identified a discrepancy, an autogenerated debt notice was sent to the individual, without any human check. Unfortunately, hundreds of thousands of these debt notices had been incorrectly calculated, causing distress for the individuals concerned as well as a costly outcome for the government.
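To make the flaw concrete, the following is a minimal, purely illustrative Python sketch, not the actual Robodebt code: the function names, figures, and tolerance threshold are all invented for this example. What it reproduces is the core problem with income averaging: annual income smoothed evenly across fortnights misstates the earnings of anyone who worked irregularly, so a naive comparison flags “unreported” income, and hence debts, that do not exist.

```python
# Illustrative sketch only; all names, figures, and the threshold are hypothetical.
FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """Smooth a yearly income total evenly across all fortnights."""
    return annual_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(self_reported: list[float], annual_income: float,
                       tolerance: float = 1.0) -> list[int]:
    """Return the fortnights where the averaged estimate exceeds what the
    person actually reported: a naive trigger for a 'debt' notice."""
    estimate = averaged_fortnightly_income(annual_income)
    return [i for i, reported in enumerate(self_reported)
            if estimate - reported > tolerance]

# A casual worker earned $13,000, all in the first half of the year, and
# correctly reported $1,000 for each of those 13 fortnights, then nothing.
reported = [1000.0] * 13 + [0.0] * 13
print(flag_discrepancies(reported, annual_income=13_000.0))
# -> [13, 14, ..., 25]: averaging assigns $500 of phantom income to every
#    fortnight the person did not work, so each one is wrongly flagged.
```

Even though the worker reported every dollar accurately, the averaged estimate contradicts half of their reports, which is exactly the kind of miscalculation that, sent out without a human check, produced debts owed by no one.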
Not only did the Robodebt system breach existing laws; it also infringed key principles that should lie at the heart of any AI deployment. The accused individuals had no idea that an algorithm was being used to assess them. Nor did they know what data it had used to reach its decision, how that data had been gathered, or which criteria were applied in the analysis. In other words, people were being forced to prove their innocence without access to any of the data behind the algorithm’s decision.