Biden Orders “Trustworthy” AI Equity
Back when the effort was to end discrimination in bail and sentencing decisions by removing the decision-making from judges and introducing empirical factors, it seemed like a great step forward. Until, that is, it turned out that the use of the Sentence-O-Matic 1000 was just as “bad” as judges, if not worse. When reliance on empiricism failed to fix disparate outcomes, and instead further embedded them and gave cover to judges who could no longer be blamed, a fix was demanded.
The argument was that the same factors being used for empirical decision-making were the factors giving rise to disparate outcomes in the first place. The fix was simple: tweak the factors to produce the desired outcome. The only problem, of course, was that it was no longer empirical, but manipulated to create the impression of empiricism while producing the “right” outcomes.
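To make the mechanics concrete, here is a minimal sketch of what “tweaking the factors” amounts to; the factors, weights, and numbers are entirely hypothetical, not drawn from any actual risk-assessment tool.

```python
# Hypothetical illustration only: a toy "risk score" built from made-up
# factors and weights, not any real bail or sentencing instrument.

def risk_score(record, weights):
    """Weighted sum of factor values; higher means 'riskier'."""
    return sum(weights[factor] * value for factor, value in record.items())

# Weights as (in principle) derived from outcome data.
empirical_weights = {"prior_arrests": 2.0, "age_under_25": 1.5, "employed": -1.0}

# The same formula with weights adjusted after the fact to pull scores
# toward a preferred distribution of outcomes.
tweaked_weights = {"prior_arrests": 1.0, "age_under_25": 0.5, "employed": -2.0}

defendant = {"prior_arrests": 3, "age_under_25": 1, "employed": 0}

print(risk_score(defendant, empirical_weights))  # 7.5
print(risk_score(defendant, tweaked_weights))    # 3.5
```

Both versions produce a number that looks equally “empirical”; only the second was chosen because someone wanted the score it produces.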
Artificial intelligence wasn’t born of a desire to end discrimination, but of its own accord. It could be done, and so they did it. Except that brought back the old problem. What if AI returned results that were socially unacceptable or undesirable? What if someone asked an AI chatbot to name the ten best things about Hitler? There cannot, of course, be any “best things” about Hitler, and so the algos were written in such a way as to make AI refuse to answer. It was a dishonest response, and defeated the purpose of obtaining a stone-cold factual answer, but there were lines that AI was programmed not to cross. And there were few who felt the need to stand up for Hitler truthism.
President Biden has now issued an Executive Order establishing safety standards for AI that incorporates concerns about AI being used to further discrimination.
Advancing Equity and Civil Rights
Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people’s rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:
Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
There is no question that AI has enormous potential to do grave harm to civil rights and can certainly exacerbate many of the problems we’ve spent decades trying to fix. Surveillance? Crime forecasting? Predictive policing? The potential for abuse is mind-boggling. These things alone are exceptionally problematic, as we learned during the last go-round. AI doesn’t care about your rights. AI doesn’t care about cherished principles of liberty. AI doesn’t know if it got things right, as long as it satisfies its algorithmic requirements. There’s a 99% chance you were the murderer, except you’re the 1% and innocent? AI doesn’t give a damn.
But at the same time, note that the president didn’t limit his EO to civil rights, but included “equity.” Much like the bad old days when the realization hit home that the use of empirical factors in the Sentence-O-Matic 1000 resulted in disproportionate outcomes for certain races and genders, there is a high probability that the same will happen with AI, that it will reach results that are unpalatable to social justice advocates and fail to satisfy their notions of “equity,” whatever that means.
How would the government prevent AI, which is only using the cold, hard facts it finds, from producing outcomes that it deems inequitable? If a landlord were to inquire whether a particular person would be a good tenant, would AI be programmed to ignore evictions of black people so they aren’t rejected, while returning evictions of white people because that comported with equity? Putting aside the racial distinctions, what use is AI if it’s programmed to return false information because the truth wouldn’t produce “equity”?
And while the potential for egregious harms in criminal law is obvious, what does that leave us with?
“We are encouraged by President Biden’s executive order, which is an important step towards addressing the many dangers that artificial intelligence and automated decision-making systems pose to civil rights. These systems continue to reproduce and exacerbate inequities, bias, and discrimination in ways that undermine the fabric of our multiracial democracy. Addressing the civil rights consequences of AI requires a comprehensive approach that prevents AI harms to the livelihoods, privacy, and freedom of Black communities, including harms from unwarranted and biased intrusions by law enforcement.”
No one wants “unwarranted and biased intrusions by law enforcement,” but that goes to the reliability of AI, something we’re still far from achieving. Once we start tweaking AI to game its outcomes to comport with equity, can it ever achieve accuracy, or only the “accuracy” that social justice deems acceptable? And if that’s the case, are we not requiring that AI be created with an inherent political bias that will make the future of AI nothing more than the algorithmic equity police?
It is critical that AI be programmed not to exacerbate discrimination and to recognize and account for civil rights, even when they limit the path AI would otherwise take. But drawing the lines is going to be extremely difficult, if not impossible. Equity is another matter entirely: programming AI to ignore what’s real and reach the desired outcome. If that’s the case, then what use is AI, since we already know the outcomes equity demands? If the goal is ultimately to create a “trustworthy” AI, it can’t be trustworthy only insofar as it tells us what we want to hear.