Civil rights advocates demand oversight of automated decision-making systems

By: Dana DiFilippo - August 2, 2022 6:51 am

Governments rely on automated systems to decide things like health care eligibility and pretrial detention. Civil libertarians are concerned. (Photo by David Becker/Getty Images)

If you’re a domestic violence survivor who needs public benefits, a developmentally disabled adult who needs services, or a child who needs Medicaid-funded private nurses in New Jersey, chances are an algorithm played a pivotal role in whether you got those services.

Governments have increasingly relied on automated systems instead of people to make bureaucratic decision-making more efficient. But advocates at the American Civil Liberties Union of New Jersey worry some decisions are too important to be left to machines rather than human judgment. Algorithms, they argue, can perpetuate biases and leave people little recourse to contest denials.

So the civil rights group has launched a new initiative, called the Automated Injustice Project, to investigate and challenge how the state uses artificial intelligence, a practice they say needs more transparency and oversight.

“When you flatten the evaluation of a person’s circumstances to a questionnaire or a quantitative assessment, you’re forcing people to only fit one model of, say, what a perfect domestic violence survivor is, or what a person who needs 24 hours of nursing care is, when everyone’s situations are very individualized and unique, and more open-ended, thoughtful assessments might better capture it,” said Dillon Reisman, the ACLU-NJ researcher heading up the project.

Careful auditing is needed to ensure a system has the impact it should, and not a harmful one, Reisman added. But advocates have no way to measure the impact of automated assessments, because they are rooted in secretive software that isn’t subject to public scrutiny.

Advocates already know some algorithms are problematic, including facial recognition technology, which police around the country sometimes use to identify suspects and find missing people. In New Jersey, where the technology has resulted in misidentifications, the Attorney General’s Office is now drafting a statewide policy on it.

The ACLU has examined four ways New Jersey uses algorithms:

  • Facial recognition. The ACLU has called for a total ban on any law enforcement use of this technology, which uses biometrics to map facial features from people caught on camera, and then compares the results to a database of known faces to hunt for a match (a simplified sketch of this matching step appears after this list). Sen. Nia Gill (D-Essex) also has introduced several bills to limit its use.
  • Medicaid budgeting. New Jersey uses algorithms to ration care and decide how much home- and community-based care adults with developmental disabilities should get, as well as how many hours of in-home private-duty nursing children with complex health needs can receive through Medicaid. Such rote decision-making was one of the biggest problems a state watchdog identified in a report earlier this year. Gill also has a bill that would prohibit discrimination by an automated decision-making system in the provision of health care services.
  • Pretrial risk assessments. Since 2017, judges have relied on computer-generated reports for “public safety assessments” to gauge whether a defendant is likely to be rearrested or fail to appear in court — and therefore whether they should be incarcerated before trial.
  • Domestic violence risk assessments. New Jersey uses a 118-entry questionnaire to measure the severity of a survivor’s experiences and future risk of more violence, creating a “score” used to determine their eligibility for benefits (a hypothetical sketch of this kind of scoring also appears after this list).
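
For readers curious what “comparing faces to a database of known faces” looks like in practice, here is a deliberately simplified sketch, not any vendor’s or agency’s actual code. It assumes a separate model has already converted each face into a vector of numbers (an “embedding”); the function name, gallery, and threshold below are all hypothetical.

```python
# Hypothetical illustration only, not any agency's or vendor's real system.
# Assumes faces were already converted to numeric "embedding" vectors
# by a separate face-recognition model.
import numpy as np

def best_match(probe: np.ndarray, gallery: dict[str, np.ndarray],
               threshold: float = 0.6) -> str | None:
    """Compare one unknown face embedding against a database of known faces."""
    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        # Cosine similarity: 1.0 means the two vectors point the same way.
        score = float(np.dot(probe, ref) /
                      (np.linalg.norm(probe) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    # Below the cutoff the system reports no match; above it, a name.
    return best_name if best_score >= threshold else None
```

Misidentifications like those reported in New Jersey occur when the wrong gallery face happens to score above that cutoff, which is exactly the kind of design choice advocates want audited.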
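
The risk “scores” in the last two items typically boil a questionnaire down to a weighted sum compared against a cutoff. New Jersey’s actual questions, weights, and thresholds are not public, so everything below is invented solely to illustrate the flattening Reisman describes.

```python
# Hypothetical questionnaire scoring. The questions, weights, and threshold
# are invented for illustration; New Jersey's real instrument is not public.
WEIGHTS = {
    "prior_incidents": 3,    # count of prior reported incidents
    "threats_made": 5,       # yes/no questions scored as 1 or 0
    "access_to_weapons": 7,
    "recent_separation": 2,
}

def risk_score(answers: dict[str, int]) -> int:
    """Collapse a person's answers into a single number."""
    return sum(WEIGHTS[q] * answers.get(q, 0) for q in WEIGHTS)

# One fixed threshold then turns individualized circumstances into yes or no.
ELIGIBLE_THRESHOLD = 10
print(risk_score({"prior_incidents": 2, "threats_made": 1}))  # 11: eligible
```

Anything that makes a case unique but falls outside those few fields simply never enters the decision, which is the core of the advocates’ objection.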

Advocates hope state legislators will ultimately take up their call for algorithmic accountability, as policymakers in Washington, D.C., and Washington state have tried to do.

At the federal level, Sen. Cory Booker and several other Democratic lawmakers introduced similar legislation in February that would require public disclosure of automated decision-making systems and impact assessments for bias, efficacy, and other factors.

Automated systems can work, Reisman said. But until their impact is fully known, he added, authorities shouldn’t rely on them to decide matters as important as a person’s liberty and health care.

“It’s time that we started actually investigating how these systems work and pushing back against them where they go wrong,” Reisman said.


Dana DiFilippo

Dana DiFilippo comes to the New Jersey Monitor from WHYY, Philadelphia’s NPR station, and the Philadelphia Daily News, a paper known for exposing corruption and holding public officials accountable. Prior to that, she worked at newspapers in Cincinnati, Pittsburgh, and suburban Philadelphia and has freelanced for various local and national magazines, newspapers and websites. She lives in Central Jersey with her husband, a photojournalist, and their two children.

New Jersey Monitor is part of States Newsroom, the nation’s largest state-focused nonprofit news organization.
