
Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks

I am currently racing against the clock to catch up on all of my remaining book reviews for the year. The temptation presents itself: could I just post an AI review of Virginia Eubanks’ Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor? Of course not! If you’ve learned nothing else this year from the non-fiction reviews I’ve been posting, it’s that there is significant reason to distrust the automation of thinking, and really, I would be going against the central premise of Eubanks’ book: we need more human intervention, not less.

In Automating Inequality, Eubanks outlines the ways in which technology is used to, essentially, engage in social engineering. The book is a mix of testimonials, public records, anecdotes, case studies, and histories, exploring the ways in which the increasing automation of public services has actually served to reduce services and potentially replicate—even exacerbate—the problems the state claims to want to reduce. She makes her case with reference to three main places: Los Angeles, Indiana, and Allegheny County in Pennsylvania. The approach allows her to see both the forest and the trees with respect to big data and social services.


Overall, the book was somewhat illuminating, but it maybe leans a little too heavily on the particulars rather than the broader systems that motivate the transition to so-called “modernization.” I place the word in quotation marks because, as Eubanks notes about Indiana, the attempt to have IBM “modernize” social service programs ultimately resulted in worse performance and worse access. The situation became so volatile that IBM and the State of Indiana sued one another. The State claimed that IBM misrepresented its ability to modernize complicated social service programs, failed to meet the expectations outlined in its contract, and created a falsely aggrandized perception of its performance. When comparing counties that used more traditional means of providing social services against those that IBM “modernized,” automated counties reportedly “lagged behind in every area of performance: timeliness, backlogs, data integrity, determination errors, and number of appeals requested.”


It’s oddly satisfying knowing that IBM’s attempt at a privatized technocracy over public data did not meet its own ends, but the case gets even worse because of its human impacts. Purportedly, IBM’s “coalition workers were so far behind in processing applications that they would often recommend denial of an application to make their timeliness numbers look better but then would tell the applicant to appeal the decision.” It’s an instance of numbers being cast as more significant than the humans the figures actually affect. Earlier in the book, Eubanks discusses how difficult it was for people to access services in the first place. Applications might be denied for failure to send in a particular form or to sign on a particular line: any kind of minor error could result in a denial. Yet the systems are often so overloaded that they won’t tell you why your application for support has been declined. Imagine sending in all of your paperwork only to have IBM deny you, and not tell you why, because they can’t keep up. What they would do instead is deny you and tell you to appeal, so that while you’re filing your appeal they can catch up, process the application, and confer its benefits before the appeal’s hearing date. It’s a clear manipulation of the numbers at your expense, which is pretty disgusting.


Obviously, corporate interests are a factor here: they’re trying to make money off of necessities, which harms people in the moment. Meanwhile, in Allegheny County, Pennsylvania, officials developed a predictive model for administering child welfare and making screening decisions. The predictive nature of the model aimed to reduce the need for human agents to consult on cases. In each of Eubanks’ case studies (Indiana, Los Angeles, and Allegheny), she notes that their “technologists and administrators explained [...] that new high tech tools in public services increase transparency and decrease discrimination. They claimed that there is no way to know what is going on in the head of a welfare case worker, a homeless service provider, or an intake call screener without using big data to identify patterns in their decision-making.” The claim, first of all, exposes their ignorance: technology is never value-neutral. Removing human agents does not remove discrimination, though it does make it harder to trace. There are any number of ways that biases creep into tech (cf. Algorithms of Oppression by Safiya Noble), and it’s naive to think otherwise.


In response to this philosophy, Eubanks offers a passage that beautifully encapsulates the problem. She writes the following:


“I find the philosophy that sees human beings as unknowable black boxes and machines as transparent deeply troubling. It seems to me a worldview that surrenders any attempt at empathy and forecloses the possibility of ethical development. The presumption that human decision making is opaque and inaccessible is an admission that we have abandoned a social commitment to try and understand each other.”


I think when it comes down to it, this is the heart of the issue. Human beings are allowing technology to alienate them from themselves. As much as people want to claim that AI is the future, it is not capable (in my view) of the ethical nuance human beings are capable of, and it will still rely on false metrics to make its decisions. The inversion of which of us is the black box is a nice metaphor for considering the importance of these issues: the more we automate, the more difficult it is to explain processes and ensure that people get the supports they need. Following the passage above, Eubanks quotes from an interview that puts it all in simple language: “I trust the case workers more. You can talk and be like, ‘You don’t see the bigger problems?’” Accessing supports is already an accessibility issue, and it is only exacerbated by including the mediating force of technology: where is an algorithm’s complaint department, especially when we’ve decided it knows all?


Even in terms of the application of these services, we face problems. The Allegheny Family Screening Tool (AFST) is designed to see the use of public resources as “a sign of weakness, deficiency, and even villainy.” The predictive model attributed a higher score to families that had accessed social services before, which put them under greater scrutiny and potentially left them unable to access support services. It disincentivizes people from seeking support and consequently increases the risk of abuse or neglect. Accessing supports leads to more scrutiny, which leads to withdrawal, which leads to lack of connection. It creates a perfect storm. In Eubanks’ words:


“Targeting high-risk families might lead them to withdraw from networks that provide services, support, and community. [...] The largest risk factors for the perpetration of child abuse and neglect include social isolation, material deprivation, and parenting stress, all of which increase when parents feel watched all the time, lose resources they need, suffer stigma, or are afraid to reach out to public programs for help. A horrible irony is the AFST might create the very abuse it seeks to prevent. It is difficult to say a predictive model works if it produces the outcome it is trying to measure.”


It’s pretty clear to see the way that social services intended to provide support are co-opted by systems in which we place blind trust (cf. The Technological Society by Jacques Ellul). It raises a larger question about technology, as well: does it do what we want it to do? Or does it create the context for what we have already done? The last phrase in the paragraph above—that the predictive model creates what it is trying to measure—is the same ouroboros I wrestle with when it comes to generative AI. Services like ChatGPT can gather what has already been done and spit it out: no original thinking has been produced. As teachers are encouraged to use ChatGPT and teach children how to use it, I fear we’re creating the outcome we wanted to measure. We lose faith in ourselves as a species so quickly that it’s pretty dispiriting to see the lack of genuine, original thinking. I fear that our use of technology is continually producing a vicious cycle. In Eubanks’ words again: “Human discretion is the discretion of the many: flawed and fallible, yes, but also fixable.” Meanwhile, for AI, “the automated discretion of predictive models is the discretion of the few.” Seeking, receiving, and providing resources all become more opaque, increasing the demand for a system to save us, which only exacerbates the problem.


Moreover, Eubanks points out the degrading effect on real, living people. Predictive models and automation deny people their basic humanity: “poor and working class families feel forced to trade their rights to privacy, protection from unreasonable searches, and due process, for a chance at the resources and services they need to keep their children safe.” Rights and resources ought not to be mutually exclusive. People should be able to obtain what they need without being scrutinized (especially when you consider that “welfare fraud” amounts to a comparatively small sum in the grand scheme of things). Instead, we engage in “poverty profiling” and target people not based on their actual behaviours but on their characteristics (i.e., living in poverty). In yet another effective turn of phrase, Eubanks notes that “the model confuses parenting while poor with poor parenting” and that “the AFST views parents who reach out to public programs as risks to their children.” Essentially, if a family has accessed services, it gets a higher score, which means Child Services is more likely to respond to calls about the home (however unfounded), and then is more likely to return again and again on subsequent calls. Those who dare parent while poor face the state’s punishment.
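To make that loop concrete, here is a deliberately toy sketch of the dynamic. To be clear, this is my own hypothetical illustration, not the actual AFST or anything Eubanks publishes: the single feature, the weight, and the threshold are all invented, and the only thing taken from the book is the structural point that counting prior contact with public services as risk lets the score feed itself.

```python
# Hypothetical illustration only: invented feature, weight, and threshold.
# The real AFST uses many more variables; this just shows the feedback loop.

def risk_score(prior_service_contacts: int, other_risk: float) -> float:
    """Toy score that treats prior use of public services as a risk signal."""
    SERVICE_CONTACT_WEIGHT = 0.6  # assumed weight, chosen for illustration
    return SERVICE_CONTACT_WEIGHT * prior_service_contacts + other_risk


def simulate(years: int = 5, screen_in_threshold: float = 1.5) -> None:
    """A family asks for help once; every screen-in adds another recorded contact."""
    contacts = 1      # one call to a public program for help
    other_risk = 1.0  # everything else about the family held constant
    for year in range(1, years + 1):
        score = risk_score(contacts, other_risk)
        screened_in = score >= screen_in_threshold
        print(f"Year {year}: contacts={contacts}, score={score:.1f}, screened in={screened_in}")
        if screened_in:
            # The investigation itself becomes another record of "service involvement,"
            # which the model reads as additional risk the next time around.
            contacts += 1


if __name__ == "__main__":
    simulate()
```

Nothing about the family’s actual circumstances changes in this toy run, yet the score climbs every year, which is exactly the ouroboros Eubanks warns about: the model manufactures the very signal it claims to detect.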


Ultimately, Eubanks’ book is illuminating in several ways and offers some effective case studies for showing the problems of automation. I wouldn’t say this is the masterwork on automation, but it is a good example of how big data has real-world implications. Now all it needs is a follow-up book in which Eubanks examines the tools for dismantling our impulse towards automaticity. There are so many factors that run parallel or are interconnected when it comes to society and technology that placing these social services within a broader framework would prove both illuminating and useful for moving forward productively and, moreover, more humanely.


Wishing that you may all access the services you need while staying free of the algorithm—don’t forget to help others escape its clutches, too, and to demand that public and social services be offered fairly to all those who need them. We can do better.
