Public benefits infrastructures are going digital, and there are clear upsides. In many states, digital technologies have begun delivering on promises of making service delivery easier and faster. We can and should continue to find ways to use digital technologies and data-driven systems to reimagine and improve the interactions between people and public benefit systems.
But we should also be cautious. Dynamics of inequality, such as racism, sexism, and ableism, have shaped our institutions, and digital technology risks reinforcing rather than mitigating those inequalities.
This blog argues that privacy is intertwined with these inequalities. Whose right to privacy is respected and whose is stripped away has tracked familiar divides of race, class, and gender, making the right to privacy an uneven promise.
The Class Differential in Privacy
One clear reality in the United States is that people with low incomes have little to no protection under existing privacy laws, embedding what poverty law scholar Michele Gilman has described as a “class differential” in privacy.
Take, for example, Sanchez v. County of San Diego (2006). In that case, the ACLU challenged the constitutionality of Project 100%, a program run by the San Diego County District Attorney’s office to reduce fraud in public benefits programs. Under the program, applicants for public benefits had to submit to unannounced home visits by law enforcement agents, who would examine the home, including searching cabinets, closets, and even garbage. Applicants who refused entry were generally denied benefits.
The Ninth Circuit upheld the program, reasoning that “a person’s relationship with the state can reduce that person’s expectation of privacy, even within the sanctity of the home.” Dissenting circuit judges called the ruling “nothing less than an attack on the poor.” In a striking passage, the dissenters wrote:
“The government does not search through the closets and medicine cabinets of farmers receiving subsidies. They do not dig through the laundry baskets and garbage pails of real estate developers or radio broadcasters. The overwhelming majority of recipients of government benefits are not the poor, and yet this is the group we require to sacrifice their dignity and their right to privacy. This situation is shameful.”
This asymmetry in privacy protections has deep roots in decades-old racist and sexist stereotypes, such as the “welfare queen.” The long-lived rhetorical paranoia about “welfare fraud” continues to drive invasive, burdensome verification rules that fall hardest on women and communities of color. For example, the work of legal scholar and anthropologist Khiara Bridges shows that mothers with low incomes face “informal disenfranchisement” of their right to privacy.
How the Class Differential Gets Built Into Digital Infrastructures
Scholars in science and technology studies (STS) have long recognized that new technologies can further entrench existing dynamics of power. A key concern, then, is whether and how the class differential becomes further embedded in digital public benefits infrastructure.
Coordinated entry (CE) systems, which screen people experiencing homelessness for services, are one example of the heightened privacy risks in digitalized systems. CE systems match people to housing resources and prioritize access based on risk factors, a kind of match.com for homelessness services. In places such as Los Angeles, individuals in need of services are required to fill out surveys, some of which request incredibly invasive information: sexual history, trauma, and substance use. The data is stored in a government database and shared with as many as 168 agencies, including the police department, for up to seven years. And while the invasion of privacy is certain, the CE system does not guarantee resources: tens of thousands of people remain in the databases without ever receiving services.
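To make the mechanics concrete, here is a minimal, hypothetical Python sketch of the risk-scored prioritization that CE-style systems perform. The fields, weights, and scoring below are invented for illustration; real assessments collect far more, and far more sensitive, information.

```python
from dataclasses import dataclass

# Hypothetical illustration only: field names, weights, and scoring are
# invented, not taken from any real coordinated entry system.

@dataclass
class Applicant:
    name: str
    trauma_history: bool   # sensitive survey answer
    substance_use: bool    # sensitive survey answer
    months_homeless: int

def risk_score(a: Applicant) -> int:
    """Collapse sensitive survey answers into a single priority score."""
    score = min(a.months_homeless, 24)       # cap chronicity at 24 points
    score += 10 if a.trauma_history else 0
    score += 10 if a.substance_use else 0
    return score

def prioritize(applicants: list[Applicant], units: int) -> list[Applicant]:
    """Rank everyone by score; only the top `units` people are matched.
    Everyone else's data stays in the database regardless of outcome."""
    return sorted(applicants, key=risk_score, reverse=True)[:units]

people = [
    Applicant("A", trauma_history=True, substance_use=False, months_homeless=18),
    Applicant("B", trauma_history=False, substance_use=True, months_homeless=6),
    Applicant("C", trauma_history=True, substance_use=True, months_homeless=30),
]
# With one unit available, two of the three people surveyed receive nothing,
# yet all three records persist.
print([a.name for a in prioritize(people, units=1)])
```

The sketch captures the asymmetry at the heart of the privacy concern: sensitive data is collected from everyone who is screened, while resources flow only to those at the top of the ranking.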
Most worrying is the prospect of vast surveillance powered by digital technologies and linked datasets. In recent months, the Trump administration has sought to create a single, centralized database of personal information drawn from across federal agencies, a stunning betrayal of the ethos that drove the Privacy Act of 1974. Observers have noted that such a vast trove of inter-agency data, potentially including tax records, immigration files, social media data, and even geolocation data from commercial brokers, could dramatically expand state surveillance powers.
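To see why linked datasets are so much more revealing than any single one, consider this hypothetical sketch of record linkage. All identifiers, records, and field names are invented, and real inter-agency linkage involves entity resolution far messier than a shared key.

```python
# Hypothetical illustration: joining separate agency records on a shared
# identifier (here, an invented "person_id") assembles a profile that no
# single agency held on its own.
tax_records = {"p1": {"income": 18_500}}
immigration_files = {"p1": {"status": "pending"}}
location_data = {"p1": {"last_seen": "downtown shelter"}}

def linked_profile(person_id: str) -> dict:
    """Merge per-agency records into one consolidated profile."""
    profile: dict = {"person_id": person_id}
    for source in (tax_records, immigration_files, location_data):
        profile.update(source.get(person_id, {}))
    return profile

# The merged record reveals income, immigration status, and location at once.
print(linked_profile("p1"))
```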
Digital Governance at a Crossroads
In 2019, a UN expert on poverty and human rights remarked that the world had entered an “era of digital governance.” The observation came with a warning: without course correction, the report cautioned, nations risked “stumbling zombie-like into a digital welfare dystopia,” one in which digital technologies are used to surveil, target, harass, and punish participants in public benefits programs. Today, federal, state, and local governments and agencies are still experimenting with emerging technologies. This early stage is an opportunity: there is still room to shape what emerging digital architectures harden into, and to ensure that we correct differentials in privacy rather than ossifying them into technical systems.