Ruha Benjamin, Associate Professor of African American Studies at Princeton, published Race After Technology: Abolitionist Tools for the New Jim Code in 2019. Where Birhane's work examines the colonial dynamics of AI deployment in the Global South, Benjamin focuses on the reproduction of racial hierarchy through algorithmic systems within societies that already have deep histories of racial classification and control.
Benjamin's central concept is the New Jim Code: a range of discriminatory designs, some that 'explicitly work to amplify hierarchies, many that ignore and thus replicate social divisions, and a number that aim to fix racial bias but end up doing the opposite,' as Benjamin summarizes. The 'Jim' references the Jim Crow laws that enforced racial segregation in the American South after Reconstruction, and the parallel is precise: new technologies reproduce segregation and exclusion, but in ways that are coded as neutral, scientific, and objective rather than as explicitly racial.
The core argument, as scholars in this tradition put it, is that 'structural racism conditions contemporary technological classification systems, perpetuating already separated and stratified societies along racialized lines.' Benjamin is not claiming that programmers are deliberately building racist systems. She is making a more subtle and more troubling argument: that structural racism is embedded in the historical data on which machine learning systems are trained, in the categories and classifications used to frame problems, and in the social contexts in which systems are deployed. A hiring algorithm trained on historical employment data will encode and perpetuate the racial biases in that data. A predictive policing algorithm deployed in communities that have been over-policed will reinforce that over-policing. A facial analysis system trained primarily on light-skinned faces will misclassify dark-skinned faces at substantially higher rates, as Joy Buolamwini and Timnit Gebru's Gender Shades research documented.
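To make the mechanism concrete, consider a deliberately simplified sketch. The data, feature names, and numbers below are entirely invented for illustration and are not drawn from Benjamin's or Buolamwini and Gebru's research: a hiring model is trained on historical decisions that were biased against one group, the group label itself is withheld from the model, and yet the disparity reappears in its predictions, because a formally 'neutral' feature carries the historical pattern.

```python
# Minimal illustrative sketch (synthetic data, invented numbers): a model trained
# on biased historical hiring decisions reproduces the disparity even though it
# never sees the group label, because a correlated proxy feature carries the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Group membership (0 = group A, 1 = group B) and a qualification score
# drawn from the same distribution for both groups.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# A 'neutral-looking' feature (think neighbourhood code) correlated with group
# because of historical segregation: the proxy through which bias travels.
neighbourhood = 0.8 * group + rng.normal(0, 0.5, size=n)

# Historical hiring decisions: equally skilled members of group B were hired
# less often. This is the bias baked into the training labels.
hired_historically = (skill - 1.0 * group + rng.normal(0, 0.5, size=n)) > 0

# Train on 'neutral' features only; the group column is never given to the model.
X = np.column_stack([skill, neighbourhood])
model = LogisticRegression().fit(X, hired_historically)
predicted_hire = model.predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    rate = predicted_hire[group == g].mean()
    print(f"{name}: predicted selection rate = {rate:.2f}")
# The model never sees 'group', yet its selection rates diverge: the proxy
# feature lets it reconstruct and perpetuate the historical pattern.
```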
The concept of the New Jim Code also highlights how the appearance of technical neutrality is itself a mechanism of domination. Jim Crow laws were explicit: they said in plain language that certain things were prohibited or required for people of a particular race. The New Jim Code is implicit: the algorithm appears to be making decisions on the basis of neutral criteria (credit scores, risk assessments, behavioral predictions) while in fact encoding racial hierarchies that the appearance of neutrality makes harder to challenge or even identify.
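A second, even simpler sketch illustrates the point about apparently neutral criteria, again with invented numbers rather than real data: a single cutoff applied identically to everyone still produces sharply different outcomes when the underlying scores already reflect accumulated inequality, and nothing in the rule itself ever mentions race.

```python
# Illustrative sketch (all numbers invented): the same 'neutral' cutoff applied
# to everyone yields very different approval rates when the score distributions
# already differ because of historical disadvantage.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical credit-score-like variable; the gap in means stands in for
# accumulated structural disadvantage, not any difference in behaviour.
score_group_a = rng.normal(690, 50, size=10_000)
score_group_b = rng.normal(640, 50, size=10_000)

CUTOFF = 680  # the same threshold for everyone: formally neutral

print("approval rate, group A:", (score_group_a >= CUTOFF).mean())
print("approval rate, group B:", (score_group_b >= CUTOFF).mean())
```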
Benjamin draws on a tradition of critical scholarship that includes Safiya Umoja Noble's Algorithms of Oppression (2018), which demonstrated that Google search results systematically sexualized and degraded Black women when users searched for 'Black girls,' while returning markedly less degrading results for comparable searches on 'white girls.' Noble's work, like Benjamin's, examines 'the intersections of AI with colonial, gender and racial relations, demonstrating the necessity of incorporating a broader set of perspectives for technological progress.'
The abolitionist framing in Benjamin's subtitle is significant. She is not calling for the reform of existing algorithmic systems but for a more fundamental reconsideration of what technologies we build, for whom, in whose interests, and subject to whose control. The abolitionist tradition, she argues, provides intellectual resources for imagining alternatives that go beyond the incrementalist fixes typically proposed in AI ethics discourse: better training data, more diverse teams, fairness metrics. These reforms address symptoms while leaving the underlying structure of power intact.
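For readers unfamiliar with what 'fairness metrics' refers to here, the sketch below computes two widely cited ones, demographic parity difference and equal-opportunity difference, on invented audit data. The function names and numbers are assumptions made for illustration; the point is only that such metrics measure a model's outputs after the fact, which is exactly the kind of symptom-level fix Benjamin argues leaves the underlying structure of power intact.

```python
# Illustrative sketch of two common fairness metrics on invented audit data:
# demographic parity difference and equal-opportunity (true positive rate) difference.
import numpy as np

def demographic_parity_diff(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equal_opportunity_diff(pred, label, group):
    """Difference in true positive rates between the two groups."""
    tpr = lambda g: pred[(group == g) & (label == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Hypothetical audit data: true outcomes, model predictions, group membership.
rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1_000)
label = rng.integers(0, 2, size=1_000)
# A model that is slightly more willing to predict the positive class for group 0.
pred = (rng.random(1_000) < np.where(group == 0, 0.6, 0.4)).astype(int)

print("demographic parity difference:", demographic_parity_diff(pred, group))
print("equal opportunity difference:", equal_opportunity_diff(pred, label, group))
```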
Quick reflection
Benjamin argues that the appearance of technical neutrality in algorithmic systems is itself a mechanism of racial domination: the algorithm looks objective while encoding structural racism in ways that are harder to challenge than explicit racial laws. Think about an algorithmic system you interact with or that affects people you know, whether in hiring, credit, criminal justice, education, or content moderation. Can you identify ways in which the system's apparent neutrality might be masking structural inequalities embedded in the data or the categories it uses? And what would it mean, in practice, to address those inequalities without simply replacing one set of biases with another?