Disabling Intelligences: The Eugenics of Artificial Intelligence and the Marginalization of Disabled Individuals
Executive Summary
The book Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI by Rua M. Williams presents a foundational critique of artificial intelligence systems by tracing their ideological roots directly to the eugenics movement of the early twentieth century.
Williams argues that contemporary AI development is not merely influenced by ableist assumptions but is fundamentally structured by the same logic that animated eugenicist arguments about human worth, intelligence measurement, and social fitness.
The intelligence quotient test, developed ostensibly for educational diagnosis, became the instrument through which eugenicists in the United States justified the forced sterilization of over 65,000 citizens, predominantly poor people and people of color deemed genetically unfit on the basis of low test scores. These historical eugenic practices were not anomalies but expressions of widely accepted scientific and social doctrine.
Williams traces how these doctrines persist within contemporary artificial intelligence systems through their encoding of normative assumptions about human cognition, learning, ability, and what constitutes intelligence itself. Current AI systems deployed across healthcare, education, and criminal justice reproduce eugenic logic by systematizing the marginalization of disabled people, measuring human worth through reductive quantification, and advancing narratives of enhancement that implicitly construct disability as a problem to be eliminated.
The book argues that understanding AI’s eugenic foundations is essential to recognizing that algorithmic bias is not a technical problem requiring better data or fairer algorithms, but rather a manifestation of deep ideological commitments to ableism, eugenics thinking, and the technological optimization of human populations. Only through this critical historical understanding can societies begin to construct alternatives to ableist, eugenic AI systems.
Foreword
The Unexamined Past That Haunts the Present
When computer scientists speak of artificial intelligence achieving superintelligence, scaling general intelligence, or optimizing human potential through technological enhancement, they employ language and frameworks that, upon historical examination, contain troubling echoes of eugenic ideology.
The emergence of large language models, transformer architectures, and autonomous AI agents represents not merely a technical innovation but the continuation of a long historical project of quantifying human value, ranking populations by purported cognitive capacity, and advancing technologies ostensibly designed to improve humanity.
Williams’s work insists that to understand contemporary artificial intelligence, one must confront a discomfiting historical reality: the intellectual lineages connecting modern AI systems to the pseudoscientific racism and ableism of twentieth-century eugenics are not metaphorical but material and continuing.
The intelligence quotient test exemplifies this connection. Alfred Binet developed his intelligence scale in France as a tool to identify schoolchildren requiring additional educational support. Yet within years, American eugenicists had repurposed this tool into an instrument of systematic oppression.
Researchers including Henry Goddard and Lewis Terman argued that intelligence was fundamentally fixed, heritable, and unequally distributed across populations—claims that scientific consensus has since thoroughly rejected, yet whose assumptions persist within contemporary algorithmic systems.
Goddard identified over 15,000 schoolchildren as feeble-minded, recommending their forced segregation and sterilization. The Supreme Court’s 1927 decision in Buck v. Bell, upholding forced sterilization on the basis of perceived low intelligence, codified this eugenic logic into law.
The ruling, which remains on the books today, resulted in the coerced sterilization of over 65,000 Americans through the 1970s, the vast majority of them poor people, disabled people, people of color, and women whose autonomy was irrelevant to eugenic policy objectives.
These historical horrors might seem safely distant from contemporary technological development. Yet Williams demonstrates that the fundamental logics animating AI research and deployment carry forward the core eugenic commitments: the belief that human populations should be quantified and ranked according to narrow metrics of intelligence; the conviction that inequality reflects immutable differences rather than structural barriers; the drive to optimize human populations by identifying and removing those deemed deficient; the alliance between claims of scientific objectivity and deeply ideological hierarchies of human worth.
The Genealogy of Intelligence: From IQ Tests to Algorithmic Systems
The historical foundations of contemporary artificial intelligence’s ableist character trace directly to the eugenics movement and its obsession with intelligence measurement.
The eugenics movement, which emerged in the late nineteenth century as a pseudoscientific ideology promoting controlled human breeding to improve genetic populations, found in the intelligence test a perfect instrument for legitimating its claims of human inequality. Early twentieth-century eugenicists did not invent the idea that humans possess innate, fixed, and unequally distributed cognitive capacity.
This idea had deep roots in racial and class ideologies that preceded them. But eugenicists weaponized the newly emerging science of mental testing to advance pseudoscientific justifications for state-mandated population control. When Alfred Binet’s intelligence scale was adapted for use in America, it was immediately corrupted from its original purpose of identifying children needing educational support into a mechanism for ranking human populations and justifying their exclusion, segregation, and sterilization.
The intellectual architecture of eugenics rested on several foundational claims: that intelligence is a single, measurable quantity; that this quantity is largely determined by heredity; that populations differ significantly in average intelligence; that differences in measured intelligence justify differences in social power and reproductive rights; that societies should be reorganized to concentrate reproductive capacity among the genetically superior.
Each of these claims has been systematically debunked by contemporary genetics, psychology, and anthropology. Yet in modified and often obscured forms, all persist within contemporary artificial intelligence systems. The reductionist equation of human cognitive complexity with a single metric—whether IQ points or algorithmic outputs—continues to structure how AI systems assess, categorize, and rank humans.
The parallel between historical IQ testing and contemporary AI deserves sustained attention. The IQ test claimed to measure pure, native intelligence without regard to education, social advantage, or cultural context.
This claim of objectivity—that the test measured something fundamental and unchangeable about individuals—provided the epistemological justification for using test results to make consequential decisions about who should be allowed to reproduce, who should be institutionalized, who should be excluded from education and opportunity.
Contemporary AI systems make strikingly similar claims. Machine learning algorithms are represented as objective, data-driven, and free from human bias (or at least more objective than human decision-makers), detecting underlying patterns in data without regard to social context or ideology. Yet just as IQ tests encoded the prejudices of their developers and the societies that produced them, contemporary AI systems encode the historical inequalities, discriminatory practices, and ableist assumptions embedded in their training data.
The eugenic movement targeted disabled people with particular intensity. Intellectual disabilities, psychiatric conditions, deafness and hearing differences, physical disabilities, and other conditions deemed to reduce a person's economic or reproductive value were identified as targets for elimination through forced sterilization, institutionalization, and genocide.
The measurement of intelligence served to operationalize these eugenic commitments. Those deemed unintelligent—a category that overlapped substantially with disabled people—were subjected to eugenic interventions. Contemporary AI systems perpetuate this logic by constructing algorithmic systems that measure, categorize, and intervene in human populations, often with particular consequences for disabled people.
The Machinery of Modern Inequality: How AI Embeds Ableism
Contemporary artificial intelligence systems embedded across healthcare, education, criminal justice, and employment reproduce eugenic logics through multiple mechanisms of what Williams and collaborating scholars term sociotechnical ableism—the way ableist assumptions become encoded into the technical architecture of AI systems.
This encoding operates across several integrated levels. At the foundational level, the datasets on which AI systems are trained systematically underrepresent disabled people and people from other marginalized communities.
The absence or underrepresentation of disabled people in training data means that AI systems are optimized for non-disabled people while failing or misclassifying disabled people. A machine vision system trained predominantly on images of non-disabled people will perform poorly on the faces of people with visible disabilities or facial differences.
A natural language processing system trained on text produced primarily by non-disabled people will misunderstand or fail to recognize patterns in disabled people’s communication. This underrepresentation is not incidental but structural—disabled people are often explicitly excluded from datasets, either because researchers do not think to include them or because disabled people refuse to provide data to systems designed to harm them.
At the algorithmic level, systems trained on biased data perpetuate and amplify the biases present in historical records. A healthcare algorithm trained on historical patterns of healthcare spending will learn that Black patients are healthier than White patients at equivalent disease burdens, a pattern that reflects not actual health but the historical reality that Black patients have systematically received less healthcare investment.
An algorithm learning from this biased data will then recommend allocating additional care to White patients at equivalent risk levels, perpetuating the historical discrimination embedded in the training data. Educational algorithms trained on decades of disciplinary records will learn that disabled students are more frequently disciplined than non-disabled students, a pattern reflecting institutional discrimination rather than actual behavior.
The algorithm will then recommend increased surveillance and discipline for disabled students, systematizing the discrimination.
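The spending-as-proxy mechanism described above can be made concrete with a small sketch. The following is a hypothetical illustration with invented numbers, not code from the book or any deployed system: two groups have identical distributions of true health need, but one group's historical spending is suppressed, so ranking patients by spending (the ranking a cost-predicting model reproduces) systematically under-selects that group relative to ranking by true need.

```python
# Hypothetical illustration of proxy-label bias: spending stands in
# for health need, but one group's care was historically under-funded.

# Two groups with identical true need (1..10), but group B's recorded
# spending is half of group A's at equal need.
patients = []
for need in range(1, 11):
    patients.append({"group": "A", "need": need, "spending": need * 100})
    patients.append({"group": "B", "need": need, "spending": need * 50})

def top_k(people, key, k=10):
    """Select the k patients a care program would enroll, ranked by `key`."""
    return sorted(people, key=lambda p: p[key], reverse=True)[:k]

# Ideal targeting ranks by true need; a model trained to predict
# historical spending reproduces the spending ranking instead.
enrolled_by_need = top_k(patients, "need")
enrolled_by_spending = top_k(patients, "spending")

b_count_need = sum(p["group"] == "B" for p in enrolled_by_need)
b_count_spending = sum(p["group"] == "B" for p in enrolled_by_spending)

print(b_count_need, b_count_spending)  # 5 3: group B loses slots at equal need
```

Nothing in the model is "about" group membership; the discrimination enters entirely through the choice of label, which is why better optimization of the same objective cannot remove it.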
The architecture of algorithmic systems compounds these biases through mechanisms that operate below the level of conscious awareness.
Learning analytics systems flag as problematic any deviation from normative patterns—irregular login times, nonlinear progress through material, patterns of help-seeking that differ from statistical norms. For neurodivergent and disabled students, these flagged patterns may reflect legitimate alternative approaches to learning, yet the system treats them as indicators of risk or disengagement.
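The flagging logic described above is easy to sketch. The following is a hypothetical toy example, with invented data and a made-up feature, not taken from the book or any real analytics product: a cohort of students with identical total engagement, one of whom works in intense bursts. A simple z-score rule on week-to-week variability, a stand-in for the "consistency" signals such systems often use, flags only the burst-pattern student, even though their total engagement is the same.

```python
from statistics import mean, stdev

# Hypothetical weekly login counts. Every student logs in 20 times in
# total; the "burst" student concentrates work into intense stretches.
cohort = {
    "s1": [5, 5, 5, 5],
    "s2": [4, 6, 5, 5],
    "s3": [5, 4, 6, 5],
    "s4": [6, 5, 4, 5],
    "s5": [5, 5, 4, 6],
    "burst": [14, 0, 0, 6],
}

def flag_atypical(cohort, z_cut=1.8):
    """Flag students whose week-to-week variability is a cohort outlier.

    Variability (stdev across weeks) models a 'consistency' feature:
    deviation from the cohort norm is treated as risk, regardless of
    total engagement.
    """
    variability = {s: stdev(weeks) for s, weeks in cohort.items()}
    mu = mean(variability.values())
    sd = stdev(variability.values())
    return {s for s, v in variability.items() if abs(v - mu) / sd > z_cut}

print(flag_atypical(cohort))  # {'burst'}: same total logins, different rhythm
```

The rule never observes outcomes, only deviation from the statistical norm, so any legitimate alternative rhythm of work is indistinguishable from disengagement by construction.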
Automated assessment systems including essay scoring and plagiarism detection penalize non-standard organization, unusual vocabulary choices, and communication patterns common among neurodivergent students, effectively discriminating against these students while claiming to measure objective quality.
Facial recognition and proctoring systems assume neurotypical patterns of eye contact, movement, and attention, flagging disabled students as suspicious when they move in patterns associated with autism, engage in stimming behaviors, or require sensory accommodations.
These systems are particularly insidious because they operate under claims of objectivity and neutrality. The algorithm presents itself as innocent of bias, as merely applying mathematical principles to data. Yet the algorithm is never innocent. It is thoroughly shaped by human choices about what to measure, what data to include, how to weight different factors, what outcomes to optimize for.
When a system optimizes for efficiency and standardization, it implicitly devalues the flexibility, relationship-building, and individualized attention that disabled students often require. When a system targets risk reduction, it often means reducing the presence of disabled and marginalized people deemed to constitute risk.
The claimed objectivity of these systems obscures the ideology embedded within them and prevents contestation of their discriminatory outcomes.
Disabled people are not merely harmed by biased AI systems; they are systematically excluded from meaningful participation in the design, development, and governance of AI. Disabled people are not invited to advise on what constitutes fairness in algorithmic systems, even when those systems will directly affect their lives.
The artificial intelligence fairness movement, which emerged partly to address algorithmic discrimination, typically does not include disabled people as central stakeholders or decision-makers. Researchers studying AI bias focus attention on race and gender but often treat disability as peripheral.
Yet as Williams and collaborating scholars argue, disability justice frameworks offer essential resources for understanding and resisting algorithmic discrimination.
Disability justice principles emphasize the leadership and expertise of those most impacted by oppressive systems, recognize the intersecting nature of oppressions, and insist that justice-oriented technological change must emerge from the communities most harmed.
Eugenic Ideologies in Artificial General Intelligence Discourse
The most sophisticated expression of eugenic thinking in contemporary artificial intelligence appears not in specific applications but in the broader ideological frameworks driving artificial general intelligence development.
The TESCREAL bundle—encompassing transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism—represents the predominant ideological formation guiding AGI research and development.
These ideologies share several characteristics: a profound optimism about technological solutions to human problems, a conviction that human enhancement through technology is desirable and necessary, a vision of transcending current human limitations, and a tendency to speak of humanity as an abstract category without attending to the differential impacts of technological change on different populations.
When examined through a disability justice lens, the TESCREAL ideologies reveal troubling continuities with eugenic thinking.
The transhumanist vision of human enhancement, the cosmist drive to transcend human limitations, the rationalist conviction that intelligence can be quantified and optimized—all carry forward the eugenic project of identifying deficiency and engineering its elimination.
The very concept of human enhancement presupposes that current human populations contain defects requiring technological correction. This presupposition lands particularly heavily on disabled people. When transhumanists celebrate technologies that might allow humans to transcend disability, they implicitly express the view that disability is a deficit worth transcending, a deficiency rather than a variation of human existence.
This position differs from the assertion that disabled people should have access to technologies supporting their lives; rather, it expresses the conviction that disability itself should cease to exist through technological enhancement.
The focus on artificial general intelligence as a salvific technology parallels eugenic logic in another crucial respect. Both eugenics and AGI ideology identify some humans as inferior and believe that humanity should be reorganized, optimized, or transcended to eliminate this inferiority.
Eugenicists believed that eliminating the genetically unfit through sterilization and segregation would improve the human population. AGI enthusiasts imagine that superintelligent machines will solve humanity’s problems, implicitly assuming that unaugmented human intelligence is insufficient and requires replacement or transcendence by machine intelligence.
Both narratives express anxiety about human adequacy and a drive to fix perceived deficiency through technological intervention. Both narratives downplay or ignore the political dimensions of who gets to decide which humans are deficient, what forms of human existence are desirable, and what the consequences are when technologies are deployed to eliminate or transcend human variation.
The obsession with intelligence metrics within AGI discourse directly echoes the eugenic preoccupation with IQ measurement. When researchers benchmark AGI progress through intelligence tests, when they argue about whether current models achieve human-level intelligence, when they develop measures of intelligence that rank entities on a single scale from stupidity to superintelligence, they replicate the fundamental eugenic logic of reducing human complexity to a single metric.
Yet experience with IQ testing teaches that such reductive metrics inevitably become instruments of oppression. The appearance of neutrality and objectivity masks the political work these metrics perform in justifying hierarchy, exclusion, and elimination.
A society that measures human worth through a single metric and treats those below the median as deficient is a society structured for oppression. When these metrics become increasingly sophisticated and powerful—when algorithmic systems leverage these metrics to make decisions across major institutions—the potential for systematic discrimination multiplies.
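The information loss involved in single-scale ranking can be shown with trivial arithmetic. In this invented example (the weights and scores are illustrative, not drawn from the book or any real test), two profiles with opposite strengths collapse to the same composite score, so any decision made on the composite cannot distinguish between them, and cannot serve either.

```python
# Hypothetical ability profiles and weights. A fixed-weight composite,
# like an IQ score or a single model-produced "risk" number, collapses
# a multidimensional profile into one scalar.
WEIGHTS = {"verbal": 4, "spatial": 3, "memory": 3}  # weights sum to 10

def composite(profile):
    """Weighted average of dimension scores on a 0-100 scale."""
    return sum(WEIGHTS[k] * profile[k] for k in WEIGHTS) / 10

strong_verbal = {"verbal": 90, "spatial": 40, "memory": 60}
strong_spatial = {"verbal": 45, "spatial": 100, "memory": 60}

print(composite(strong_verbal), composite(strong_spatial))  # 66.0 66.0
```

Whatever threshold is applied to the composite, both people land on the same side of it, while everything that distinguishes them, and everything a support system would need to know, has been discarded before the decision is made.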
The Framework of Structural Harm: Medical Models and Technological Mediation
Williams’s analysis demonstrates that the harms experienced by disabled people within AI systems are not incidental defects but rather structural expressions of how these systems are designed, implemented, and governed.
The fundamental problem is not that individual algorithms contain bugs that introduce bias. Rather, AI systems are built on the medical model of disability, the understanding of disability as an individual defect located within the disabled person requiring correction or cure. From this perspective, disabled people are problems to be solved through technological intervention.
The alternative framework, grounded in disability justice and the social model of disability, locates disability not as an individual deficit but as the product of the interaction between human variation and an inaccessible, ableist environment.
From this perspective, what must change is not disabled people but rather the environments, structures, and assumptions that treat certain variations of human functioning as deficient. This framework generates radically different approaches to technology.
Rather than designing AI to measure and correct disabled people, a disability justice approach would design technology in partnership with disabled people to support access, autonomy, and flourishing within systems already structured to welcome human variation.
Yet contemporary AI systems are overwhelmingly designed from the medical model perspective. They treat disabled people as objects of intervention and measurement rather than as autonomous agents.
They optimize for efficiency and standardization, which necessarily marginalizes people whose needs or modes of engagement differ from statistical norms. They claim to be neutral and objective while encoding ableist assumptions into their architecture. They are implemented through institutions that already disadvantage disabled people, multiplying the harm.
A disabled student already facing barriers in an educational system structured for non-disabled students now encounters AI systems that measure her against non-disabled norms, flag her as at-risk when she deviates from expected patterns, and implement interventions designed to force conformity to those norms.
The consequences of these systems are not merely individual harms. They are systemic processes of marginalization and elimination. As Williams emphasizes, contemporary AI systems don’t merely fail disabled people; they actively construct disabled people as marginal, deficient, and requiring correction.
Over time, these systemic processes accumulate to shape the material conditions of disabled people’s lives—who gets hired, who gets approved for credit, who gets adequate medical care, who gets disciplined in schools, whose life is deemed worth living.
The Absence of Accountability and the Illusion of Neutrality
One of Williams’s most damning observations concerns the absence of meaningful accountability for algorithmic discrimination. When human decision-makers discriminate, legal frameworks exist—imperfect and inconsistently applied, but existing—for holding them accountable. When algorithms discriminate, accountability dissolves.
Developers claim they did not intend discrimination; they were merely training systems on historical data. Deployers claim they are merely using objective, data-driven tools; they bear no responsibility for the tools’ discriminatory outcomes. The algorithms themselves are presented as innocents, implementing mathematical functions without bias or intent.
This deflection of accountability is particularly dangerous for disabled people. When a healthcare algorithm systematically underestimates the health needs of disabled patients, resulting in withholding of care and harm, who is responsible? The developer can claim the algorithm merely reflects historical patterns in the data.
The healthcare system can claim it implemented the algorithm correctly and bears no responsibility for the algorithm’s bias. The algorithm itself has no intent and therefore no capacity for wrongdoing. Yet someone is harmed. The system-level outcome is discrimination and harm, but the structure of algorithmic systems makes it nearly impossible to assign responsibility.
The claimed objectivity of algorithmic systems makes this evasion of accountability easier. Because algorithms are presented as mathematical and therefore neutral, their outputs are treated as facts rather than decisions.
When a school administrator disciplines a disabled student recommended by an algorithmic system, the administrator can defend the decision by noting that it reflects objective algorithmic assessment, not subjective judgment.
The human decision-maker retreats behind the apparent neutrality of the machine. Yet the algorithmic recommendation is not neutral; it reflects all the biases embedded in its training data, its design choices, its optimization objectives. The claim of neutrality obscures rather than illuminates the politics embedded in the system.
Williams insists that disabled people must refuse the illusion of algorithmic neutrality and demand accountability for algorithmic systems’ discriminatory outcomes.
This requires not merely fixing biased algorithms but fundamentally reconsidering the role of quantification, measurement, and algorithmic decision-making in institutions serving disabled people.
It requires recognizing that the problem is not technical but political—not that algorithms measure the wrong thing, but that algorithmic measurement itself has become a mechanism of oppression.
The Possibilities of Resistance and Justice-Oriented Alternatives
Williams’s analysis, while deeply critical, does not end in despair or resignation. Rather, it opens toward possibilities for resistance and transformation.
Disabled people have long histories of resisting technologies and systems designed to control, correct, or eliminate them. Disabled communities have developed their own technological cultures, hacking and modifying technologies to serve their needs rather than those of the institutions that deploy them.
Disabled people have created alternative frameworks for understanding disability, intelligence, and human value that resist ableist assumptions.
These resources—the technological agency of disabled people, the frameworks of disability justice and neurodiversity, the lived expertise of those most harmed by ableist systems—constitute resources for building alternatives.
Such alternatives would require genuine transformation rather than reform of existing systems. Reformist approaches attempt to fix biased algorithms by improving training data, adding disabled people to development teams, or implementing fairness constraints. While such measures may reduce some harms, they leave intact the fundamental apparatus of quantification, measurement, and algorithmic ranking that animates eugenic thinking.
Justice-oriented alternatives would instead begin from the insistence that disabled people must be centered as designers, not merely included as consultants.
They would adopt participatory design processes in which disabled communities define problems and design solutions. They would interrogate the very assumptions about measurement and intelligence that structure contemporary AI.
They would recognize that some forms of algorithmic decision-making—particularly high-stakes decisions affecting disabled people’s access to care, education, employment, and freedom—may need to be fundamentally displaced by human decision-making with meaningful disabled community participation.
Building such alternatives requires resisting the technosolutionist logic that treats technological problems as having purely technical solutions. It requires recognizing that the core issue is not faulty algorithms but ableist societies deploying technologies to reinforce ableism at scale.
It requires the kind of sustained political work that has always characterized disability justice movements: coalition-building, consciousness-raising, community education, and the slow work of building alternatives from within marginalized communities themselves.
The Future Reckoning: When AI’s Eugenic Foundations Become Undeniable
As artificial intelligence systems proliferate across major institutions, their eugenic foundations become increasingly evident to those willing to see them. Healthcare algorithms systematically disadvantage disabled and Black patients.
Educational AI systems flag and discipline disabled and neurodivergent students at escalating rates. Criminal justice algorithms concentrate law enforcement on marginalized communities.
Employment screening tools discriminate against disabled workers. Autonomous weapons systems make decisions about targeting and killing with minimal human oversight. These harms accumulate daily, and societies must soon reckon with the reality that they have deployed eugenic logic at algorithmic scale.
This reckoning will be difficult precisely because it requires acknowledging that artificial intelligence, widely celebrated as neutral, progressive, and beneficial, carries forward the logic of one of history’s most shameful episodes.
It requires recognizing that many well-intentioned technologists have unwittingly perpetuated eugenic thinking. It requires abandoning the faith in technological solutions to problems that are fundamentally political.
Yet this reckoning is essential. Only by understanding AI’s eugenic foundations can societies understand what they have built and why building alternatives is so urgent. Only by centering the voices and expertise of disabled people can communities develop approaches to AI that serve human flourishing rather than population control.
Only by refusing the illusions of algorithmic neutrality and objectivity can meaningful accountability for algorithmic systems become possible. The future of artificial intelligence will be determined by choices made in the present about whether to defend and extend eugenic logics at algorithmic scale or to build something profoundly different. Williams’s work insists that disabled people must be central to those choices.
Conclusion
Rua M. Williams’s Disabling Intelligences fundamentally reframes how societies should understand artificial intelligence. Rather than treating AI as a neutral tool that can be made fair through better algorithms and fairer data, the work traces artificial intelligence’s deep ideological roots to the eugenics movement and argues that this genealogy continues to structure contemporary AI development and deployment.
The intelligence quotient test, weaponized by eugenicists to justify the forced sterilization of over 65,000 Americans, provides the historical model for how contemporary AI systems reduce human complexity to quantifiable metrics and then use those metrics to justify systematic discrimination.
Understanding this genealogy is not merely historically interesting; it is essential to understanding what artificial intelligence systems actually do in the world. They measure. They rank. They categorize humans as superior or inferior, normal or deficient, desirable or undesirable.
They concentrate these judgments into algorithmic systems that can operate at previously unimaginable scale. They do so while claiming to be neutral and objective, thereby obscuring the political work they perform. They target disabled people with particular intensity, treating disability as a problem to be solved through technological intervention rather than as legitimate human variation deserving support and accommodation.
The alternatives Williams points toward require more than technical fixes or even ethical guidelines for responsible AI development. They require fundamental contestation of the logics underlying contemporary AI: the belief that human worth can be quantified and ranked; the conviction that some humans are deficient and require correction; the assumption that technological enhancement is desirable and inevitable; the faith that measurement and quantification constitute progress.
They require centering disabled people not as subjects of AI research but as designers and theorists of alternative approaches to technology. They require building on the long histories of disabled resistance and the frameworks of disability justice that have always insisted on the value and possibility of human diversity.
The trajectory of artificial intelligence is not predetermined. The future of AI will be shaped by choices made about whether to defend and extend eugenic logics through increasingly sophisticated systems or to build something fundamentally different—technological systems designed not to measure and rank humans but to support their autonomy, flourishing, and full participation in collective life.
These choices will be made in the coming years, perhaps the coming months. Williams’s work insists that disabled people must not be spectators to these choices but rather central figures in shaping AI’s future.
The alternative is a world in which eugenic logic, updated and digitized, operates at scales and speeds that make its historical precursors seem quaint. The choice before societies is as stark as it is urgent.