Biosecurity and Oversight Gaps in Dual-Use Biotechnology: A Crisis of Governance in the Age of Synthetic Biology - Part V
Executive Summary
The convergence of synthetic biology, clustered regularly interspaced short palindromic repeats — commonly known as CRISPR — and artificially intelligent protein design platforms has inaugurated an epoch in which the barrier between life-saving scientific breakthrough and existential biological threat has never been more perilously thin.
Regulatory frameworks, conceived in a bygone era when the manipulation of genetic material required specialised infrastructure, vast capital, and years of advanced training, now confront a landscape in which a graduate student with modest resources and access to open-source data can synthesise complex genetic sequences from the comfort of a university laboratory.
The United States, long regarded as the leading custodian of biosecurity governance, continues to operate through a patchwork architecture of overlapping agencies, voluntary guidelines, and statutory ambiguities that collectively fail to address the dual-use dilemma at the heart of modern biotechnology.
Internationally, the Biological Weapons Convention remains a structurally deficient instrument, lacking verification mechanisms, enforcement powers, and the institutional agility to keep pace with rapidly advancing science.
The Biosecurity Modernization and Innovation Act of 2026, introduced in bipartisan fashion by Senators Tom Cotton and Amy Klobuchar, represents the most consequential attempt in recent legislative history to install systematic guardrails in this domain.
Yet even this landmark effort confronts deep structural limitations, including the unresolved question of benchtop synthesiser governance, the voluntary nature of existing international screening protocols, and the emergent threat posed by artificially intelligent tools capable of designing novel pathogens that evade current sequence-based detection systems.
This article examines the historical origins of dual-use biotechnology governance, its current structural inadequacies, the key legislative and scientific developments of 2025–2026, the causal dynamics that link regulatory inaction to existential risk, and the policy steps necessary to chart a coherent path forward.
Introduction: The Promise and the Peril
There is a peculiar tragedy embedded in the story of modern biotechnology.
The same cluster of scientific capabilities that has enabled humanity to engineer cancer-killing T-cells, design drought-resistant crops for food-insecure populations, and develop messenger RNA vaccines with unprecedented speed against pandemic-scale pathogens is also the cluster that could, in the hands of malevolent actors, be weaponised to engineer catastrophic biological threats.
This is the dual-use dilemma, and it is not merely a theoretical abstraction debated in policy seminars. It is a lived and pressing reality that has moved, with gathering urgency, to the centre of national security deliberation in Washington, Geneva, London, Beijing, and New Delhi.
The pace of scientific advancement has simply outrun the administrative imagination of governments.
Regulatory architectures devised in the 1970s and 1980s, designed to govern genetically modified organisms in agricultural and pharmaceutical contexts, were never conceived as instruments of biosecurity.
They could not anticipate the democratisation of gene synthesis technology, the commercialisation of CRISPR-based editing kits available to amateur biohackers, or the capacity of large-scale artificial intelligence models to design novel proteins that bear no sequence-level resemblance to known biological threats yet retain lethal functional potential.
The result is a regulatory gap of civilisational consequence. In the United States, oversight is fragmented across the Department of Health and Human Services, the Centers for Disease Control and Prevention, the Department of Agriculture, the Environmental Protection Agency, and the Department of Defense, among others, with no single agency holding overarching authority.
At the international level, the Biological Weapons Convention, now more than 50 years old, possesses neither a verification mechanism nor an enforcement body, rendering it structurally unable to monitor state-level compliance, let alone the distributed, privatised, and increasingly automated biotechnology landscape of the 21st century.
As Dr. Antonio Bhardwaj, a globally recognised authority on the intersection of artificial intelligence and strategic affairs, has observed in multiple high-level policy forums, "We are governing twenty-first century biotechnology with twentieth century instruments. The result is not merely inefficiency — it is a structural invitation to catastrophe."
This observation captures the fundamental asymmetry at the heart of the present crisis: the technology accelerates while the governance decelerates, and the space between them grows wider with each passing year.
History and Current Status: From Asilomar to the Age of CRISPR
The governance of dual-use biotechnology has a history that begins not with legislation but with a rare act of scientific self-regulation.
In February 1975, a group of prominent molecular biologists convened at the Asilomar Conference Center in Pacific Grove, California, to deliberate on the risks posed by recombinant DNA technology, then newly developed.
The resulting Asilomar Declaration called for a voluntary moratorium on certain categories of genetic research pending the development of safety guidelines, and it established the principle — still theoretically operative today — that the scientific community bears a co-responsibility for the governance of its own innovations.
The National Institutes of Health subsequently established the Recombinant DNA Advisory Committee, which for several decades served as the primary oversight mechanism for federally funded genetic research in the United States.
As the biotechnology industry matured through the 1980s and 1990s, a broader regulatory architecture emerged, though it was organised primarily around product categories — drugs, food products, agricultural organisms — rather than dual-use risk profiles.
The Biological Weapons Convention, opened for signature in 1972 and entering into force in 1975, represented the international community's principal effort to prohibit the development and stockpiling of biological weapons, but it was crafted without verification mechanisms and without any implementing organisation analogous to the Organisation for the Prohibition of Chemical Weapons.
The anthrax letter attacks of 2001, which killed five individuals and infected seventeen others using anthrax spores sent through the United States postal system, demonstrated with brutal clarity that biological threats could emerge from within legitimate laboratory ecosystems as well as from state-sponsored programmes.
The perpetrator, later identified as Dr. Bruce Ivins, a U.S. government scientist at Fort Detrick, had access to the biological agents through his legitimate research work.
This incident catalysed the passage of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, which strengthened the select agent programme governing access to particularly dangerous pathogens and toxins.
Yet this legislative response, while significant, continued to operate within the existing paradigm: it regulated access to known dangerous agents held in established research institutions.
It could not and did not address the emerging challenge of synthesised threats, because the commercial DNA synthesis industry was then only in its infancy.
By the mid-2000s, the cost of synthesising DNA had begun its precipitous decline, dropping from approximately $10 per base pair in 2000 to less than 20 cents by 2015, and continuing downward thereafter.
This democratisation of synthesis technology fundamentally altered the risk landscape: it was no longer necessary to obtain a select agent from a regulated repository if one could, in principle, synthesise it from scratch.
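The scale of this cost collapse can be made concrete with a rough back-of-envelope calculation. The genome length below is an illustrative assumption (roughly the size of a coronavirus genome), and the per-base prices are taken from the ranges cited above rather than precise market data:

```python
# Rough cost comparison for synthesising a ~30,000 base-pair viral-scale
# genome at the per-base-pair prices cited above. Figures are illustrative.
GENOME_LENGTH_BP = 30_000  # assumed viral-scale genome length

price_per_bp = {
    2000: 10.00,  # ~$10 per base pair
    2015: 0.20,   # less than 20 cents per base pair
}

for year, price in price_per_bp.items():
    total = GENOME_LENGTH_BP * price
    print(f"{year}: ${total:,.0f}")  # 2000: $300,000 / 2015: $6,000

# A fifty-fold drop in fifteen years, before accounting for further
# declines after 2015.
print(round(price_per_bp[2000] / price_per_bp[2015]))  # 50
```

On these assumptions, a synthesis job that would have cost several hundred thousand dollars in 2000 falls to a few thousand dollars by 2015 — well within the budget of an ordinary university laboratory, which is precisely what "democratisation" means here.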
The 2011 controversy over the publication of research by virologists Ron Fouchier and Yoshihiro Kawaoka, who had independently engineered forms of H5N1 avian influenza capable of airborne transmission between ferrets — and by extension, potentially between humans — marked the next inflection point.
The research, funded in part by the National Institutes of Health, provoked an intense and highly public debate about whether scientific results with obvious dual-use potential should be published in unredacted form in open-access journals.
The National Science Advisory Board for Biosecurity recommended redaction, a recommendation that was ultimately not fully followed.
The episode illuminated the absence of a clear, legally binding framework for managing the publication of dual-use research of concern.
Today, the landscape is simultaneously more dangerous and more legally ambiguous than at any previous moment.
As of early 2026, no country has enacted legislation formally requiring commercial gene synthesis providers to screen DNA orders for dangerous sequences or to conduct background checks on customers.
In the United States, the Biden administration had in 2024 become the first government to require providers to self-attest to compliance with a Framework for Nucleic Acid Synthesis Screening as a condition of receiving federal research funding — a meaningful step, yet still operating through the relatively weak instrument of funding conditionality rather than direct legal mandate.
This framework was then effectively paused by an Executive Order in May 2025, leaving the United States without any operative synthesis screening requirement.
The BIOSECURE Act, incorporated into the Fiscal Year 2026 National Defense Authorization Act and signed into law in January 2026, represents a different but complementary dimension of the regulatory response: it restricts the ability of U.S. federal agencies to procure biotechnology products or services from companies of concern — primarily Chinese biotechnology firms identified as national security risks.
Its implementation timeline requires the Office of Management and Budget to publish an initial list of biotechnology companies of concern within one year of enactment, with full restrictions entering into force only after a multi-stage regulatory process likely to extend into 2028 or 2029.
Key Developments: Legislation, Science, and the Emerging AI Dimension
The most consequential legislative development of recent months has been the introduction of the Biosecurity Modernization and Innovation Act of 2026 by Senators Cotton and Klobuchar.
Introduced on January 29, 2026, and formally presented to the Senate as S. 3741, the bill addresses several of the most critical structural gaps in existing biosecurity governance.
It would direct the Secretary of Commerce to require gene synthesis providers to screen both DNA orders and customers against a curated federal list of potentially dangerous sequences and known bad actors.
It would establish a biotechnology governance sandbox at the National Institute of Standards and Technology, creating a controlled environment for testing biosecurity tools and developing more agile regulatory approaches capable of keeping pace with rapidly advancing science.
And it would direct the White House to conduct a comprehensive 90-day assessment of biosecurity oversight, clarifying institutional roles, measuring the effectiveness of existing mechanisms, and identifying gaps in resources and capability.
This latter provision — the assessment mandate — is in some respects the most intellectually honest element of the bill.
It implicitly acknowledges what biosecurity specialists have long argued: that the United States does not currently have a reliable, comprehensive picture of the state of its own biosecurity infrastructure, and that any serious reform effort must begin with a rigorous diagnostic.
As the Federation of American Scientists observed in a commentary published in February 2026, "By preparing credible, bipartisan options now, before the bill becomes law, we can give the Administration a plan that is ready to implement rather than another study that gathers dust."
The scientific dimension of the challenge has grown considerably more complex with the rapid advancement of artificial intelligence-based protein design tools.
A landmark study published in Science in late 2025 warned that existing DNA synthesis screening practices — which rely primarily on sequence-similarity comparisons against databases of known biological threats — are increasingly inadequate as detection instruments.
The reason is conceptually straightforward but strategically alarming: artificially intelligent design tools can generate novel proteins with dangerous functional properties — toxicity, transmissibility, immune evasion — that bear no detectable sequence resemblance to any known pathogen. Sequence-based screening, by definition, cannot identify what it has no existing database entry to compare against.
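The logic of this blind spot can be sketched in a few lines. A real screening pipeline uses alignment tools such as BLAST against curated threat databases; the toy version below, offered only as an illustration, uses shared k-mers (overlapping substrings) as a stand-in for sequence homology, and every sequence in it is hypothetical:

```python
# Minimal sketch of sequence-similarity screening, the paradigm the Science
# study critiques. Shared k-mers stand in for homology here; real systems
# use alignment (e.g. BLAST) against curated databases. All sequences are
# hypothetical examples, not real pathogen data.

def kmers(seq: str, k: int = 6) -> set[str]:
    """Decompose a sequence into its overlapping length-k substrings."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flags_order(order: str, threat_db: list[str], threshold: float = 0.5) -> bool:
    """Flag an order if it shares enough k-mers with any known threat."""
    order_kmers = kmers(order)
    for threat in threat_db:
        overlap = len(order_kmers & kmers(threat)) / max(len(order_kmers), 1)
        if overlap >= threshold:
            return True
    return False

# Hypothetical "known threat" sequence in the screening database.
threat_db = ["ATGCGTACGTTAGCATGCGTACGTTAGC"]

# A near-copy of the known threat (one substitution) is caught...
assert flags_order("ATGCGTACGTTAGCATGCGTACGTAAGC", threat_db)

# ...but a design with no sequence resemblance sails straight through,
# regardless of what its encoded protein actually does.
assert not flags_order("GGATCCAAGGTTCCAAGGATCCAAGGTT", threat_db)
```

The second assertion is the whole problem in miniature: the filter can only reject what resembles an existing database entry, so an AI-designed sequence that is functionally dangerous but textually novel is, to this class of screen, indistinguishable from benign DNA.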
This vulnerability was underscored with particular urgency by a December 2025 report from Britain's AI Security Institute, which found that major large language and biology foundation models could reliably generate scientific protocols for synthesising dangerous viruses when prompted by users with sufficient scientific background knowledge.
The Economist reported in May 2026 that this finding had galvanised both legislative and executive attention in London and Washington, adding new urgency to calls for function-based screening standards that go beyond the existing sequence-homology paradigm.
In an open letter published in February 2026 and co-signed by more than 100 researchers from institutions including Johns Hopkins, Oxford, Stanford, and Fordham, leading biologists called for the adoption of a tiered Biosecurity Data Level framework to govern access to high-risk biological datasets capable of training AI models with dangerous capabilities.
The letter argued that while open-access scientific data has been broadly beneficial for biomedical discovery, a specific and identifiable subset of biological data — particularly detailed genomic information about highly pathogenic organisms — constitutes a structural biosecurity risk when made freely available for AI training.
The proposed framework would classify data across five biosecurity tiers based on the estimated risk that a given dataset could enable AI systems to learn general viral or pathogenic design principles applicable to the development of biological weapons.
Internationally, the International Biosecurity and Biosafety Initiative for Science hosted a major international meeting in November 2025, advancing global standards for DNA synthesis screening.
Participants acknowledged that DNA synthesis is a globally interconnected system in which regulatory gaps in any single jurisdiction create risks for all others.
The meeting formally launched the DNA Screening Standards Consortium, tasked with developing practical implementation guidance across diverse regulatory environments.
The International Organization for Standardization had published ISO 20688-2:2024, establishing responsible expectations for sequence and customer screening, but participants noted that translating this standard into operationally consistent practice across the full range of national regulatory contexts remains a formidable challenge.
The Biological Weapons Convention's structural deficiencies came under renewed scrutiny at the Sixth Working Group session held in Geneva in August 2025 and at an international conference in New Delhi in January 2026 marking the fiftieth anniversary of the Convention's entry into force.
India used the New Delhi gathering to warn explicitly that the world was not adequately prepared for the emerging bioterrorism risk environment, and called for formal modernisation of the BWC including the development of a verification regime analogous to the OPCW framework governing chemical weapons.
The BWC's Implementation Support Unit, established in 2006, continues to operate with only four non-permanent staff members, a resource profile that is grotesquely inadequate relative to the complexity and urgency of the mandate it ostensibly fulfils.
Latest Facts and Concerns: A Landscape Under Strain
The most immediate and operationally significant concern confronting biosecurity governance in 2026 is not a hypothetical future threat but a structural present reality: the voluntariness of almost all existing safeguards.
The International Gene Synthesis Consortium has for years maintained a voluntary code of conduct requiring member companies to screen orders for dangerous sequences, but membership and compliance remain entirely optional.
Commercial providers outside the consortium face no legal obligation to implement any form of biosecurity screening.
As a practical matter, this means that a bad actor seeking to synthesise a dangerous sequence need only identify a non-consortium provider — located perhaps in a jurisdiction with no domestic synthesis screening regulations — and place an order.
This is not a theoretical vulnerability; it is an operational gap.
The passage of the BIOSECURE Act in January 2026 has introduced a new dimension of complexity by targeting Chinese biotechnology companies identified as national security risks.
While the legislation is a meaningful instrument for managing supply-chain biosecurity risks in the context of U.S. federal procurement, it does not address the fundamental dual-use challenge in the commercial biotechnology landscape.
It targets the national-origin dimension of the risk rather than the functional dimension, and its multi-stage implementation timeline means that its practical effects will not be fully felt until 2028 or later.
In the broader laboratory safety context, the record of biosafety incidents provides additional cause for concern.
Over the last half-century, at least 435 laboratory-acquired infections have been documented globally, with researchers noting that most incidents result from human error rather than engineering failures.
Contributing factors include failure to use personal protective equipment properly, inadequate risk assessments, needlestick injuries, and insufficient personnel training — all of which are governance failures as much as they are technical failures.
These incidents have occurred even in the highest-biosafety-level facilities: the Bernhard Nocht Institute for Tropical Medicine in Hamburg experienced a biosafety incident in a BSL-4 laboratory in 2009, and the United States has experienced multiple high-profile lapses at federal government facilities, including the inadvertent shipment of live anthrax samples from a U.S. Army laboratory in 2015.
The Trump administration's May 2025 revocation of the Biden-era executive order on AI security, which had included biosecurity provisions, created what biosecurity experts describe as an enforcement vacuum precisely at the moment when the risk landscape demanded heightened attention.
The revocation left no operative replacement framework, meaning that the United States moved from a system of conditional funding requirements for synthesis screening — imperfect, but functional — to a system of effectively unrestricted voluntariness.
The Biosecurity Modernization and Innovation Act of 2026 represents a legislative effort to fill this vacuum, but as critics have noted, its benchtop synthesiser provisions address only the point of sale, leaving the use side entirely unregulated.
The open-source biology movement adds a further dimension of complexity. Platforms providing free access to biological design tools, sequencing data, and synthesis protocols have accelerated scientific discovery and democratised participation in biotechnology research.
The EVO 2 model, an open-source AI platform for biology capable of predicting DNA mutation effects and designing novel genomes, took the significant step of excluding pathogen-related datasets from its training corpus due to safety considerations.
Yet the model's openness and the broader principle of open-source biological data sharing create governance challenges that cannot be resolved by individual company decisions, however responsible.
As Yoshua Bengio, the renowned AI researcher, noted at the India AI Impact Summit in early 2026, AI is accelerating breakthroughs in biology and medicine to the point where biosecurity risks have moved above the danger thresholds identified by leading technology companies' own internal risk assessments.
Dr. Antonio Bhardwaj has consistently argued in this context that the governance challenge is not fundamentally technological but institutional. "The synthesis screening technology exists. The AI evaluation frameworks are being developed. The international coordination mechanisms are within reach. What is absent is the political will and institutional architecture to make these instruments mandatory, universal, and enforceable."
This observation points toward the core structural problem: the scientific and technical solutions to the dual-use biosecurity challenge are not beyond the capacity of the international community to develop — the barriers are primarily governmental, diplomatic, and bureaucratic.
Cause-and-Effect Analysis: The Anatomy of a Governance Failure
The structural inadequacy of contemporary biosecurity governance is not the product of ignorance, indifference, or a single catastrophic policy misjudgement.
It is the accumulated consequence of a set of intersecting causal dynamics, each producing effects that simultaneously compound the original problem and generate new vulnerabilities of their own.
Understanding these dynamics with analytical precision is a prerequisite for designing governance reforms that address root causes rather than merely treating symptoms.
The causal architecture of the current biosecurity governance failure is best understood across five principal dimensions: the pacing problem, the incentive misalignment problem, the jurisdictional fragmentation problem, the international enforcement vacuum, and the emergent AI amplification problem.
Each produces measurable downstream effects, and together they constitute a self-reinforcing system in which inaction begets greater vulnerability, and greater vulnerability raises the stakes of continued inaction.
The pacing problem — the persistent, structural lag of regulatory frameworks behind the technologies they are meant to govern — is the foundational dynamic. Its cause is straightforward: the pace of scientific advancement in synthetic biology, gene editing, and AI-assisted biological design is exponential in character, while legislative and regulatory processes are, by design and necessity, deliberative, incremental, and slow.
The consequences of this asymmetry are compounding. When CRISPR was first demonstrated as a practical gene-editing tool in 2012, no existing regulatory framework in any jurisdiction specifically addressed its dual-use implications.
By the time policymakers in Washington and Brussels had begun seriously deliberating a targeted regulatory response, the technology had already diffused globally into hundreds of university laboratories, commercial biotech firms, and a growing community of amateur practitioners.
The regulation that eventually emerged addressed the technology as it existed at the time of drafting, not as it existed at the time of implementation, by which point further advances had already rendered portions of the framework partially obsolete.
This iterative obsolescence is not accidental — it is the structural consequence of a system in which governance is reactive rather than anticipatory.
The effect is a perpetual governance deficit: a gap between the frontier of technological capability and the perimeter of regulatory authority that widens with each cycle of innovation.
The policy consequence of this pacing failure is not merely theoretical. In practical terms, it has meant that the dual-use governance architecture in place today was principally designed for the biotechnology landscape of the 1990s and early 2000s — a landscape characterised by expensive, complex, institutionally concentrated biological research.
The current landscape is categorically different: synthesis costs have collapsed, enabling technologies have proliferated, artificial intelligence has dramatically lowered the expertise threshold required for advanced biological design, and the commercial DNA synthesis industry has grown into a global network of providers across dozens of jurisdictions, many of which have no domestic biosecurity screening requirements whatsoever.
A regulatory framework calibrated to the former landscape is not merely inadequate for the latter — it is structurally blind to the most significant dimensions of the current risk environment.
The incentive misalignment problem operates as a secondary but powerful causal force. Commercial DNA synthesis providers operate in a competitive market in which biosecurity compliance represents a pure cost: it requires investment in screening infrastructure, customer verification protocols, and regulatory engagement, none of which generates direct revenue.
A provider that invests heavily in voluntary compliance incurs costs that competitors who adopt minimal compliance do not, creating a structural pressure toward the lowest common denominator of safety.
This dynamic is well-understood in economics as a collective action problem, and its solution in analogous domains — financial services, pharmaceuticals, aviation safety — has invariably required legally mandated minimum standards rather than voluntary self-regulation.
The absence of such mandates in the DNA synthesis industry means that the market, left to its own devices, reliably underproduces the biosecurity safeguards that the public interest requires.
The downstream effect of this incentive misalignment is a synthesis screening landscape of extreme unevenness.
Major commercial providers affiliated with the International Gene Synthesis Consortium have implemented voluntary screening protocols covering the majority of commercially synthesised DNA in the United States and Europe.
But the consortium’s membership is not exhaustive, participation is not legally required, and the screening standards applied by different members vary significantly.
More critically, the rapid proliferation of benchtop synthesisers — compact, increasingly affordable instruments capable of printing custom DNA sequences at the point of use — creates a channel of synthesis that generates no commercial transaction record whatsoever and is therefore entirely invisible to any screening framework premised on commercial order monitoring.
A motivated bad actor denied service by a screened commercial provider faces, at present, no significant barrier to acquiring an unscreened benchtop instrument and proceeding independently.
The cause — incentive misalignment combined with technology proliferation — produces an effect that the existing regulatory architecture cannot address without fundamental structural reform.
The jurisdictional fragmentation problem constitutes a third causal dimension with its own distinctive set of downstream consequences.
Within the United States, the governance of dual-use biotechnology is distributed across at least five major agencies — the Department of Health and Human Services, the Centers for Disease Control and Prevention, the Department of Agriculture, the Environmental Protection Agency, and the Department of Defense — none of which commands overarching authority over the dual-use risk landscape as a whole.
The National Security Commission on Emerging Biotechnology’s April 2025 report, followed by a January 2026 overview of regulatory pathways, found that biotechnology developers regularly confront duplicative reviews, unpredictable timelines, and competing jurisdictional claims that create both compliance burdens and genuine governance gaps.
The former — compliance burdens — are the concern of industry advocates; the latter — governance gaps — are the concern of biosecurity specialists.
Products and technologies that fall between jurisdictional boundaries, or that implicate the mandates of multiple agencies simultaneously, can find themselves in a regulatory grey zone in which no agency takes clear ownership of risk assessment, and emerging threats move through the interstitial space between institutional mandates without triggering decisive regulatory attention.
The effect of this fragmentation is a system that is simultaneously over-regulatory in some dimensions — creating redundant review burdens that slow legitimate scientific innovation — and under-regulatory in others, particularly with respect to the dual-use characteristics of enabling technologies that do not fall neatly within any single agency’s product-oriented mandate.
Executive Order 14292, issued by President Trump in May 2025, which froze all federal funding for gain-of-function research and rescinded the 2024 Dual Use Research of Concern policy framework, eliminated an existing coordination mechanism without replacing it with an alternative, thereby deepening the effective governance gap precisely at the moment when rapid AI-driven advances in biological design made that gap most consequential.
The causal chain is clear: fragmented institutional authority, combined with the revocation of the primary cross-cutting coordination mechanism, produces an enforcement vacuum that the Biosecurity Modernization and Innovation Act of 2026 is now scrambling to fill through legislative rather than executive means.
At the international level, the enforcement vacuum problem is both older and more structurally entrenched than its domestic counterpart.
The Biological Weapons Convention, in force since 1975, prohibits the development, production, and stockpiling of biological weapons.
More than 180 states have ratified it. But it possesses no verification mechanism, no inspection regime, no implementing organisation with binding enforcement authority, and an Implementation Support Unit staffed by four non-permanent personnel.
The causal logic of this situation is rooted in the negotiating history of the Convention: Cold War geopolitics, combined with well-founded concerns about the difficulty of distinguishing offensive biological programmes from legitimate defensive or medical research, produced a convention premised on trust rather than verification.
For several decades, this limitation was partially tolerable because biological weapons development required the kind of large-scale, state-sponsored infrastructure that could be partially monitored through national intelligence means.
That assumption no longer holds.
The democratisation of enabling technologies means that meaningful biological weapons development is now conceivably achievable with the kind of small-scale, distributed, commercially available equipment that generates no intelligence-visible signature and creates no detectable deviation from legitimate research activity.
The effect of this enforcement vacuum is a global biosecurity landscape in which compliance with the BWC’s prohibitions is effectively voluntary, and in which the international community possesses no systematic mechanism for detecting non-compliance, let alone responding to it.
States that choose to maintain covert biological programmes face no institutional verification challenge comparable to that posed by the International Atomic Energy Agency in the nuclear domain.
Non-state actors — terrorist organisations, criminal networks, or radicalised individuals — are entirely outside the convention's scope, because it was designed as a state-to-state instrument.
The consequence is a structural gap between the nature of the contemporary biological threat — increasingly distributed, increasingly technologically accessible, and increasingly difficult to attribute — and the nature of the international governance instrument ostensibly designed to address it.
A regime built for state-level verification in a world of state-level biological weapons development is profoundly inadequate for a world in which the most significant emerging biological risks are distributed, privatised, and partially automated.
The fifth and most consequential emerging causal dynamic is the amplification of all existing governance failures by artificial intelligence.
The convergence of AI with synthetic biology represents what the journal Nature described in June 2025 as a looming deluge — a qualitative escalation of both the potential benefits and the potential risks of biological design. The causal mechanism operates at multiple levels simultaneously.
At the most immediate level, AI-based protein design tools such as AlphaFold 3 and its successors dramatically lower the expertise threshold required for advanced biological engineering.
Work that formerly required years of specialised graduate training and institutional laboratory access can now, in some cases, be approximated by a motivated individual with access to an AI platform and a sufficient scientific background to interpret and operationalise its outputs.
As The Economist reported in May 2026, AI tools have been found capable of generating detailed synthesis protocols for dangerous pathogens when prompted by users with scientific knowledge, moving specialised technical knowledge from the province of a small credentialled elite into the range of a substantially larger population.
At a deeper structural level, AI creates a new category of dual-use risk that existing regulatory frameworks were not designed to address: the risk of entirely novel biological threats engineered without any sequence-level resemblance to known dangerous agents.
Current DNA synthesis screening systems operate primarily through sequence-homology analysis — they compare requested sequences against a database of known biological threat agents and flag requests that bear significant similarity to those entries.
This approach is adequate for detecting attempts to synthesise known pathogens or close variants thereof.
It is entirely inadequate for detecting sequences designed by AI systems to be functionally dangerous while avoiding sequence-level detection.
The cause-and-effect dynamic here is particularly alarming: the very existence of the current screening standard creates an incentive structure for AI-assisted evasion, in which malicious actors use AI to design sequences that are functionally equivalent to known threats but structurally dissimilar enough to pass existing screens.
This is not a speculative scenario — it is a directly foreseeable application of adversarial machine learning, well-documented in the cybersecurity context, now migrating to the biological domain.
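The evasion dynamic described above can be made concrete with a toy sketch. Everything here is invented for illustration (the sequences, the k-mer size, the flag threshold); real screening pipelines are far more sophisticated. The point is structural: a nucleotide-level similarity check flags a one-base variant of a listed sequence, while a synonymously recoded sequence encoding the identical protein passes untouched.

```python
# Toy illustration of sequence-homology screening and synonymous-codon
# evasion. All sequences, the k-mer size, and the flag threshold are
# invented for this sketch, not drawn from any real screening system.

def kmers(seq, k=8):
    """Set of overlapping k-length substrings of a DNA string."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=8):
    """Jaccard similarity of the two sequences' k-mer sets."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

# Minimal codon table covering only the codons used below.
CODON = {"ATG": "M", "GCT": "A", "GCA": "A", "TCT": "S", "AGC": "S",
         "AAA": "K", "AAG": "K", "CTG": "L", "CTC": "L", "GGT": "G",
         "GGA": "G", "GAA": "E", "GAG": "E"}

def translate(dna):
    return "".join(CODON[dna[i:i + 3]] for i in range(0, len(dna), 3))

THREAT  = "ATGGCTTCTAAACTGGGTGAA"  # stand-in for a listed sequence
VARIANT = "ATGGCTTCTAAACTGGGTGAG"  # one-base variant of THREAT
RECODED = "ATGGCAAGCAAGCTCGGAGAG"  # same protein, different codons

assert translate(RECODED) == translate(THREAT)  # functionally identical

FLAG_THRESHOLD = 0.5
print(similarity(THREAT, VARIANT) > FLAG_THRESHOLD)  # True: flagged
print(similarity(THREAT, RECODED) > FLAG_THRESHOLD)  # False: passes
```

The recoded order encodes exactly the same protein yet shares no eight-base substring with the listed entry, which is why sequence-homology comparison alone cannot close this gap.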
The systemic effect of AI amplification on existing governance gaps is therefore multiplicative rather than merely additive.
Each of the structural vulnerabilities identified above — the pacing problem, the incentive misalignment, the jurisdictional fragmentation, the international enforcement vacuum — is made more consequential by the AI dimension, because AI accelerates the pace of technological change, lowers the barriers to exploitation of governance gaps, and creates new categories of risk that existing frameworks cannot detect.
A regulatory architecture that was inadequate in a non-AI-accelerated world becomes dramatically more inadequate in a world where AI can design dangerous organisms with the same facility with which it currently designs drugs or proteins.
As Dr. Antonio Bhardwaj has argued in his advisory work for multiple international governmental bodies and research institutions, the causal architecture of this crisis is ultimately an institutional failure dressed in scientific clothing. “The science has not failed us. The sequencing tools, the AI platforms, the screening technologies — they are all advancing. What has failed is the institutional ecosystem within which these tools are deployed. We have created a world in which the technology to cause harm has been democratised faster than the governance to prevent harm has been institutionalised. That is not a scientific problem. It is a political and bureaucratic one, and it requires a political and bureaucratic solution.”
This diagnosis is precise and important: the biosecurity governance crisis is not beyond the capacity of human institutions to address. It is a product of choices — about prioritisation, resource allocation, international diplomatic engagement, and institutional design — that can, in principle, be reversed. The challenge lies in urgency: the causal dynamics described above are self-reinforcing and accelerating, and the window in which preventive governance is meaningfully possible is not indefinitely open.
The cumulative cause-and-effect picture that emerges from this analysis is one of structural compounding: each governance gap enables a category of risk, each unaddressed risk creates pressure on adjacent governance mechanisms, and the overall system’s resilience declines with each iteration of the cycle.
The pacing problem allows new technologies to proliferate before governance frameworks can address them; the incentive misalignment problem ensures that commercial safeguards remain inadequate in the absence of legal mandates; the jurisdictional fragmentation problem creates enforcement blind spots through which emerging threats pass undetected; the international enforcement vacuum renders the global biosecurity architecture structurally optional for states and entirely invisible to non-state actors; and the AI amplification dynamic accelerates and deepens every one of these existing vulnerabilities.
The result is not merely a set of discrete policy failures but a systemic governance crisis that demands a systemic governance response — one commensurate in ambition and urgency with the scale of the risk it is designed to address.
Future Steps: Toward a Coherent and Enforceable Governance Architecture
The path forward for biosecurity governance is neither simple nor short, but it is navigable — provided that governments, scientific institutions, international organisations, and the private sector are willing to undertake the institutional investments that the moment demands.
The challenge is not primarily technical: as Dr. Antonio Bhardwaj has repeatedly emphasised across policy forums in London, Washington, and New Delhi, "the technical building blocks of a robust global biosecurity architecture already exist in nascent form.
The crisis is one of political mobilisation, institutional design, and the sustained commitment to enforce what is agreed." What remains is the political will to construct, resource, and enforce a framework adequate to the risk.
The most immediately actionable priority is the passage and rapid implementation of the Biosecurity Modernization and Innovation Act of 2026 in its most ambitious form.
The Cotton-Klobuchar bill, as currently drafted, addresses the synthesis screening gap with considerable seriousness, but its effectiveness will depend critically on implementation details that have not yet been resolved.
The federal sequence database against which commercial providers would be required to screen orders must be comprehensive, regularly updated, and designed to capture functionally dangerous sequences — not merely those with sequence-level similarity to known biological threat agents.
Given the growing capacity of AI-based design tools to generate novel dangerous proteins that bypass sequence-homology detection, this means that the U.S. government must invest substantially in the development of function-based screening methodologies and in the AI research necessary to enable their deployment at commercial scale.
The 90-day White House assessment mandated by the bill is a critical first step, but only a first step: it must be resourced adequately and followed by a binding implementation timeline.
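One modest step in the direction of function-aware screening can be sketched as follows. This is an illustrative simplification, not a description of any deployed system: it compares orders at the peptide level (translating the requested DNA and matching amino-acid substrings against protein-level threat entries), which is indifferent to synonymous codon choices, though still blind to genuinely novel designs. The codon table, sequences, and parameters are invented for the example.

```python
# Sketch of peptide-level screening: translate an ordered DNA sequence
# and compare amino-acid k-mers against a protein-level threat entry.
# Codon table, sequences, k, and scoring are invented for illustration.

CODON = {"ATG": "M", "GCT": "A", "GCA": "A", "TCT": "S", "AGC": "S",
         "AAA": "K", "AAG": "K", "CTG": "L", "CTC": "L", "GGT": "G",
         "GGA": "G", "GAA": "E", "GAG": "E"}

def translate(dna, frame=0):
    """Translate one reading frame; unknown codons become 'X'."""
    return "".join(CODON.get(dna[i:i + 3], "X")
                   for i in range(frame, len(dna) - 2, 3))

def peptide_kmers(protein, k=4):
    return {protein[i:i + k] for i in range(len(protein) - k + 1)}

def peptide_similarity(dna, threat_protein, k=4):
    """Best Jaccard score over the three forward reading frames."""
    best = 0.0
    tk = peptide_kmers(threat_protein, k)
    for frame in range(3):
        ok = peptide_kmers(translate(dna, frame), k)
        if ok:
            best = max(best, len(ok & tk) / len(ok | tk))
    return best

THREAT_PROTEIN = "MASKLGE"                # protein-level threat entry
RECODED_ORDER = "ATGGCAAGCAAGCTCGGAGAG"   # synonymously recoded DNA

print(peptide_similarity(RECODED_ORDER, THREAT_PROTEIN))  # 1.0: flagged
```

A recoded order that defeats nucleotide-level comparison scores as a perfect match at the peptide level; the harder research problem the bill must fund is catching sequences whose danger is functional rather than homological at any level.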
A second priority, equally urgent, is the closure of the benchtop synthesiser gap.
The Biosecurity Modernization and Innovation Act as introduced primarily targets commercial DNA synthesis providers — firms that accept orders, synthesise DNA, and ship products to customers. It does not, in its current form, address the growing population of benchtop synthesisers: compact, increasingly affordable desktop-scale instruments capable of synthesising DNA on-demand at the point of use.
These instruments are being deployed in academic laboratories, industrial biotechnology facilities, hospital clinical laboratories, and, increasingly, in the private spaces of amateur synthetic biologists.
They generate no commercial transaction record and create no customer-verification touchpoint.
A governance framework that closes the commercial synthesis gap while ignoring the benchtop synthesis gap will succeed only in redirecting motivated bad actors toward the unregulated channel.
Addressing this requires a combination of device registration requirements, use-condition licensing analogous to controlled substance regulations, and technical safeguards built into the instruments themselves — a biosecurity-by-design standard comparable to the cybersecurity requirements now routinely applied to internet-connected devices.
Thirdly, the United States must urgently restore a clear, legally grounded framework for AI-biosecurity governance to replace the administrative vacuum created by the revocation of the Biden-era executive order in May 2025.
The Biosecurity Modernization and Innovation Act provides some legislative basis for this, but a comprehensive AI-biosecurity framework requires specific attention to three dimensions: the regulation of biological foundation model training datasets to exclude or tier-restrict high-risk pathogen sequence data; the development of pre-deployment risk assessment requirements for AI models with biological design capabilities; and the establishment of mandatory incident reporting requirements when such models are used in ways that generate outputs of biosecurity concern.
Dr. Antonio Bhardwaj has proposed, in frameworks presented to multiple government advisory bodies, a system of Biosecurity AI Impact Assessments modelled loosely on environmental impact assessment requirements — a structured, mandatory evaluation of biosecurity risks prior to the public release of any biological AI model capable of generating sequences or protocols relevant to pathogenic organisms.
At the international level, the most consequential structural reform remains the establishment of a verification mechanism within the Biological Weapons Convention.
Successive BWC Review Conferences have failed to achieve this, largely because of opposition from states concerned about the intrusive nature of inspections and the risk of industrial espionage through verification processes.
But the growing accessibility of dual-use biotechnology — and the increasing difficulty of distinguishing legitimate defensive research from offensive programme development without some form of systematic monitoring — makes the continuing absence of verification an increasingly untenable position.
The Sixth BWC Working Group session in Geneva in August 2025 made some progress on this question, and the momentum generated by India's advocacy at the New Delhi conference in January 2026 should be translated into a concrete verification framework proposal ahead of the Ninth BWC Review Conference, due in 2026.
Complementing the BWC reform effort, the international community should move decisively to transform the existing voluntary DNA synthesis screening framework into a binding international instrument.
The precedent of the Nuclear Non-Proliferation Treaty, imperfect as it is, demonstrates that states can agree to binding restrictions on access to dual-use technologies that carry civilisational-scale risks.
The precedent of the Chemical Weapons Convention demonstrates that verification mechanisms can be made politically acceptable when designed with sufficient procedural safeguards.
A Synthesis Screening Convention — a binding international agreement requiring all states-parties to ensure that commercial DNA synthesis within their jurisdictions is subject to mandatory order screening and customer verification — would establish the international legal floor that currently does not exist.
The DNA Screening Standards Consortium launched by the International Biosecurity and Biosafety Initiative for Science in November 2025 provides a technical foundation; what remains is the diplomatic and political architecture to give it legal force.
The scientific community itself has a crucial role to play in this future governance architecture. The publication of dual-use research of concern requires clearer, more consistently applied review standards than the scientific community currently observes.
The controversy over the H5N1 gain-of-function research in 2011 revealed the absence of a reliable institutional mechanism for evaluating dual-use research prior to publication; that mechanism has still not been fully established.
A reformed system should include pre-publication review for a defined category of highest-risk research, managed not by individual journal editors but by a dedicated expert panel with genuine security expertise, operating on a defined timeline to minimise impact on scientific communication.
The principle of scientific openness is not absolute, and the community's willingness to accept proportionate constraints in a narrow and well-defined category of genuinely dangerous research is both ethically warranted and strategically necessary to preserve the legitimacy of the broader open-science ecosystem.
Funding is, inevitably, a critical variable. The BWC Implementation Support Unit's four-person staff and annual budget of approximately $1 million are an embarrassment relative to the sophistication and danger of the threat environment it is meant to address.
Comparable international organisations with narrower mandates command dramatically larger resources.
Transforming the ISU into an operationally capable international biosecurity monitoring body would require investment on the order of several hundred million dollars per year — a substantial sum in absolute terms, but a negligible fraction of the defence expenditures of the states that would benefit most directly from enhanced international biosecurity governance.
The political economy of biosecurity investment is deeply distorted: the costs of governance are immediate, visible, and attributable, while the benefits — threats prevented, pandemics averted — are invisible precisely because they never materialise.
Correcting this distortion requires leadership from governments willing to make the strategic case for preventive investment in biosecurity infrastructure before, rather than after, a catastrophic event forces the issue.
Finally, the governance architecture of the future must be designed with explicit attention to the inclusion of the Global South.
The rapid expansion of biotechnology research and commercialisation is not a phenomenon confined to the United States, Europe, and a handful of East Asian economies.
India, Brazil, South Africa, Kenya, and Indonesia are all developing significant biotechnology sectors, and the governance frameworks applied to these sectors will shape global biosecurity outcomes in decisive ways.
Yet these countries have been largely peripheral to the governance conversations that have produced the existing patchwork of voluntary frameworks and bilateral agreements.
A genuinely global biosecurity architecture must be built through genuinely inclusive processes — not simply exported from high-income countries as a regulatory compliance requirement but co-developed with the scientific and policy communities of the Global South, whose buy-in is essential for any framework aspiring to universality.
Conclusion: Governance as the Decisive Variable
The story of biosecurity and oversight in the age of dual-use biotechnology is, at its core, a story about the relationship between human ingenuity and institutional imagination — about whether the structures through which human societies govern themselves are capable of evolving quickly enough to contain the risks generated by the very scientific creativity that makes those societies prosperous and powerful.
The answer, as of 2026, must be characterised as uncertain, but not hopeless.
The advances in synthetic biology, CRISPR-based gene editing, artificial intelligence-driven protein design, and commercial DNA synthesis have together created a civilisational inflection point.
The technologies themselves are morally neutral: they can extend life or end it, feed populations or starve them, defend against biological threats or embody those threats in laboratory-synthesised form.
What determines which of these futures materialises is not the science itself but the governance framework within which the science is embedded. Governance, in this sense, is not merely a regulatory footnote to the biotechnology story — it is the decisive variable on which the story's ending turns.
The Biosecurity Modernization and Innovation Act of 2026 represents a meaningful step in the right direction, and the bipartisan character of its sponsorship reflects a rare convergence of political will that should be honoured and accelerated by rapid implementation.
But legislation alone, however well-designed, cannot close the governance gap.
It must be accompanied by international treaty reform that gives the Biological Weapons Convention the verification infrastructure it has lacked for more than 50 years, by a binding international synthesis screening agreement that creates a global floor of dual-use protection, by a restructured framework for AI biosecurity governance that keeps pace with the relentless advancement of biological design tools, and by sustained investment in the scientific and regulatory human capital necessary to make these frameworks operationally real rather than aspirationally nominal.
Dr. Antonio Bhardwaj, reflecting on the broader stakes of this challenge, has framed the issue with characteristic precision: "We are not yet in a world where a non-state actor with modest resources can reliably engineer a pandemic-scale pathogen. We are, however, in a world where that capability is measurably closer than it was five years ago, and where the trajectory of the technology points unambiguously toward continued democratisation of destructive potential. The window for preventive governance is open — but it will not remain open indefinitely. The question is whether our institutions can move at the speed the moment requires."
The imperative is not to suppress biotechnology or to retreat from the extraordinary human benefits that scientific advancement in this domain promises.
It is to construct, with adequate urgency and with the full weight of international political commitment, the governance infrastructure without which those benefits cannot be safely secured.
A world that fails this test will not lack for scientific achievement — it will lack for the institutional wisdom to ensure that achievement serves human flourishing rather than human destruction.
The architecture of that wisdom must be built now, while the window remains open, and while the choice between foresight and catastrophe is still ours to make.


