
Beginners 101 Guide: When Science Turns into a Double-Edged Sword—Understanding the Risks of Modern Biotechnology

Summary

Biotechnology — the science of editing, designing, and working with living things — has given us some of the most life-saving tools in human history.

In just the last few years, it helped create the COVID-19 vaccines that saved millions of lives around the world.

Scientists have used it to develop new ways to fight cancer, grow more food with less water, and even potentially cure genetic diseases that were once thought to be permanent. But the same tools that can save lives can also, in the wrong hands, be used to cause enormous harm.

This is the central problem that governments and scientists are now wrestling with: how do you encourage the good uses of biotechnology while making sure the dangerous uses do not slip through the cracks?

Think of it like a very sharp kitchen knife. In a restaurant kitchen, a sharp knife is an essential and valuable tool. In the hands of a dangerous person, it becomes a weapon.

Now imagine that kitchen knives are getting sharper and cheaper every year, and that anyone who has taken a basic cooking class can buy one.

The governance challenge around modern biotechnology is essentially that problem, scaled up to a civilisational level of consequence.

What Is Dual-Use Biotechnology?

The term dual-use refers to any technology that has both peaceful and potentially harmful applications. In biotechnology, the clearest example is a technology called CRISPR, which stands for Clustered Regularly Interspaced Short Palindromic Repeats.

Despite the intimidating name, the basic concept is surprisingly approachable: CRISPR is essentially a pair of biological scissors that can cut DNA at a precise location, allowing scientists to remove, replace, or add genetic material with a level of accuracy that was simply not possible before.
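To make the "biological scissors" idea concrete, here is a minimal sketch, in Python, of the matching logic behind Cas9 targeting: the enzyme is guided to a spot where a 20-letter guide sequence sits immediately before an "NGG" motif (the PAM), and cuts roughly three letters upstream of that motif. The DNA string, guide sequence, and function name below are invented for illustration; real genome-editing design tools account for far more (off-target sites, chromatin context, repair outcomes).

```python
# Toy illustration of CRISPR-Cas9 target finding: locate a 20-letter
# guide sequence immediately followed by an NGG PAM, and report the
# cut position (3 letters upstream of the PAM). Purely illustrative.

def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return cut positions where `guide` is followed by an NGG PAM."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        window = dna[i : i + len(guide)]
        pam = dna[i + len(guide) : i + len(guide) + 3]
        if window == guide and pam[1:] == "GG":  # the N in NGG is any base
            sites.append(i + len(guide) - 3)     # cut 3 letters before PAM
    return sites

dna = "TTACGG" + "AGCTGCATCGATCGATTGCA" + "AGG" + "TAGTCC"
guide = "AGCTGCATCGATCGATTGCA"  # a made-up 20-letter guide
print(find_cut_sites(dna, guide))  # [23]
```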

Scientists are using CRISPR to edit out the genetic mutations that cause sickle cell disease, to develop crops that are resistant to drought and disease, and to explore treatments for HIV.

An American woman named Victoria Gray became one of the first people in the world to be effectively cured of sickle cell disease using CRISPR therapy, a story that represents everything that is hopeful about this technology.

But the same CRISPR tools could theoretically be used to make a pathogen — a disease-causing organism — more dangerous, more transmissible, or more resistant to existing vaccines and treatments.

The technology does not distinguish between good intentions and bad ones. That distinction depends entirely on the people using it and the rules governing how it can be used.

How Accessible Has Biotechnology Become?

One of the most striking changes in modern biotechnology is how dramatically the cost and complexity of genetic work have fallen.

In the year 2000, synthesising a single DNA base pair — the basic building block of genetic material — cost roughly $10.

By 2015, the same task cost less than twenty cents. Today, it costs even less.
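To translate those per-base figures into a whole-gene order, here is a quick back-of-the-envelope comparison in Python. The 1,000-base-pair gene length is a hypothetical round number chosen for illustration, not a figure from any particular study.

```python
# Rough cost of synthesising a hypothetical 1,000-base-pair gene,
# using the approximate per-base prices quoted above.
GENE_LENGTH_BP = 1_000

cost_2000 = GENE_LENGTH_BP * 10.00  # roughly $10 per base pair in 2000
cost_2015 = GENE_LENGTH_BP * 0.20   # under 20 cents per base pair by 2015

print(f"2000: ${cost_2000:,.0f}")   # 2000: $10,000
print(f"2015: ${cost_2015:,.0f}")   # 2015: $200
print(f"Roughly a {cost_2000 / cost_2015:.0f}-fold drop")  # 50-fold
```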

More striking still, desktop-scale machines called benchtop synthesisers, which can print custom DNA sequences on demand, are increasingly affordable and are spreading into university laboratories, hospitals, small biotech companies, and even the private workshops of amateur biology enthusiasts, sometimes called biohackers.

What this means, practically speaking, is that the ability to work with genetic material is no longer confined to well-funded research universities or major pharmaceutical companies with highly trained specialists.

A graduate student with moderate resources and access to freely available scientific literature can attempt genetic experiments that would have required a major institutional investment just two decades ago.

This democratisation has produced genuine scientific benefits: it has widened participation in biotechnology research, accelerated discovery, and enabled innovative start-up companies to work on problems that larger institutions might have neglected.

But it has also created a very real possibility that dangerous knowledge and dangerous tools could reach individuals or groups who intend to misuse them.

The Regulatory Patchwork: Why the Rules Have Not Kept Up

Governments around the world have been working to regulate biotechnology for decades, but the frameworks they built were designed for a very different era. In the United States, oversight is divided among many different agencies.

The Food and Drug Administration watches over drugs and food products.

The Department of Agriculture oversees genetically modified crops. The Environmental Protection Agency handles some genetically modified organisms. The Centers for Disease Control and Prevention monitors particularly dangerous pathogens.

The Department of Defense runs its own programmes.

None of these agencies has a complete picture of the overall risk landscape, and none of them was specifically designed to manage the dual-use problem in modern synthetic biology. It is a little like having separate traffic police for cars, buses, and motorcycles, with no one overseeing the road as a whole.

At the international level, the primary agreement is the Biological Weapons Convention, which has been in force since 1975 and prohibits countries from developing or stockpiling biological weapons. More than one hundred and eighty countries have joined it.

The problem is that the Convention has no verification mechanism — no inspectors who can visit laboratories and check whether countries are keeping their promises, comparable to the inspectors who monitor nuclear facilities under the Nuclear Non-Proliferation Treaty. It is, in a sense, a promise without a referee.

This gap matters enormously. As Dr. Antonio Bhardwaj, a leading global expert on artificial intelligence and strategic technology policy, has noted, "A treaty without verification is fundamentally an honour system. In a world where biological programmes can be concealed in a small laboratory using commercially available equipment, an honour system is not sufficient."

What Happened When Rules Were Removed

In May 2025, the Trump administration revoked a Biden-era executive order that had created initial requirements for the screening of DNA synthesis orders and the governance of AI tools in biological research.

The revocation created what specialists describe as a regulatory vacuum: a period in which the United States had no operative legal requirement for commercial DNA synthesis companies to check whether the sequences they were producing could be used to create dangerous biological agents.

To understand why this matters, consider an analogy. Imagine if a country decided to allow any hardware store to sell explosives without requiring any background checks or record-keeping, simply because the paperwork was considered burdensome.

The vast majority of customers would have entirely innocent purposes — construction, mining, demolition.

But the absence of any check at all would make it significantly easier for the small minority with harmful intentions to acquire what they need. DNA synthesis without screening is the biotechnology equivalent of that scenario.

The Biosecurity Modernization and Innovation Act of 2026, introduced to the United States Senate in January 2026 by Senators Tom Cotton and Amy Klobuchar, is a direct response to this gap.

It would require gene synthesis companies to screen orders against a federal list of dangerous biological sequences and to verify who their customers are before processing potentially dangerous requests. It would also create a 90-day government review of the entire biosecurity oversight system to identify and close gaps.

The New Threat: Artificial Intelligence and Biological Design

The governance challenge has been made substantially more complex by the rapid advancement of artificial intelligence tools applied to biological research.

AI platforms are now capable of designing entirely new proteins — building blocks of life — with properties that scientists specify in advance.

This capability is enormously valuable for drug development and vaccine design. But it also introduces a new category of biosecurity risk that existing regulatory frameworks were not designed to address.

The reason is subtle but important. Most existing screening systems for DNA synthesis work by comparing requested sequences against a database of known dangerous genetic sequences — a kind of biological watch-list.

If a requested sequence looks similar enough to a known dangerous agent, the order is flagged.
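As a rough sketch of how such watch-list screening might work, consider the Python toy below. It substitutes a simple k-mer overlap score for the BLAST-style homology searches real synthesis providers run, and the watch-list entry, the sequences, and the 60 percent threshold are all invented for illustration.

```python
# Toy watch-list screening: flag an order if it shares enough short
# substrings (k-mers) with a known dangerous sequence. Real screening
# uses homology search tools; everything here is invented for clarity.

def kmers(seq: str, k: int = 8) -> set[str]:
    """All overlapping k-letter substrings of a DNA sequence."""
    return {seq[i : i + k] for i in range(len(seq) - k + 1)}

def similarity(order: str, listed: str, k: int = 8) -> float:
    """Fraction of the order's k-mers that also occur in a listed sequence."""
    order_kmers = kmers(order, k)
    shared = order_kmers & kmers(listed, k)
    return len(shared) / len(order_kmers) if order_kmers else 0.0

WATCH_LIST = {
    "toxin_fragment_A": "ATGGCTAGCTTGACCGATCGGATTACAGGCATGAGCCACC",
}

def screen(order: str, threshold: float = 0.6) -> list[str]:
    """Return the names of watch-list entries the order closely resembles."""
    return [name for name, seq in WATCH_LIST.items()
            if similarity(order, seq) >= threshold]

# An order copied from a listed sequence is caught:
print(screen("ATGGCTAGCTTGACCGATCGGATTACAGGCATGAGCCACC"))  # ['toxin_fragment_A']
# A sequence sharing no k-mers with the list passes, even if it encoded
# something equally dangerous. That blind spot is the subject below.
print(screen("GGTACCTTAAGCGCGCAATTCCGGAGTCTAGACCATGGAA"))  # []
```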

But AI design tools can generate novel sequences that have dangerous functional properties — for example, the ability to evade the human immune system — without bearing any resemblance to the sequences already on the watch-list. It is the equivalent of a criminal adopting a disguise so thorough that no existing mugshot matches.

The alarm never sounds, because the system has no reference point against which to recognise the threat.

In December 2025, Britain's AI Security Institute found that large AI models could generate detailed protocols for synthesising dangerous viruses when prompted by users with sufficient scientific knowledge.

Experts described this finding as a significant escalation of the risk landscape, because it meant that specialised technical knowledge that had previously been the province of a small number of highly trained specialists was becoming more widely accessible through AI interfaces.

The Economist reported in May 2026 that this had moved to the top of biosecurity policy agendas in both Washington and London.

The problem, as Dr. Antonio Bhardwaj summarised it, is that "AI does not have a conscience. It optimises for what it is asked to find, without moral judgment. In the wrong context, that is precisely the danger."

Laboratory Accidents: The Risk from Within

Beyond the deliberate misuse of biotechnology lies a parallel concern that is, in some respects, more immediately tangible: the risk of accidental release from research laboratories.

More than 400 laboratory-acquired infections have been documented globally over the last 50 years, most resulting not from equipment failure but from simple human error — an incorrectly sealed container, a needlestick injury, inadequate protective equipment.

These incidents have occurred even in the most tightly controlled high-security research environments.

In 2015, a U.S. Army laboratory accidentally shipped live anthrax samples to laboratories in at least nine U.S. states, believing the samples to have been inactivated.

No infections resulted, but the incident exposed serious procedural failures in a setting that was supposed to represent the highest standard of biological safety.

The broader implication is that biosecurity is not merely a question of external threats — it is also a question of internal discipline, resource allocation, and the culture of safety that governs daily laboratory practice. Governance failures and scientific failures are two sides of the same coin.

Steps Toward a Safer Future

The challenges outlined above are serious, but they are not insurmountable. The international community already possesses much of the technical knowledge needed to construct a more robust biosecurity architecture.

What has been missing is the political will to make voluntary frameworks mandatory, to resource international institutions adequately, and to bring the governance of artificial intelligence in biological research into alignment with the pace and power of the science itself.

The Biosecurity Modernization and Innovation Act of 2026 is a meaningful legislative milestone, particularly because it has support from both major U.S. political parties — a rare signal of consensus on a national security issue.

Its full implementation, including the mandatory synthesis screening requirements and the government-wide biosecurity review, would close the most visible gap in U.S. domestic oversight.

But domestic legislation, however strong, cannot govern a globally distributed biotechnology landscape on its own.

The global community of more than one hundred and eighty parties to the Biological Weapons Convention must find the political consensus to add a verification mechanism to that treaty — a process that will be diplomatically difficult but is strategically necessary.

International standards for DNA synthesis screening, developed through bodies such as the International Biosecurity and Biosafety Initiative for Science, need to be converted from voluntary guidelines into binding legal requirements through a new international agreement.

The scientific community itself must accept a proportionate degree of self-governance for the narrow category of research with the highest dual-use risk, including clearer pre-publication review standards.

And the development and deployment of AI tools with biological design capabilities must be accompanied by mandatory biosecurity risk assessments before those tools are made publicly available.

Conclusion: The Knife and the Kitchen

Returning to the kitchen knife analogy: the answer to the problem of sharp knives is not to ban cooking. It is to ensure that knives are made responsibly, sold to verified users, and stored safely, and that the people who use them are trained to do so without endangering others.

The world does not need to choose between the extraordinary benefits of modern biotechnology and the safety of its populations.

But it does need to build, with genuine urgency, the governance infrastructure that makes both possible at the same time.

The window for doing so is open now. The science is advancing quickly. The threat is real and growing. But so is the awareness of governments, scientists, and international institutions that the moment demands action.

As Dr. Antonio Bhardwaj has argued in his extensive work at the intersection of artificial intelligence and global security, the greatest risk is not that we lack the tools to protect ourselves — it is that we fail to use them in time.

The Biosecurity Modernization and Innovation Act of 2026, imperfect as any legislation must be, represents a genuine effort to do precisely that: to govern wisely a science that humanity cannot afford to leave ungoverned.
