A National Security Insider Does the Math on the Dangers of AI

One type of risk you’ve been very interested in for a long time is “biorisk.” What’s the worst thing that could possibly happen? Take us through that.

I started out in public health, working on infectious disease control (malaria and tuberculosis), before I moved into national security. In 2002, the first virus was synthesized from scratch on a Darpa project, and it was sort of an “oh crap” moment for the biosciences and the public health community: a realization that biology was going to become an engineering discipline that could potentially be misused. I was working with veterans of the smallpox eradication campaign, and they thought, “Crap, we just spent decades eradicating a disease that now could be synthesized from scratch.”

I then moved into biosecurity, trying to figure out: How could we increase the security around biolabs so that they’re less likely to be misused? How can we detect biological weapons programs, which, unfortunately, still exist in significant numbers in a few places in the world? And how can we bake more security into society so that we’re more resilient not only to an engineered pandemic but also to natural pandemics?

There’s a lot of vulnerability that remains in society, and Covid was a demonstration of this. It was a relatively mild virus in historic terms, with an infection fatality rate of less than 1 percent, whereas some natural viruses have fatality rates well above 50 percent, and some synthetic viruses have close to 100 percent lethality while still being as transmissible as SARS-CoV-2. Even though we know how to design vaccines and manufacture them very quickly, getting them approved takes about as long today as it did 20 years ago. So the time you would need to vaccinate a population is about the same today as it was for our parents and even our grandparents.

When I first started getting interested in biosecurity in 2002, it cost many millions of dollars to construct a poliovirus, a very, very small virus, and it would have cost close to $1 billion to synthesize a pox virus, a very large virus. Today, synthesizing that pox virus would cost less than $100,000, a 10,000-fold decrease over that period. Meanwhile, vaccines have actually tripled in cost over the same period. The offense-defense asymmetry is moving in the wrong direction.
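As a quick sanity check on that arithmetic, here is a minimal back-of-the-envelope sketch. It assumes the round endpoints quoted above ($1 billion then, $100,000 now); the exact figures are illustrative, not precise estimates:

```python
# Rough check of the synthesis-cost arithmetic quoted above.
# The endpoints are the round numbers from the interview, not precise data.
pox_cost_2002 = 1_000_000_000  # ~ $1 billion to synthesize a pox virus, circa 2002
pox_cost_today = 100_000       # < $100,000 today

fold_decrease = pox_cost_2002 / pox_cost_today
print(f"{fold_decrease:,.0f}-fold decrease")  # prints: 10,000-fold decrease
```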

And what do you see as our greatest adversary when it comes to biorisk?

First is nature. Natural viruses keep evolving, and we’re going to have future viral pandemics. Some of them will be worse than Covid, some of them won’t be as bad, but we’ve got to be resilient to both. Covid cost the US economy alone more than $10 trillion, and yet what we spend on preventing the next pandemic is maybe $2 billion to $3 billion in federal investment.

Another category is intentional biological attacks. Aum Shinrikyo was a doomsday cult in Japan that had a biological weapons program. They believed that they would be fulfilling prophecy by killing everybody on the planet. Fortunately, they were working with 1990s biology, which wasn’t that sophisticated. Unfortunately, they then turned to chemical weapons and launched the Tokyo sarin gas attacks.

We have individuals and groups today that have mass-casualty intent and increasingly express interest in biology as a weapon. What’s preventing them from using biology effectively isn’t controls on the tools or the raw materials, because those are all now available in many laboratories and on eBay. You can buy a DNA synthesizer for well under $100,000, and you can get all the materials and consumables you need from most scientific supply stores.

What an apocalyptic group would lack is the know-how to turn those tools into a biological weapon, and there’s a concern that AI makes that know-how more widely available. Research by [AI safety and research company] Anthropic has used risk assessments to see whether these tools could be misused by somebody without a strong bio background. Could they basically get graduate-level training from a digital tutor in the form of a large language model? Right now, probably not. But if you extrapolate the progress of the last couple of years, the barrier to entry for somebody who wants to carry out a biological attack is eroding.
