Biden’s executive order requires the industry to check its models for vulnerabilities to misuse. That’s a step in the right direction.

By: Klon Kitchen, November 2nd

Artificial intelligence poses threats to U.S. national security, and the Biden administration takes them seriously. On Oct. 30 the president signed a wide-ranging executive order on artificial intelligence. Among other things, it mandates that a significant portion of the nation's AI industry check its models for national-security vulnerabilities and potential misuses. This means assembling a "red team" of experts to try to make their AIs do dangerous things, and then devising ways of protecting against similar threats from outside.

This isn't a mere bureaucratic exercise. It is a clarion call for a new era of responsibility. The executive order defines a dual-use foundation model as any AI model "that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters."

Under this definition, AI regulation is no longer confined to applications in defense or intelligence but encompasses a wide array of AI offerings and services. This means that if your company is dabbling in AI, it must pre-emptively game out potential misuses. The government has essentially declared that the AI industry is part of the national-security apparatus, whether it likes it or not.

This isn't merely about preventing chatbots from answering dangerous questions like "How do I build a bomb?" As large language models and other emerging AI technologies come into use in various sectors of the economy and public life, the risks that companies must pre-emptively discern will evolve dramatically.

Consider a chemical company that deploys a large language model to manage its proprietary business and research data. The company must assess the likelihood that a disgruntled employee could misuse this AI to construct a chemical weapon or, more likely, to publish the instructions online. But the company also needs to discern reputational risks. What would happen if hackers gained access to the company's AI and used it to conduct other illegal activity? That would be catastrophic for the company's reputation and stock price.

The problem is one of managing risk, not eliminating it. The landscape of AI and national security is constantly changing, so planning for misuse must be a continuing process that becomes a core business responsibility. The executive order puts companies at the center of the U.S. government's thinking about national security, foreign policy, economic prosperity and domestic stability. Perfection isn't attainable, but the order demands rigorous and deliberate efforts to mitigate risk.

No company can do this alone. The spectrum of threats is too vast, and the complexities too numerous, for any single company to tackle. Business leaders must build teams that combine internal capabilities with outside expertise. Collaboration with government agencies and other stakeholders is imperative.
