Some worry, for instance, that AI might help a nefarious actor invent the next deadly pathogen. Makers of LLMs continually test their models to prevent abuse, yet as with any adversarial pursuit, those intent on "breaking" the models have found inventive ways to bypass safeguards. Even here, security experts note that knowledge of this sort already exists on the dark web and in private forums, so new AI tools may add little to bad actors' fundamental capabilities.

There is another, more basic type of risk, one that involves design errors rather than LLMs. For example, a poorly designed AI tool to allocate medical staff led to worse outcomes in a hospital, according to researchers who published their findings in Science in 2019. The problem was traced to the tool's design: the program's creators, lacking accurate patient data, used billing records as a proxy for ill health. The higher a person's hospital bills, the worse the system assumed their health must be. This meant that two patients with the same ailments, one wealthy and one poor, would receive disparate allocations of resources: the poorer patient, having generated smaller bills, was scored as healthier and allotted less care. As a result, the gap in outcomes grew, the opposite of what hospital administrators had intended. The researchers who examined the system recommended thorough audits before any AI is deployed in real-world settings.

Are Global AI Rules the Answer?

AI will almost certainly find its way into a host of illicit activities, just as every preceding technology has. But before we fear a world ruled by robots, Georgetown computer science professor Cal Newport reminds us that intelligence alone does not equal domination. Imagine putting the most intelligent person on earth in a cage with a tiger: despite all that intelligence, the tiger still wins. Humanity is the tiger. We may not be the most intelligent species on earth once we create computers that surpass us, but we certainly can be the most destructive.

Still, the risks of AI must be balanced against its upsides. We are only at the beginning of the current wave of AI-powered efficiency gains. Just as the internet changed how most businesses function and boosted economic productivity, so will LLMs. And just as the internet enabled criminals and rogue regimes, AI will likely unlock new capabilities for them too. Even with the potential for abuse by bad actors, careful consideration of trade-offs should form the foundation of any debate around AI regulation. We should ask: "What precisely are the risks of this technology, and what are the trade-offs of stricter regulation?"

In the early years of the World Wide Web, Congress considered a proposal requiring anyone who wanted to create a website to obtain a license from the government. That sounds incredible now, but there was real fear of change then. Ultimately, regulators adopted a "first do no harm" policy toward the internet, a wise decision that let the ecosystem mature and unlocked trillions of dollars in economic growth. An effective policy approach to AI could likewise accelerate economic growth, promote pluralism and democracy, and make it harder for dictators to flourish.

AI and Diplomacy

Stanford professor Andrew Ng, a pioneer of the deep learning technology behind today's LLMs, likens AI to electricity.
As he said in 2017: "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years." While Professor Ng sees AI changing everything, others in the field believe the technology will play out more like social media: overhyped, problematic, and of marginal utility. Diplomats need to be prepared for both eventualities.

Should AI continue progressing at its current pace, it is easy to imagine dystopian scenarios. Situations that seem like science fiction could be executed using today's technology. For example, a military could deploy machine-vision-enabled drones that loiter over a battlefield for hours, waiting for a human to step out into the open (there is some evidence Ukraine may be doing this now). Or consider a less-than-lethal turret mounted along a border that autonomously targets and fires on intruders.

Beyond the battlefield, dictators could use new AI systems to suppress their populations by turbocharging intelligence collection. A significant obstacle to effective signals intelligence has been the sheer volume of collected data: trillions of text messages, emails, and telephone conversations, far too much for manual review. But with an LLM at the helm, a police agency could make natural-language queries and, within hours, have dossiers in hand. Similarly, mass surveillance has required mass employment to review real-time CCTV footage. But now that AI can recognize faces with better-than-human accuracy, we may soon see a future like the one depicted in the film "Minority Report," in which the protagonist narrowly evades capture by authorities using a sophisticated camera network. (CIA Director Bill Burns notes that this eventuality is already hampering human intelligence gathering.)

Then there are the societal implications of yet more technology that promotes "para-social" relationships, those that are one-sided and lack the option of reciprocity. The term was coined in 1956 by sociologists Donald Horton and R. Richard Wohl to describe how audience members grew attached to radio and television personalities.