An AI Primer for Policy Professionals

As international affairs professionals, we need to understand the fundamentals of new technology so we can better address its evolution and application.

BY ZED TARAR

Artificial intelligence (AI) is everywhere you look, seemingly infused into every conversation. But is this another tech hype cycle, or are we on the cusp of a step change in productivity, akin to the birth of the modern internet? And what should international affairs professionals make of the technology?

To answer these questions thoroughly, we need a brief primer on the history of AI, what it is today, and where it might go in the next few years. As we will see, advances in several domains—from data collection to processing power and programming—have converged to produce a breakthrough in AI, allowing us to deploy the technology in ways that would have seemed impossible even five years ago. That means we must also reckon with AI’s sociopolitical ramifications and the technology’s military applications.

First: a working definition. The inside joke among AI researchers is that AI is whatever computers can’t do yet. Applications once described as AI are now considered routine algorithms. The early spam filters that Google and Microsoft deployed were examples of AI, as is the machine vision that makes character recognition possible. For our purposes, we can define AI as any computer program developed through iterative learning rather than through manually coded instructions. A toy mouse programmed to follow a set path through a maze is not AI; a mouse that uses sensors to “learn” a path by bumping into obstacles and iteratively adjusting its trajectory is.
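To make the distinction concrete, consider the minimal Python sketch below of those two toy mice. The maze layout, the hard-coded route, and the learn_path function are hypothetical illustrations, not code from any real product: one mouse simply replays a route written out in advance, while the other discovers a route by probing its surroundings, bumping into walls, and backtracking out of dead ends.

    # 0 = open cell, 1 = wall; a made-up 4x4 maze for illustration
    MAZE = [
        [0, 0, 1, 0],
        [1, 0, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]
    GOAL = (3, 3)

    # Not AI: every move is spelled out in advance by a programmer.
    HARD_CODED_PATH = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2), (2, 3), (3, 3)]

    def learn_path(pos=(0, 0), visited=None):
        """Closer to AI in spirit: probe neighboring cells, 'bump into'
        walls, and back out of dead ends until the goal is reached."""
        visited = visited or set()
        if pos == GOAL:
            return [pos]
        visited.add(pos)
        row, col = pos
        for d_row, d_col in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
            nxt = (row + d_row, col + d_col)
            off_grid = not (0 <= nxt[0] < 4 and 0 <= nxt[1] < 4)
            if off_grid or MAZE[nxt[0]][nxt[1]] == 1 or nxt in visited:
                continue  # bumped into a wall or a cell already tried: adjust course
            rest = learn_path(nxt, visited)
            if rest:
                return [pos] + rest
        return None  # dead end: back up and try another direction

    print(learn_path())  # a route discovered by exploration, not pre-programmed

The second mouse is hardly intelligent, but the difference in kind is the point: its behavior emerges from trial and error rather than from instructions written out move by move.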

How to Think About AI

There are many kinds of AI applications, and they are designed to solve various tasks. Some models help search and prioritize knowledge; others are trained to identify and categorize sounds like human speech or images like cancerous tumors in X-rays.

A “model” is simply a particular architecture for an AI program—think of it as a recipe. Once you have a recipe, you train the model on a dataset, and then you deploy it on a specific task. Much of AI development rests on the “recipes” (the architectures and hyperparameters) AI researchers choose.
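A rough sketch of that recipe-train-deploy cycle follows, written in Python with the open-source scikit-learn library. The particular architecture, settings, and dataset (a classic flower-measurement set) are arbitrary choices for illustration, not anything a frontier lab would use.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # 1. The "recipe": an architecture plus hyperparameters chosen by the researcher.
    model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)

    # 2. Training: fit the recipe to a dataset.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model.fit(X_train, y_train)

    # 3. Deployment: apply the trained model to examples it has never seen.
    print("Accuracy on unseen examples:", model.score(X_test, y_test))

Swap in a far larger architecture and an internet-scale dataset, and those same three steps, at vastly greater cost, describe how today’s largest models are built.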

Poke around under the hood of these models and you will find a relatively simple, if vast, base architecture known as a neural network: layers of artificial neurons connected by billions of weights and simple functions, trained by a set of algorithms on large datasets. Diagrams of large models seem laughably simple—lines running from node to node, left to right, like a giant garden trellis, with simple math functions along the path. That very simplicity led many AI researchers in the 1980s to deride neural networks as a dead end.
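The Python sketch below, using the NumPy library, shows such a trellis in miniature: randomly chosen weights and two simple functions turn four input numbers into three output probabilities, flowing left to right. The sizes here are invented for illustration, and training, not shown, consists of nudging those weights over many passes through the data to reduce prediction error.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((4, 8))   # weights: input layer -> hidden layer
    W2 = rng.standard_normal((8, 3))   # weights: hidden layer -> output layer

    def forward(x):
        hidden = np.maximum(0, x @ W1)                 # a simple function along the path
        scores = hidden @ W2                           # another weighted sum
        return np.exp(scores) / np.exp(scores).sum()   # convert scores to probabilities

    print(forward(rng.standard_normal(4)))  # three probabilities that sum to 1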

The type of AI getting the most attention right now is the large language model (LLM). LLMs are trained to predict the next word in a sequence; related models predict the next pixel in an image or the next slice of an audio waveform. They are sometimes called “stochastic parrots”—like our loquacious feathered friends, these models excel at mimicking their training data, giving us everything from haikus to convincing scientific jargon. And as with a real parrot, we know little of what goes on inside the LLM’s computational framework. That’s why an LLM is fantastic at summarizing a given document but untrustworthy if queried without constraints.
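The toy Python program below captures the parrot idea at its crudest. It counts which word follows which in a short, invented scrap of training text, then generates new text by repeatedly sampling a likely next word. Real LLMs rely on neural networks and incomparably more data, but the core task, predicting what comes next, is the same.

    import random
    from collections import defaultdict, Counter

    training_text = (
        "the ambassador met the minister and the minister met the press "
        "and the press asked the ambassador about the talks"
    )

    # Count which word tends to follow which in the training text.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def parrot(word, length=8):
        """Generate text by repeatedly sampling a plausible next word."""
        output = [word]
        for _ in range(length):
            options = follows.get(output[-1])
            if not options:
                break
            output.append(random.choices(list(options), weights=options.values())[0])
        return " ".join(output)

    print(parrot("the"))  # fluent-sounding mimicry of the training data

The output sounds vaguely diplomatic because the training text did; the program has no idea what an ambassador is. That, in caricature, is the parrot critique.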

When we train large models like GPT-4 on immense data sets and reduce error on “word prediction,” we train the model to build complex representations of the world. OpenAI co-founder Greg Brockman made this point in a TED talk in April: “We just show [the model] the whole world, the whole internet and say, ‘Predict what comes next in text you’ve never seen before,’ and this process imbues it with all sorts of wonderful skills. … If you are shown a math problem, the only way to complete that math problem, to say what comes next … is to actually solve the math problem.” The abilities an LLM develops that its original programmers could not predict are known as emergent properties. For example, Google discovered a multilingual property in its model, Bard, when it returned responses in Bengali, even though the model hadn’t been explicitly programmed or trained for that function.

One way to think of LLMs is as a distillation of human knowledge compressed into machine form. The compression sacrifices some fidelity but allows for faster processing. By compressing immense amounts of human knowledge, we can deploy these models in new ways, with natural-language capabilities that until now existed only in science fiction.

Is AI a National Security Threat?

The current debate can be illustrated by studying two feuding AI researchers—the pair responsible for much of the architecture that underpins AI today—Geoffrey Hinton and Yann LeCun. They are on opposite sides of the AI-threat debate. Hinton left Google in 2023 over concerns that AI could end humanity. LeCun, now with Meta (Facebook’s parent company), calls that notion preposterous, noting that today’s LLMs, while impressive in their ability to contain knowledge, lack common sense and are less intelligent than a cat. Models have a poor understanding of the world and often make basic mistakes.

LeCun argues it will take another breakthrough before AI is smarter than a cat, let alone a human. Even then, he contends, building safety into the model architecture would guarantee that none of us ever serve robot overlords. In LeCun’s view, any group capable of creating sophisticated AI would have obvious incentives to ensure the software performed as required, which leaves little risk of AI “breaking free” of its programmers and acting in ways that produce negative externalities. In any case, fears of AI-driven catastrophe remain largely academic compared with the safety issues AI developers are grappling with today.

One primary concern with the new LLMs, for instance, is their ability to create convincing synthetic media that could further erode public trust, ushering in a “post-truth” era. We’ve already seen deepfakes deployed in the U.S. election and can expect more. Still, this concern may be overstated. As anyone who has worked on countering disinformation will attest, the larger societal context matters much more than the technical means of spreading propaganda. Studies confirm this: People reject false information that does not accord with their worldview but accept it when it aligns with their preconceived notions of reality. It is that dynamic, not technology alone, that pushes societies toward conspiracy theories. The Soviets used disinformation effectively long before chat rooms and encrypted messaging. And when ISIS leveraged new tools to recruit violent fanatics, it was the underlying societal currents that made its radicalization attempts successful, not its use of private chatrooms alone.

Nonetheless, implementing AI carries real risks, just as implementing any technology does. With LLMs, one risk is that they could provide dangerous instructions when prompted: directions for making explosives, for example, or help for a nefarious actor trying to engineer the next deadly pathogen. Makers of LLMs continually test their models to prevent abuse, yet as with any adversarial pursuit, those intent on “breaking” the models have found inventive ways to bypass safeguards. Even here, security experts note that knowledge of this sort already circulates on the dark web and in private forums, so new AI tools may not meaningfully expand bad actors’ fundamental capabilities.

There is another, more basic type of risk—and we are not talking about LLMs here—that involves design errors. For example, a poorly designed AI tool for allocating medical care led to worse outcomes for patients, according to researchers who published their findings in Science in 2019. The problem was traced to its design: The program’s creators, lacking direct measures of patient health, used billing records as a proxy for ill health. The higher a person’s hospital bills, the sicker the system assumed they must be. That meant two patients with the same ailments, one wealthy and one poor, would receive different allocations of resources. As a result, the gap in outcomes grew, the opposite of what hospital administrators had intended. The researchers who examined the system recommended thorough audits before any AI is deployed in real-world settings.

Are Global AI Rules the Answer?

AI will almost certainly find its way into a host of illicit activities, just as every preceding technology has. But before we fear a world ruled by robots, Georgetown computer science professor Cal Newport reminds us that intelligence alone does not equal domination. Imagine putting the most intelligent person on earth in a cage with a tiger—despite all that intelligence, the tiger still wins. Humanity is the tiger—we may not be the most intelligent species on earth once we create computers that surpass us, but we certainly can be the most destructive.

Still, the risks of AI must be balanced against the upsides. We’re only at the beginning of this current wave of AI-powered efficiency gains. Just as the internet changed how most businesses function and boosted economic productivity, so will LLMs. And just as the internet enabled criminals and rogue regimes, AI will likely unlock new capabilities for them too.

Even with the potential for abuse by bad actors, any debate around AI regulation should rest on a deliberate, careful weighing of trade-offs. We should ask: “What precisely are the risks of this technology, and what are the trade-offs of stricter regulation?” In the early years of the World Wide Web, Congress considered a proposal requiring anyone interested in creating a website to obtain a license from the government. That sounds absurd now, but the fear of change was real at the time. Ultimately, regulators adopted a “first do no harm” policy for the internet, a wise decision that let the ecosystem mature and unlocked trillions in economic growth.

An effective policy approach to AI could accelerate economic growth, promote pluralism and democracy, and make it harder for dictators to flourish.

AI and Diplomacy

One of the pioneers of the deep-learning techniques behind today’s LLMs, Stanford professor Andrew Ng, likens AI to electricity. As he said in 2017: “Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.” While Professor Ng sees AI changing everything, others in the field believe the technology will play out more like social media—overhyped, problematic, and of marginal utility.

Diplomats need to be prepared for both eventualities. Should AI continue progressing at its current pace, it is easy to imagine dystopic scenarios. Situations that seem like science fiction could be executed using today’s technology. For example, a military could deploy machine-vision-enabled drones over a battlefield that loiter for hours, waiting for a human to step out into the open (there is some evidence Ukraine may be doing this now). Or consider a less-than-lethal turret mounted along a border that autonomously targets and fires on intruders.

Beyond the battlefield, dictators could use new AI systems to suppress their populations by turbocharging intelligence collection. A significant obstacle to effective signals intelligence has been the sheer volume of collected data—trillions of text messages, emails, and telephone conversations—far too much for manual review. But with an LLM at the helm, a police agency could make natural-language queries and, within hours, have dossiers in hand.

Similarly, mass surveillance has long required legions of people to review real-time CCTV footage. But now that AI can recognize faces with better-than-human accuracy, we may soon see a future like the one depicted in the film “Minority Report,” in which the protagonist narrowly evades capture by authorities using a sophisticated camera network (CIA Director Bill Burns notes this capability is already hampering human intelligence gathering).

Then there are the societal implications of yet more technology that promotes “para-social” relationships—those that are one-sided and lack the option of reciprocity. The term was coined in 1956 by sociologists Donald Horton and R. Richard Wohl to describe how audience members grew attached to television personalities. Today, New York University psychologist Jonathan Haidt, author of The Anxious Generation (2024), argues that AI will make social media even more toxic for society’s most vulnerable members, including children and adolescents.

According to researchers, lifelike conversations with chatbots could widen social distance and increase loneliness. And this, in turn, could further polarize democratic societies. Diplomats tasked with forging a global consensus on a contentious issue may find it impossible to reconcile differences among polarized national governments.

Be Ready for Surprises

Another surprising cultural shift, driven in part by AI, is the decline of foreign language study. According to the Modern Language Association, total enrollment in courses teaching a language other than English at American universities fell more than 16 percent from 2016 to 2021. The causes are multifaceted; still, the ubiquity, accuracy, and ease of AI-based translation tools make foreign language study less appealing. How that trend might reshape diplomacy 10 or 20 years from now is anyone’s guess.

As an old Danish proverb states, it is difficult to make predictions, especially about the future. One thing we can be sure of is that, just as personal computers and the internet changed our day-to-day reality, this next technological leap will similarly transform much of our lives in ways we can scarcely imagine.

Another analogy that could help us navigate AI is globalization. Economists largely agree that global trade has been a net positive—but the key word for diplomats is “net.” Globalization produced winners and losers, and the latter may be driving a resurgence of right-wing politics in Western democracies. Similarly, AI may be a net positive for the global economy—yet managing its negative externalities could be vital to preventing another set of unforeseen consequences.

Ultimately, our responsibility as international affairs professionals is to understand the fundamentals of this new technology so we can better address its evolution. We must watch AI closely or risk being caught off guard when the ground shifts beneath us.

Zed Tarar recently completed an MBA at London Business School and currently works in the Bureau of Near Eastern Affairs. He joined the Foreign Service in 2010 and has served in Abu Dhabi, Karachi, Rome, Washington, D.C., and London.

 
