How to Think About AI

There are many kinds of AI applications, and they are designed to solve different tasks. Some models help search and prioritize knowledge; others are trained to identify and categorize sounds like human speech or images like cancerous tumors in X-rays. A “model” is simply a particular architecture for an AI program—think of it as a recipe. Once you have a recipe, you train the model on a dataset, and then you deploy that model on a specific task. Much of AI development rests on the “recipes” (or hyperparameters) AI researchers use.

When poking around under the hood of these models, one finds a relatively simple, if vast, base architecture known as a neural network: layers of billions of neurons connected via weights and functions, trained using a set of algorithms on large sets of data. Diagrams of large models seem laughably simple—lines moving to nodes from left to right like a giant garden trellis, with simple math functions along the path. This simplicity led many AI researchers in the 1980s to deride neural networks as a dead end.

The type of AI getting the most attention right now is called a large language model (LLM). LLMs are trained to predict the next word in a series, the next pixel in an image, or the next waveform in audio (the short code sketch below illustrates this task in miniature). They are sometimes called “stochastic parrots”—like our loquacious feathered friends, these models excel at mimicking their training data, giving us everything from haikus to convincing scientific jargon. And as with a real parrot, we know little of what goes on inside the LLM’s computational framework. That’s why an LLM is fantastic at summarizing a given document but untrustworthy if queried without constraints.

When we train large models like GPT-4 on immense data sets and reduce error on “word prediction,” we train the model to build complex representations of the world. OpenAI co-founder Greg Brockman made this point in an April 2023 TED Talk: “We just show [the model] the whole world, the whole internet and say, ‘Predict what comes next in text you’ve never seen before,’ and this process imbues it with all sorts of wonderful skills. … If you are shown a math problem, the only way to complete that math problem, to say what comes next … is to actually solve the math problem.”

An LLM’s abilities to perform tasks its original programmers could not predict are known as emergent properties. For example, Google discovered a multilingual property in its model, Bard, when it returned responses in Bengali, even though the model hadn’t been explicitly programmed or trained for that function.

One way to think of LLMs is as a distillation of human knowledge compressed into machine language. The compression carries a loss in fidelity while allowing for faster processing. By compressing immense amounts of human knowledge, we can deploy these models in new ways, leveraging natural language capabilities that, until now, existed only in science fiction.

Is AI a National Security Threat?

The current debate can be illustrated by studying two feuding AI researchers—the pair responsible for much of the architecture that underpins AI today—Geoffrey Hinton and Yann LeCun. They are on opposite sides of the AI-threat debate. Hinton left Google in 2023 over concerns that AI could end humanity.
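For readers who want to see the “predict the next word” task in concrete terms, here is a deliberately oversimplified sketch in Python. It is not how GPT-4 or any production model works: real LLMs learn billions of neural-network weights by reducing prediction error over enormous data sets, while this toy merely counts which word tends to follow which in a made-up three-line sample. The sample text and the names followers and predict_next are illustrative inventions, not anything drawn from an actual system.

```python
# Toy illustration of the "predict the next word" objective described above.
# Real LLMs learn this with billions of neural-network weights; this sketch
# simply counts, for each word in a tiny sample text, the word that most
# often follows it.
from collections import Counter, defaultdict

sample_text = (
    "the model predicts the next word "
    "the model learns from data "
    "the model mimics the data"
)

words = sample_text.split()
followers = defaultdict(Counter)  # word -> counts of every word seen right after it
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the sample text."""
    if word not in followers:
        return "<unknown>"  # this toy at least admits when it has never seen the word
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))   # prints "model" -- it follows "the" three times above
print(predict_next("next"))  # prints "word"  -- the only word ever seen after "next"
```

A real LLM replaces that counting table with the layered network of weights described earlier and keeps adjusting those weights until its next-word errors shrink across immense amounts of text; the surprising capabilities Brockman describes emerge from that same simple objective.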
LeCun, now with Meta (Facebook’s parent company), calls that notion preposterous, noting that today’s LLMs, while impressive in their ability to contain knowledge, lack common sense and are less intelligent than a cat. Models have a poor understanding of the world and often make basic mistakes. LeCun argues it will take another breakthrough before AI is smarter than a cat, let alone a human. Even then, he contends, building safety into the model architecture will guarantee that none of us ever serve robot overlords. In LeCun’s view, any group that can create sophisticated AI would have obvious incentives to ensure the software performs as intended. That means little risk of AI “breaking free” of its programmers and acting in ways that produce negative externalities.

In any case, fears of AI leading to catastrophe are largely academic compared with the safety issues AI developers are grappling with today. A primary concern with the new LLMs, for instance, is their ability to create convincing synthetic media that could further erode public trust, leading to a “post-truth” era. We’ve already seen deepfakes deployed in the U.S. election and can expect more.

Still, this concern may be overstated. As anyone who has worked on countering disinformation will attest, the larger societal context matters much more than the technical means of spreading propaganda. Studies confirm this: People reject false information that does not accord with their worldview but accept it when it aligns with their preconceived notions of reality. It is those preconceptions, not technology alone, that push societies toward conspiracy theories. The Soviets used disinformation effectively long before chat rooms and encrypted messaging. And when ISIS leveraged new tools to recruit violent fanatics, the underlying societal currents were what made its radicalization attempts successful, not its use of private chat rooms alone.

Nonetheless, implementing AI carries real risks, just as implementing any technology might. In the case of LLMs, there is a risk that they could provide dangerous instructions when prompted. For example, an AI might offer directions for making explosives or biological weapons. There is concern, for