AI Disruption and Responsible Use in Diplomacy

A vision of how AI will be integrated into American democratic society is needed. State can contribute to the discussion.

BY EVANNA HU

“Good first morning of 2030. You logged 3 hours, 41 minutes of total sleep with only 9 percent in REM and 12 percent in slow-wave sleep, and a recovery score of 14 percent due to the two glasses of alcohol you consumed the night before. While you were asleep, I compiled an intel memo on the crisis in the Middle East from real-time verified OSINT [open-source intelligence] sources. The BLUF [bottom line up front] is that overnight, chatbot Netanyahu continued to shift policy positions toward the Orthodox right, while chatbot Abu Mazen is writing the latest version of two-state solution legislation in real time based on public comments and engagement on his Viber channel.”

“Thank you, AI. And what about our AI regulation on the Hill?”

“It continues to be deadlocked, as lawmakers can’t agree on what American society should look like with AI in it. The tech evangelists want AI enmeshed in everyday life, approaching the singularity; the decelerationists want to limit the use of AI until they know exactly what its impact will be; and the defense hawks want no regulations at all so that the U.S. can beat China. The American public is apathetic and wants only to use AI to simplify daily life.”

Although this is a made-up scenario, a version of which was used in a war-gaming exercise at a prominent think tank in Washington, D.C., it is not far removed from current reality; the technology already exists. Since ChatGPT exploded into the consumer market a year and a half ago, AI adoption has continued to grow at an exponential rate. This has forced governments, civil society, and the private sector to think about what roles AI should play and how it should be regulated.

Plenty has been published arguing for various positions, but the two common poles of the debate are the decelerationists, who want a pause in AI development and a minimal role for it in society, and the accelerationists, including OpenAI and Meta, who are gunning for AI to be embedded in every aspect of society and are making huge investments toward the goal of achieving artificial general intelligence (AGI), or AI with capabilities that rival those of a human.

The State Department has dipped its toes into these discussions with the October 2023 publication of “Enterprise Artificial Intelligence Strategy FY 2024-2025: Empowering Diplomacy Through Responsible AI,” guidelines for the use of AI within the State Department. While it is a good start, the document is so focused on specific implementation steps that it misses the forest for the trees. It does not offer a vision of how statecraft will be affected by AI.

Just as a leader needs a vision with which to inspire and win support, the State Department needs a vision so that its key stakeholders—both Americans and the international community—know where we are headed. Given the technical complexities and rapid development of tech tools, the American public will benefit from a shared understanding of how AI will be integrated into our society. Once the vision is in place, building the strategic blueprint and implementation plan can begin in earnest and be sustained over the long run.

With the establishment of State’s Bureau of Cyberspace and Digital Policy, the department has the platform to push not only for effective use of AI by a government agency but also, more vital to global leadership, for a broader vision of a society with AI that is in keeping with democratic values.

U.S. Foreign Policy and AI’s Scope

AI’s potential for disrupting the entire world order is one of the biggest reasons it should be treated separately from other technological advancements, including social media and quantum computing. Previous advancements have focused on automating and facilitating existing workflows, but AI will upend how we live, work, and function as a society. It will change our social contract nationally and with the international community.

Consider four of the core elements of America’s foreign policy, as Vice President Kamala Harris presented them at the 2024 Munich Security Conference. What are the implications of AI for each of them? Here are some questions we should be asking.

1. International Rules and Norms (as opposed to chaos). How much automation is safest? Who is in the driver’s seat, and to what extent? Who should govern the biases of AI algorithms when the algorithms are the intellectual property of private companies in an adversary country, or even in our own country? Who should hold parties accountable when, say, an autonomous weapon accidentally kills civilians instead of a military target? What are the limits of digital sovereignty and self-determination? What role should international organizations play in these debates?

2. Democratic Values (as opposed to authoritarianism). Should AI be allowed to create and spread disinformation and deepfakes in the name of freedom of speech, or should it be used to censor speech when the truth can be political or nebulous? How can AI be used to break through anti-democratic echo chambers and connect people online and offline? Is disinformation truly false information, or simply politically opposing speech? What information should be public? What are the accountability mechanisms for AI systems?

3. Collective Action (as opposed to unilateralism). Can AI be used to get nations and the international community to work better together, rather than to build more effective weapons for unilateral kinetic operations? Can the international community come together and create regulations as openly and broadly supported as the Geneva Conventions? What measures of accountability should be put in place?

4. Global Engagement (as opposed to isolationism). Are we using algorithms to help us connect with people we would not normally encounter, including people from different backgrounds? How can we use AI to deter isolationist tendencies?

Given its influence on the global stage and its mission to spread democratic values, the State Department needs to be part of the critical conversations about the AI vision and to help fill the narrative vacuum. Though this is a weighty task, State can offer valuable input.

So far, much of the discussion on ethics and values in the cyber realm has focused on what the State Department does not stand for: digital authoritarianism—that is, censorship of political speech and mass government surveillance and control without reasonable cause. But there are always limits to freedom of speech, and regulations on the roles that private companies, such as social media companies, play in moderating speech. Moreover, anonymity online protects good actors (such as pro-democracy activists in authoritarian regimes) and malign ones (such as those hacking into another country’s election) alike. The antithesis of digital authoritarianism is digital democracy, a nebulous concept that does not yet exist in practice.

Responsible AI

Developing a globally accepted vision for digital democracy might be a first step for the State Department. In any case, one approach to articulating the vision is to begin with the concept of Responsible AI (RAI). RAI was initiated by the private sector, which has led the way in AI research and development for the past half century. It focuses on seven values: accuracy and reliability; accountability and transparency; fairness and human-centricity; safety and ethics; security and resiliency; interpretability and documentation; and privacy and data governance. By embracing most, if not all, of these values, the State Department can clarify what digital democracy is—not only to the American public but also to its international partners.

The second step is to understand all the stakeholders involved in any AI application: the creator, whether an individual or an organization; the end users; and third-party beneficiaries. For example, if a predictive policing algorithm is developed for State’s Bureau of International Narcotics and Law Enforcement Affairs, stakeholders might include diplomats and their international law enforcement partners, as well as local community members and the communities at risk of being falsely flagged by the algorithm.

In AI development, the most affected stakeholders are often neglected and excluded from the process. Decisions determined by AI can be life-changing or life-and-death. Yet at present, affected parties are rarely part of any AI system evaluation. That should change.

The vision for responsible AI should be designed to survive well beyond the average two-year political cycle. Consider America’s grand narratives during the Cold War and the Global War on Terror: these decades-long visions and policies not only displayed America’s identity and values but also showed what success would look like across generations. They were easy for our international partners to understand, and they did not change regardless of who was in the White House or Congress.

To help design a durable vision for AI, State needs to work closely with the rest of the U.S. government, with meaningful engagement from civil society and the private sector. The AI Safety Institute at the Commerce Department’s National Institute of Standards and Technology is a public-private consortium that can offer insights and collaboration. This requires a shift in mindset, because AI is one of the few fields driven first and foremost by nongovernmental organizations and companies. Once the vision and values are set, they should be communicated strategically through public diplomacy efforts and held as the North Star for subsequent strategy and implementation plans.

We can’t keep passing the buck to the next generation; if we don’t get this right from the start, it will be too late. We need to have these hard conversations now, and the State Department, given its unique position of global influence, should not shy away from them.

Evanna Hu is the CEO of Omelas and a nonresident senior fellow at the Atlantic Council’s Scowcroft Center for Strategy and Security, specializing in the intersection of emerging technologies and national security. She is also a part of the Aspen Global Leaders Network and has won numerous awards for her work in using tech for good.