The Foreign Service Journal, June 2024

Just as a leader needs a vision with which to inspire and gain support, the State Department needs a vision so that its important stakeholders—both Americans and the international community—know where we are headed. Given the technical complexity and rapid development of these tools, the American public will benefit from a shared understanding of how AI will be integrated into our society. Once the vision is in place, building the strategic blueprint and implementation plan can begin in earnest and be sustained over the long run.

With the establishment of State's Bureau of Cyberspace and Digital Policy, the department has the platform to push not only for effective use of AI by a government agency but also, more vital to global leadership, for a broader vision of a society with AI that is in keeping with democratic values.

U.S. Foreign Policy and AI's Scope

AI's potential to disrupt the entire world order is one of the biggest reasons it should be treated separately from other technological advancements, including social media and quantum computing. Previous advancements have focused on automating and facilitating existing workflows, but AI will upend how we live, work, and function as a society. It will change our social contract both nationally and with the international community.

Consider four of the core elements of America's foreign policy, as Vice President Kamala Harris presented them at the 2024 Munich Security Conference. What are the implications of AI for each of them? Here are some questions we should be asking.

1. International Rules and Norms (as opposed to chaos). How much automation is safest? Who is in the driver's seat, and to what extent? Who should govern the biases of AI algorithms when the algorithms are the intellectual property of private companies in an adversary country, or even in our own? Who should hold parties accountable when, say, an autonomous weapon accidentally kills civilians instead of a military target? What are the limits of digital sovereignty and self-determination? What role should international organizations play in these debates?

2. Democratic Values (as opposed to authoritarianism). Should AI be permitted to create and spread disinformation and deepfakes in the name of freedom of speech, or should it be used to censor speech even when the truth can be political or nebulous? How can AI be used to break through echo chambers, which are anti-democratic, and connect people online and offline? Is disinformation truly false information, or simply speech one politically opposes? What information should be public? What accountability mechanisms should govern AI systems?

3. Collective Action (as opposed to unilateralism). Can AI be used to help nations and the international community work better together, rather than to build more effective weapons for unilateral kinetic operations? Can the international community come together and create regulations as openly and broadly supported as the Geneva Conventions? What measures of accountability should be put into place?

4. Global Engagement (as opposed to isolationism). Are we using algorithms to help us connect with people we normally would not connect with, or with people from different backgrounds? How can we use AI to deter isolationist tendencies?

Given its influence on the global stage and its mission to spread democratic values, the State Department needs to be in the critical conversations about the AI vision and help fill the narrative vacuum.
Though this is a weighty task, State can offer valuable input. So far, much of the discussion on ethics and values in the cyber realm has focused on what the State Department does not stand for: digital authoritarianism—that is, censorship of political speech and mass government surveillance and control without reasonable cause. But there are always limits to freedom of speech, and there are regulations on the roles that private companies, such as social media companies, play in moderating speech. Moreover, anonymity online protects good actors (such as pro-democracy activists in authoritarian regimes) and malign actors (such as those hacking into another country's election) alike. The antithesis of digital authoritarianism is digital democracy, a nebulous concept that does not yet exist in practice.

Responsible AI

Developing a globally accepted vision for digital democracy might be a first step for the State Department. In any case, one approach to articulating the vision is to begin with the concept of Responsible AI (RAI). RAI was initiated by the private sector, which has led the way in AI research and development for the past half century. It focuses on seven values: accuracy and reliability; accountability and transparency; fairness and human-centricity; safety and ethics; security and resilience; interpretability and documentation; and privacy enhancement and data governance. By choosing most, if not all, of these values, the State Department can clarify what digital democracy is—not only to the American public but also to its international partners.

The second step is to understand all the stakeholders
