OpenAI Says Russia and China Used Its AI in Clandestine Operations

OpenAI recently revealed that it had uncovered and disrupted five covert influence campaigns that exploited its generative AI technology to deceptively manipulate public opinion worldwide. The campaigns were orchestrated by state actors and private entities in Russia, China, Iran, and Israel.

Russian Campaigns

Two Russian operations, dubbed “Bad Grammar” and “Doppelganger,” used OpenAI’s models to generate social media content, translate articles, compose headlines, and develop automated bots. These campaigns targeted Ukraine, the US, NATO, and the EU, portraying them negatively in the context of the Russia-Ukraine war.

Chinese Campaign

A Chinese network known as “Spamouflage” employed OpenAI’s technology to generate text in English, Chinese, Japanese, and Korean. The content criticized prominent critics of Beijing, highlighted abuses against Native Americans, and was disseminated on platforms such as Twitter and Medium.

Iranian Campaign

An Iranian group called the “International Union of Virtual Media” used OpenAI’s AI to generate articles criticizing the US and Israel and to translate them into English and French; the articles were then published on the group’s website.

Israeli Campaign

An Israeli political firm named “Stoic” (also known as “Zero Zeno”) generated articles and comments supporting Israel’s military actions in Gaza. These were targeted at users in Canada, the US, and Israel, often posing as pro-Israel college students or minority groups.

Despite their efforts, none of these campaigns achieved a significant boost in audience engagement or reach from using OpenAI’s services. Even so, the report underscores growing concern over the potential misuse of generative AI for online disinformation, particularly during major election cycles.

Incidents like these raise a broader question: there is currently no single worldwide governing body or group that sets universal standards for all AI chatbot companies. However, there are some efforts and initiatives aimed at establishing guidelines and best practices for the responsible development and deployment of AI systems, including chatbots:

  1. The OECD (Organisation for Economic Co-operation and Development) has developed the OECD AI Principles, which provide a set of recommendations for the responsible development and use of AI systems, including transparency, robustness, and accountability.
  2. The European Union has proposed the AI Act, which aims to regulate AI systems based on their level of risk, with stricter requirements for high-risk AI applications.
  3. The IEEE (Institute of Electrical and Electronics Engineers) has established the Ethically Aligned Design guidelines, which provide recommendations for prioritizing ethical considerations in the development of autonomous and intelligent systems.
  4. The Partnership on AI, a multi-stakeholder organization, has developed best practices for the responsible development and deployment of AI systems, focusing on areas such as fairness, transparency, and accountability.
  5. The National Institute of Standards and Technology (NIST) in the United States has developed the AI Risk Management Framework, which provides guidance for managing risks associated with AI systems.

While these initiatives provide valuable guidance, they are not legally binding and have not been universally adopted by AI chatbot companies. Many companies also develop their own internal guidelines and policies for the responsible development and use of AI systems, including chatbots. Regulation and governance of AI remain an evolving area, with ongoing discussion and debate about the need for more comprehensive, enforceable standards at both the national and international levels. The OpenAI case above is just one example of why company-level governance and policies must continue to improve, and why enforceable standards need to be developed worldwide.
