OpenAI has identified and disrupted five influence operations that were using its artificial intelligence (AI) tools in one way or another.

The operations, one each from China, Iran, and Israel, and two from Russia, focused on spreading political messaging. As OpenAI reports, they primarily used AI to generate text such as social media posts and comments, and also to carry out productivity tasks like debugging code.

None of them were particularly effective, however. On the Brookings Breakout Scale, which measures the impact of influence operations on a scale of 1 to 6, none scored higher than a 2. A score of 1 means the campaign spread only within a single community or platform, while a 6 means it triggered a policy response or some other form of concrete action, such as violence. A 2 means the operation spread across multiple communities on one platform, or one community across multiple platforms.
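For readers who want to reason about these scores programmatically, the scale can be modeled as a simple lookup table. The sketch below is illustrative only: it encodes just the levels described in this article (levels 3 through 5 are omitted), and the `BREAKOUT_SCALE` table and `describe_score` helper are hypothetical names, not part of any official tooling.

```python
# Illustrative sketch: the Breakout Scale levels described in this
# article, modeled as a lookup table. Levels 3-5 are omitted because
# the article does not define them.
BREAKOUT_SCALE = {
    1: "Spread within a single community or platform only",
    2: "Spread across multiple communities on one platform, "
       "or one community across multiple platforms",
    6: "Triggered a policy response or other concrete action, "
       "such as violence",
}

def describe_score(score: int) -> str:
    """Return the description for a Breakout Scale score, if known."""
    return BREAKOUT_SCALE.get(score, f"Level {score}: not described here")

# None of the five operations scored higher than a 2:
print(describe_score(2))
```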

The Current State of AI-Driven Influence Ops

The influence operations in question, while geographically diverse, were ultimately rather similar in nature:

  • Among the most notorious of them is Spamouflage, from China. It used OpenAI tooling to debug its code, research social media activity, and post content to X, Medium, and Blogspot in multiple languages.

  • Bad Grammar, a newly discovered threat from Russia, operated primarily on Telegram, targeting individuals in Eastern Europe and the United States. It used AI to debug the code behind a Telegram bot and to write political comments on the platform in both Russian and English.

  • A second Russian group, Doppelganger, used AI to post comments on X and 9GAG in five European languages, as well as to generate headlines and to translate, edit, and convert news articles into Facebook posts.

  • An Iranian entity, known as the International Union of Virtual Media (IUVM), used AI to generate and translate articles, headlines, and website tags for its site.

  • Finally, there’s Zero Zeno, an operation run by Stoic, a Tel Aviv-based political marketing and business intelligence company. Stoic used OpenAI’s tools to generate articles and comments for Instagram, Facebook, X, and other websites.

Stoic has also drawn attention lately from Meta. In its latest “Adversarial Threat Report,” Meta reported taking down 510 Facebook accounts, 32 Instagram accounts, 11 pages, and one group associated with the company. The operation’s reach was modest: only around 2,000 accounts followed its various Instagram accounts, about 500 followed its Facebook pages, and fewer than 100 joined its Facebook group.

To combat AI misuse, OpenAI wrote in a more detailed report that it is collaborating with industry partners and using what it learns from threat activity to design more secure platforms for users. The company also “invest[s] in technology and teams to identify and disrupt actors like the ones we are discussing here, including leveraging AI tools to help combat abuses.”

The report does not go into further detail, but Dark Reading has reached out to OpenAI to clarify precisely what it does to disrupt and combat malicious actors.

Source: www.darkreading.com
