
OpenAI has stopped five attempts to misuse its AI for ‘deceptive activity’


By Reuters | Updated: May 31, 2024

May 30 (Reuters) – Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for “deceptive activity” across the internet.

The artificial intelligence firm said that over the last three months, the threat actors used its AI models to generate short comments and longer articles in a range of languages, and to make up names and bios for social media accounts.

These campaigns, run by threat actors from Russia, China, Iran and Israel, focused on issues including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others.

The deceptive operations were an “attempt to manipulate public opinion or influence political outcomes,” OpenAI said in a statement.

The San Francisco-based firm’s report is the latest to stir safety concerns about the potential misuse of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

Microsoft-backed (MSFT.O) OpenAI said on Tuesday it formed a Safety and Security Committee that would be led by board members, including CEO Sam Altman, as it begins training its next AI model.

The deceptive campaigns did not see increased audience engagement or reach as a result of the AI firm’s services, OpenAI said in the statement.

OpenAI said these operations did not rely solely on AI-generated material but also included manually written texts and memes copied from across the internet.

Separately, Meta Platforms (META.O) said in its quarterly security report on Wednesday that it had found “likely AI-generated” content used deceptively on its Facebook and Instagram platforms, including comments praising Israel’s handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers.

© Thomson Reuters 2024