Social Networking

Twitter Stops Enforcing COVID-19 Misinformation Policy, Experts Express Concerns Over False Claims

By Associated Press | Updated: 30 November 2022

Twitter will no longer enforce its policy against COVID-19 misinformation, raising concerns among public health experts and social media researchers that the change could have serious consequences if it discourages vaccination and other efforts to combat the still-spreading virus.

Eagle-eyed users spotted the change Monday night, noting that a one-sentence update had been made to Twitter’s online rules: “Effective November 23, 2022, Twitter is no longer enforcing the COVID-19 misleading information policy.”

By Tuesday, some Twitter accounts were testing the new boundaries and celebrating the platform’s hands-off approach, which comes after Twitter was purchased by Elon Musk.

“This policy was used to silence people across the world who questioned the media narrative surrounding the virus and treatment options,” tweeted Dr. Simone Gold, a physician and leading purveyor of COVID-19 misinformation. “A win for free speech and medical freedom!”

Twitter’s decision to no longer remove false claims about the safety of COVID-19 vaccines disappointed public health officials, however, who said it could lead to more false claims about the virus, or the safety and effectiveness of vaccines.

“Bad news,” tweeted epidemiologist Eric Feigl-Ding, who urged people not to flee Twitter but to keep up the fight against bad information about the virus. “Stay folks — do NOT cede the town square to them!”

While Twitter’s efforts to stop false claims about COVID weren’t perfect, the company’s decision to reverse course is an abdication of its duty to its users, said Paul Russo, a social media researcher and dean of the Katz School of Science and Health at Yeshiva University in New York.

Russo added that it’s the latest of several recent moves by Twitter that could ultimately scare away some users and even advertisers. Some big names in business have already paused their ads on Twitter over questions about its direction under Musk.

“It is 100% the responsibility of the platform to protect its users from harmful content,” Russo said. “This is absolutely unacceptable.”

The virus, meanwhile, continues to spread. Nationally, new COVID cases averaged nearly 38,800 a day as of Monday, according to data from Johns Hopkins University — far lower than last winter but a vast undercount because of reduced testing and reporting. About 28,100 people with COVID were hospitalized daily and about 313 died, according to the most recent federal daily averages.

Cases and deaths were up from two weeks earlier. Yet a fifth of the U.S. population hasn’t been vaccinated, most Americans haven’t gotten the latest boosters, and many have stopped wearing masks.

Musk, who has himself spread COVID misinformation on Twitter, has signalled an interest in rolling back many of the platform’s previous rules meant to combat misinformation.

Last week, Musk said he would grant “amnesty” to account holders who had been kicked off Twitter. He’s also reinstated the accounts for several people who spread COVID misinformation, including that of Rep. Marjorie Taylor Greene, whose personal account was suspended this year for repeatedly violating Twitter’s COVID rules.

Greene’s most recent tweets include ones questioning the effectiveness of masks and making baseless claims about the safety of COVID vaccines.

Since the pandemic began, platforms like Twitter and Facebook have struggled to respond to a torrent of misinformation about the virus, its origins and the response to it.

Under the policy enacted in January 2020, Twitter prohibited false claims about COVID-19 that the platform determined could lead to real-world harms. More than 11,000 accounts were suspended for violating the rules, and nearly 100,000 pieces of content were removed from the platform, according to Twitter’s latest numbers.

Despite its rules prohibiting COVID misinformation, Twitter has struggled with enforcement. Posts making bogus claims about home remedies or vaccines could still be found, and it was difficult on Tuesday to identify exactly how the platform’s rules may have changed.

Messages left with San Francisco-based Twitter seeking more information about its policy on COVID-19 misinformation were not immediately returned Tuesday.

A search for common terms associated with COVID misinformation on Tuesday yielded lots of misleading content, but also automatic links to helpful resources about the virus as well as authoritative sources like the Centers for Disease Control and Prevention.

Dr. Ashish Jha, the White House COVID-19 coordinator, said Tuesday that the problem of COVID-19 misinformation is far larger than one platform, and that policies prohibiting COVID misinformation weren’t the best solution anyway.

Speaking at a Knight Foundation forum Tuesday, Jha said misinformation about the virus spread for a number of reasons, including legitimate uncertainty about a deadly illness. Simply prohibiting certain kinds of content isn’t going to help people find good information, or make them feel more confident about what they’re hearing from their medical providers, he said.

“I think we all have a collective responsibility,” Jha said of combating misinformation about COVID. “The consequences of not getting this right — of spreading that misinformation — is literally tens of thousands of people dying unnecessarily.”

Meta Used Public Instagram, Facebook Posts to Train Its New AI Assistant

Meta also said it did not use private chats on its messaging services as training data for the AI model.
By Reuters | Updated: 29 September 2023

Meta Platforms used public Facebook and Instagram posts to train parts of its new Meta AI virtual assistant, but excluded private posts shared only with family and friends in an effort to respect consumers’ privacy, the company’s top policy executive told Reuters in an interview.

Meta also did not use private chats on its messaging services as training data for the model and took steps to filter private details from public datasets used for training, said Meta President of Global Affairs Nick Clegg, speaking on the sidelines of the company’s annual Connect conference this week.

“We’ve tried to exclude datasets that have a heavy preponderance of personal information,” Clegg said, adding that the “vast majority” of the data used by Meta for training was publicly available.

He cited LinkedIn as an example of a website whose content Meta deliberately chose not to use because of privacy concerns.

Clegg’s comments come as tech companies including Meta, OpenAI and Alphabet’s Google have been criticized for using information scraped from the internet without permission to train their AI models, which ingest massive amounts of data in order to summarize information and generate imagery.

The companies are weighing how to handle the private or copyrighted materials vacuumed up in that process that their AI systems may reproduce, while facing lawsuits from authors accusing them of infringing copyrights.

Meta AI was the most significant product among the company’s first consumer-facing AI tools unveiled by CEO Mark Zuckerberg on Wednesday at Meta’s annual products conference, Connect. This year’s event was dominated by talk of artificial intelligence, unlike past conferences which focused on augmented and virtual reality.

Meta made the assistant using a custom model based on the powerful Llama 2 large language model that the company released for public commercial use in July, as well as a new model called Emu that generates images in response to text prompts, it said.

The product will be able to generate text, audio and imagery and will have access to real-time information via a partnership with Microsoft’s Bing search engine.

The public Facebook and Instagram posts that were used to train Meta AI included both text and photos, Clegg said.

Those posts were used to train Emu for the image generation elements of the product, while the chat functions were based on Llama 2 with some publicly available and annotated datasets added, a Meta spokesperson told Reuters.

Interactions with Meta AI may also be used to improve the features going forward, the spokesperson said.

Clegg said Meta imposed safety restrictions on what content the Meta AI tool could generate, like a ban on the creation of photo-realistic images of public figures.

On copyrighted materials, Clegg said he was expecting a “fair amount of litigation” over the matter of “whether creative content is covered or not by existing fair use doctrine,” which permits the limited use of protected works for purposes such as commentary, research and parody.

“We think it is, but I strongly suspect that’s going to play out in litigation,” Clegg said.

Some companies with image-generation tools facilitate the reproduction of iconic characters like Mickey Mouse, while others have paid for the materials or deliberately avoided including them in training data.

OpenAI, for instance, signed a six-year deal with content provider Shutterstock this summer to use the company’s image, video and music libraries for training.

Asked whether Meta had taken any such steps to avoid the reproduction of copyrighted imagery, a Meta spokesperson pointed to new terms of service barring users from generating content that violates privacy and intellectual property rights.

© Thomson Reuters 2023

Meta to Offer Paid Versions of Facebook and Instagram in Europe to Avoid Ads: Report

Meta would reportedly continue to offer free versions of the apps with ads in the EU.
By Reuters | Updated: 2 September 2023

Meta Platforms is considering paid versions of Facebook and Instagram with no advertisements for users residing in the European Union (EU) as a response to scrutiny from regulators, the New York Times reported on Friday.

Those who pay for the subscriptions would not see ads while Meta would also continue to offer free versions of the apps with ads in the EU, the report said, citing three people with knowledge of the plans.

The report added that the possible move may help Meta combat privacy concerns and other scrutiny from the EU as it would give users an alternative to the company’s ad-based services, which rely on analyzing people’s data.

Meta did not immediately respond to a Reuters request for comment.

The social media behemoth has been in the crosshairs of EU antitrust regulators and lost a fight in July against a 2019 German order that barred it from collecting users’ data without consent.

It is unclear how much the paid versions of the app would cost, the NYT report said.

Meta has also been fined NOK 1 million (roughly Rs. 77,51,000) per day since August 14 by Norway’s data protection authority, Datatilsynet, for breaching users’ privacy by harvesting their data and using it to target advertising at them. The company is seeking a temporary injunction against the order, which imposes the daily fine for three months. Datatilsynet had said on July 17 that the company would be fined if it did not address the privacy breaches the regulator had identified.


© Thomson Reuters 2023

Elon Musk’s X Adds 5-Second Delay to Links for NY Times, Reuters and Social Media Rivals Like Facebook: Report

Musk, who bought Twitter in October, has previously lashed out at news organizations and journalists who have reported critically on his companies.
By Reuters | Updated: 16 August 2023

Social media company X, formerly known as Twitter, delayed access to links to content on the Reuters and New York Times websites as well as rivals like Bluesky, Facebook, and Instagram, according to a Washington Post report on Tuesday.

Clicking a link on X to one of the affected websites resulted in a delay of about five seconds before the webpage loaded, the Washington Post reported, citing tests it conducted on Tuesday. Reuters also saw a similar delay in the tests it ran.

By late Tuesday afternoon, X appeared to have eliminated the delay. When contacted for comment, X confirmed the delay was removed but did not elaborate.

Billionaire Elon Musk, who bought Twitter in October, has previously lashed out at news organizations and journalists who have reported critically on his companies, which include Tesla and SpaceX. Twitter has previously prevented users from posting links to competing social media platforms.

Reuters could not establish the precise time when X began delaying links to some websites.

A user on Hacker News, a tech forum, posted about the delay earlier on Tuesday and wrote that X began delaying links to the New York Times on August 4. On that day, Musk criticized the publication’s coverage of South Africa and accused it of supporting calls for genocide. Reuters has no evidence that the two events are related.

A spokesperson for the New York Times said it has not received an explanation from X about the link delay.

“While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” the spokesperson said on Tuesday.

A Reuters spokesperson said: “We are aware of the report in the Washington Post of a delay in opening links to Reuters stories on X. We are looking into the matter.”

Bluesky, an X rival that has Twitter co-founder Jack Dorsey on its board, did not reply to a request for comment.

Meta, which owns Facebook and Instagram, did not immediately respond to a request for comment.

© Thomson Reuters 2023

Snapchat Said to Be Under Scrutiny From UK Watchdog Over Underage Users

Under UK data protection law, social media companies need parental consent before processing data of children under 13.
By Reuters | Updated: 8 August 2023

Britain’s data regulator is gathering information on Snapchat to establish whether the US instant messaging app is doing enough to remove underage users from its platform, two people familiar with the matter said.

Reuters reported exclusively in March that Snapchat owner Snap had removed only a few dozen children aged under 13 from its platform in Britain last year, while UK media regulator Ofcom estimates the app has thousands of underage users.

Under UK data protection law, social media companies need parental consent before processing data of children under 13. Social media firms generally require users to be 13 or over but have had mixed success in keeping children off their platforms.

Snapchat declined to give details of any measures it might have taken to reduce the number of underage users.

“We share the goals of the ICO (Information Commissioner’s Office) to ensure digital platforms are age appropriate and support the duties set out in the Children’s Code,” a Snap spokesperson said.

“We continue to have constructive conversations with them on the work we’re doing to achieve this,” they added.

Before launching any official investigation, the ICO generally gathers information related to an alleged breach. It may issue an information notice, a formal request for internal data that may aid the investigation, before deciding whether to fine the individual or organisation being investigated.

Last year, Ofcom found 60 percent of children aged between eight and 11 had at least one social media account, often created by supplying a false date of birth. It also found Snapchat was the most popular app for underage social media users.

The ICO received a number of complaints from the public concerning Snap’s handling of children’s data after the Reuters report, a source familiar with the matter said.

Some of the complaints related to Snapchat not doing enough to keep young children off its platform, the source said.

The ICO has spoken to users and other regulators to assess whether there has been any breach by Snap, the sources said.

An ICO spokesperson told Reuters it continued to monitor and assess the approaches Snap and other social media platforms were taking to prevent underage children from accessing their platforms.

A decision on whether to launch a formal investigation into Snapchat will be made in the coming months, the sources said.

PLATFORM PRESSURE

If the ICO found Snap to be in breach of its rules, the firm could face a fine equivalent to up to 4 percent of its annual global turnover, which according to a Reuters calculation would equate to $184 million (roughly Rs. 1,522 crore) based on its most recent financial results.
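As a rough sanity check on the figure above, the 4 percent cap can be reproduced with simple arithmetic. The turnover figure below is an assumption, back-calculated to be consistent with the Reuters estimate of $184 million; it is not a figure quoted in this article:

```python
# Hypothetical back-of-the-envelope check of the ICO fine ceiling.
# The turnover value is an assumption consistent with Reuters' $184
# million estimate, not an official Snap disclosure.
annual_turnover_usd = 4_600_000_000    # assumed annual global turnover
fine_cap = 0.04 * annual_turnover_usd  # ICO cap: 4% of global turnover
print(f"Maximum possible fine: ${fine_cap / 1_000_000:.0f} million")
```

Run as written, this prints a $184 million ceiling, matching the figure reported above.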

Snapchat and other social media firms are under pressure globally to better police content on their platforms.

The NSPCC (National Society for the Prevention of Cruelty to Children) said that figures it had obtained showed Snapchat accounted for 43 percent of cases in which social media was used to distribute indecent images of children.

Snapchat did not respond when Reuters asked it to comment on that report.

Earlier this year, the ICO fined TikTok 12.7 million pounds ($16.2 million) for misusing children’s data, saying the Snap competitor did not “take sufficient action” to remove underage users.

A TikTok spokesperson said at the time that it “invested heavily” to keep under-13s off the platform and that its 40,000-strong safety team worked “around the clock” to keep it safe.

Snapchat does block users from signing up with a date of birth that puts them under the age of 13. However, other apps take more proactive measures to prevent underage children from accessing their platforms.

For example, if a user under 13 has tried and failed to sign up to TikTok with their real date of birth, the app continues to block them from creating an account.

© Thomson Reuters 2023

Elon Musk’s X Sues Nonprofit That Tracks Hate Speech, Disinformation

The lawsuit stems from a media report citing CCDH research that found hate speech towards minority communities had increased on X.
By Reuters | Updated: 1 August 2023

Social media platform X, formerly known as Twitter, on Monday sued a nonprofit that fights hate speech and disinformation, accusing it of asserting false claims and encouraging advertisers to pause investment on the platform.

US media reported earlier that X, owned by Elon Musk, had sent a letter to the Center for Countering Digital Hate (CCDH) and threatened to sue the non-profit for unspecified damages.

In response to that letter, lawyers for the CCDH accused X of “intimidating those who have the courage to advocate against incitement, hate speech and harmful content online.” They also said that X’s allegations had no factual basis.

The lawsuit stems from a media report published in July that cited CCDH research finding that hate speech towards minority communities had increased on the platform.

X and its CEO Linda Yaccarino labeled the report false and said it was based on “a collection of incorrect, misleading, and outdated metrics, mostly from the period shortly after Twitter’s acquisition.”

In a blog post on Monday, X said the CCDH had gained access to its data without authorisation and accused it of scraping data from its platform, violating X’s terms.

It reiterated that the metrics contained in the research were used out of context to make unsubstantiated assertions about X.

The CCDH did not respond to a request for comment outside regular business hours.

X recently filed lawsuits against four unnamed entities in Texas and Israel’s Bright Data for scraping data.

© Thomson Reuters 2023

Meta to Launch AI-Powered Chatbots With Different Personalities by September: Report

Meta has been reportedly designing prototypes for chatbots that can have humanlike discussions with its users.
By Reuters | Updated: 1 August 2023

Meta Platforms is preparing to launch a range of artificial intelligence (AI) powered chatbots that exhibit different personalities as soon as September, the Financial Times reported on Tuesday.

Meta has been designing prototypes for chatbots that can have humanlike discussions with users, as the company attempts to boost engagement on its social media platforms, according to the report, which cited people with knowledge of the plans.

The Menlo Park, California-based social media giant is even exploring a chatbot that speaks like Abraham Lincoln and another that advises on travel options in the style of a surfer, the report added. The purpose of these chatbots will be to provide a new search function as well as offer recommendations.

The report comes as Meta executives are focusing on boosting retention on its new text-based app Threads, after the app lost more than half of its users in the weeks following its launch on July 5.

Meta did not immediately respond to a Reuters request for comment.

The Facebook parent reported a strong rise in advertising revenue in its earnings last week, forecasting third-quarter revenue above market expectations.

The company has been climbing back from a bruising 2022, buoyed by hype around emerging AI technology and an austerity drive in which it has shed around 21,000 employees since last fall.

Bloomberg News reported in July that Apple is working on AI offerings similar to OpenAI’s ChatGPT and Google’s Bard, adding that it has built its own framework, known as ‘Ajax’, to create large language models and is also testing a chatbot that some engineers call ‘Apple GPT’.

© Thomson Reuters 2023
