COVID-19 Misinformation at US Public Forums Vexes Social Media Platforms, Big Tech

By Associated Press | Updated: 16 August 2021

There are plenty of places to turn for accurate information about COVID-19. Your physician. Local health departments. The US Centers for Disease Control and Prevention.

But not, perhaps, your local government’s public comment session.

During a meeting of the St. Louis County Council earlier this month, opponents of a possible mask mandate made so many misleading comments about masks, vaccines, and COVID-19 that YouTube removed the video for violating its policies against false claims about the virus.

“I hope no one is making any medical decisions based on what they hear at our public forums,” said County Councilwoman Lisa Clancy, who supports mask wearing and said she believes most of her constituents do too. The video was restored, but Clancy’s worries about the impact of that misinformation remain.

Videos of local government meetings have emerged as the latest vector of COVID-19 misinformation, broadcasting misleading claims about masks and vaccines to millions and creating new challenges for Internet platforms trying to balance the potential harm against the need for government openness.

The latest video to go viral features a local physician who made several misleading claims about COVID-19 while addressing the Mount Vernon Community School Corporation in Fortville, Indiana, on August 6. In his 6-minute remarks, Dr. Dan Stock tells the board that masks don’t work, vaccines don’t prevent infection, and state and federal health officials don’t follow the science.

The video has amassed tens of millions of online views, and prompted the Indiana State Department of Health to push back. Stock did not return multiple messages seeking comment.

“Here comes a doctor in suspenders who goes in front of the school board and basically says what some people are thinking: the masks are B.S., vaccines don’t work and the CDC is lying — it can be very compelling to laypeople,” said Dr. Zubin Damania, a California physician who received so many messages about the Indiana clip that he created his own video debunking Stock’s claims.

Damania hosts a popular online medical show under the name ZDoggMD. His video debunking Stock’s comments has been viewed more than 400,000 times so far. He said that while there are legitimate questions about the effectiveness of mask requirements for children, Stock’s broad criticism of masks and vaccines went too far.

YouTube removed several similar videos of local government meetings in North Carolina, Missouri, Kansas, and Washington state. In Bellingham, Washington, officials responded by temporarily suspending public comment sessions.

The false claims in those videos were made during the portion of the meeting devoted to public comment. Local officials have no control over what is said at these forums, and say that’s part of the point.

In Kansas, YouTube pulled video of the May school board meeting in the 27,000-student Shawnee Mission district in which parents and a state lawmaker called for the district to remove its mask mandate, citing “medical misinformation.”

The district, where a mask mandate remains in effect, responded by ending livestreaming of the public comment period. District spokesman David Smith acknowledged that it has been challenging to keep board meetings accessible without spreading falsehoods.

“It was hard for me to hear things in the board meeting that weren’t true and to know that those were going out without contradiction,” Smith said. “I am all about free speech, but when that free speech endangers people’s lives, it is hard to sit through that.”

After hearing from local officials, YouTube reversed its decision and put the videos back up. Earlier this month the company, which is owned by Google, announced a change to its COVID misinformation policy to allow exceptions for local government meetings — though YouTube may still remove content that uses remarks from public forums in an attempt to mislead.

“While we have clear policies to remove harmful COVID-19 misinformation, we also recognize the importance of organizations like school districts and city councils using YouTube to share recordings of open public forums, even when comments at those forums may violate our policies,” company spokeswoman Elena Hernandez said.

The deluge of false claims about the virus has challenged other platforms too. Twitter and Facebook each have their own policies on COVID-19 misinformation, and say that, like YouTube, they attach labels to misleading content and remove the worst of it.

Public comment sessions preceding local government meetings have long been known for sometimes colorful remarks from local residents. But before the Internet, if someone were to drone on about fluoride in the drinking water, for instance, their comments weren’t likely to become national news.

Now, thanks to the Internet and social media, the misleading musings of a local doctor speaking before a school board can compete for attention with the recommendations of the CDC.

It was only a matter of time before misleading comments at these local public forums went viral, according to Jennifer Grygiel, a communications professor at Syracuse University who studies social media platforms.

Grygiel suggested a few possible ways to minimize the impact of misinformation without muzzling local governments. Clear labels on government broadcasts, she said, would help viewers understand what they are watching. And keeping videos on a government's own website, rather than posting them in shareable form on YouTube, would let local residents watch without helping the footage spread more widely.

“Anytime there is a public arena – a city council hearing, a school board meeting, a public park – the public has the opportunity to potentially spread misinformation,” Grygiel said. “What’s changed is it used to stay local.”

Twitter Says Two Security Team Leaders Leaving Company

By Reuters | Updated: 22 January 2022

Twitter said on Friday its head of security is no longer at the company and its chief information security officer will depart in the coming weeks.

The shakeup comes after Twitter co-founder Jack Dorsey stepped down as chief executive in November, handing the reins to top deputy Parag Agrawal, who has since reorganized the leadership structure of the social media company.

Twitter did not specify whether the departures were voluntary.

Peiter Zatko, a famed hacker more widely known as "Mudge," was appointed head of security in 2020 after Twitter suffered a security breach that allowed hackers to tweet from the verified accounts of public figures including billionaire Bill Gates and Tesla CEO Elon Musk.

The New York Times first reported Zatko’s departure and Rinki Sethi’s upcoming departure as CISO earlier on Friday. Zatko and Sethi could not be immediately reached for comment.

© Thomson Reuters 2021

Twitter Must Reveal Measures on Online Hate, French Court Rules

By Agence France-Presse | Updated: 20 January 2022

A Paris court on Thursday ruled that Twitter must reveal its measures for fighting hate speech, in one of several cases thrashing out whether the French justice system has jurisdiction over the US social media giant.

Ireland-based Twitter International had appealed a July decision ordering it to share documents and details about its French moderation team and data on their activities against hate speech.

That case had been brought by several anti-discrimination groups over what they said was the company’s longstanding failure to properly moderate posts.

The appeals court on Thursday upheld the initial judgment and further ordered Twitter to pay EUR 1,500 (roughly Rs. 1.2 lakh) to the groups, including SOS Racisme, SOS Homophobie and the International League Against Racism and Anti-Semitism (Licra).

In another Paris case, three victims of terrorist attacks who have suffered online harassment are suing Twitter France.

They argue it was the company’s fault that their cases against their harassers failed, as it did not provide identifying information that investigators had asked for.

In that case, Twitter France chief Damien Viel told a court last week that “I’m in charge of Twitter’s business development and nothing more”.

Providing data to the authorities was “up to the good will of Twitter International, which is outside French jurisdiction and can decide whether to cooperate or not,” his lawyer Karim Beylouni added.

In still another case in Versailles, just outside Paris, Twitter France has said it is unable to comply with a police request for information on people who sent insults and threats to a public official.

The local office says it does not store any information, with all data handled by the group’s European mothership based in Ireland.

But prosecutors have asked for fines as high as EUR 75,000 (roughly Rs. 63 lakh) against both Twitter France and manager Viel personally.

Facebook-Parent Meta’s VR Oculus Business Said to Be Probed by US States Over Potential Violations

By Reuters | Updated: 15 January 2022

Multiple states have begun investigating potential violations in how Facebook, now known as Meta Platforms, runs its virtual-reality Oculus business, according to three sources familiar with the matter.

Two of the sources said the US Federal Trade Commission was also involved in the antitrust investigation. Meta did not immediately respond to a request for comment.

New York, North Carolina and Tennessee were among the states involved in the inquiry, one source said. A group of almost 50 states also asked an appeals court on Friday to reinstate their antitrust lawsuit, filed in December 2020, against Facebook.

The inquiries into Facebook’s Oculus business are part of the larger probe, one of the sources said.

The offices of the New York, North Carolina and Tennessee attorneys general did not immediately respond to requests for comments.

The inquiry was first reported by Bloomberg News.

© Thomson Reuters 2022

Facebook Faces GBP 2.3-Billion UK Class Action Over Market Dominance

By Reuters | Updated: 14 January 2022

Facebook, now known as Meta Platforms, faces a class action of more than GBP 2.3 billion (roughly Rs. 23,420 crore) in Britain over allegations that it abused its market dominance by exploiting the personal data of 44 million users.

Liza Lovdahl Gormsen, a senior adviser to Britain’s Financial Conduct Authority (FCA) watchdog and a competition law academic, said she was bringing the case on behalf of people in Britain who had used Facebook between 2015 and 2019.

The lawsuit, which will be heard by London's Competition Appeal Tribunal, alleges Facebook made billions of pounds by imposing unfair terms and conditions that demanded consumers surrender valuable personal data to access the network.

Quinn Emanuel Urquhart & Sullivan, the law firm representing Lovdahl Gormsen, has notified Facebook of the claim.

Facebook said people used its services because it delivered value for them and “they have meaningful control of what information they share on Meta’s platforms and who with.”

The case comes days after Facebook lost an attempt to strike out an antitrust lawsuit by the Federal Trade Commission (FTC), one of the biggest challenges by the US government against a tech company in decades as Washington attempts to tackle Big Tech’s extensive market power.

“In the 17 years since it was created, Facebook became the sole social network in the UK where you could be sure to connect with friends and family in one place,” Lovdahl Gormsen said.

“Yet, there was a dark side to Facebook; it abused its market dominance to impose unfair terms and conditions on ordinary Britons, giving it the power to exploit their personal data.”

Lovdahl Gormsen alleges Facebook collected data within its platform and through mechanisms like the Facebook Pixel, allowing it to build an “all-seeing picture” of Internet usage and generate valuable, deep data profiles of users.

Opt-out class actions, like Lovdahl Gormsen’s, bind a defined group automatically into a lawsuit unless individuals opt out.

© Thomson Reuters 2022

Twitter, Meta, YouTube Among Tech Giants Subpoenaed by January 6 US Capitol Riot Panel

By Associated Press | Updated: 14 January 2022

Months after requesting documents from more than a dozen social platforms, the House committee investigating the US Capitol insurrection has issued subpoenas targeting Twitter, Meta, Reddit and YouTube after lawmakers said the companies’ initial responses were inadequate.

The committee chairman, Rep. Bennie Thompson, demanded records Thursday from the companies relating to their role in allegedly spreading misinformation about the 2020 election and promoting domestic violent extremism on their platforms in the lead-up to the insurrection on January 6, 2021.

“Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps — if any — social media companies took to prevent their platforms from being breeding grounds for radicalising people to violence,” Thompson, D-Miss., said in the letter.

Thompson added that it’s “disappointing that after months of engagement,” the four companies have not voluntarily turned over the necessary information and documents that would help lawmakers answer the questions at the heart of their investigation.

In his letter, Thompson outlined the way the companies were complicit in the deadly insurrection perpetrated by supporters of Donald Trump and far-right groups.

YouTube, owned by Alphabet, was the platform where a significant amount of communication took place “relevant to the planning and execution” of the siege against the Capitol, “including livestreams of the attack as it was taking place,” the letter stated.

In a statement to the Associated Press, a YouTube spokesperson said it is “actively cooperating” with the committee and is committed to stopping content that incites violence or undermines faith in elections.

“We enforced these policies in the run-up to January 6 and continue to do so today,” the spokesperson wrote.

The committee also outlined how Meta, formerly known as Facebook, was reportedly used by its users to exchange hateful, violent and inciting messages, as well as to spread misinformation that the 2020 presidential election was fraudulent, in an attempt to coordinate the “Stop the Steal” movement.

In response, Meta said it too was working with the committee to get lawmakers the information they requested.

On Reddit, the r/The_Donald “subreddit” community grew significantly, the letter said, before members migrated to an official website where investigators believe discussions around the planning of the attack were hosted. A spokesperson for Reddit said Thursday that the company had received the subpoena and “will continue to work with the committee on their requests.”

The letter further detailed how Twitter was warned about the potential violence that was being planned on its platform in advance of the attack and how its users engaged in “communications amplifying allegations of election fraud, including by the former President himself.”

The letter highlighted one tweet from Trump on December 19, 2020, in which he declared it “Statistically impossible to have lost the 2020 Election” and urged followers to come to Washington for a “wild” protest on January 6, 2021.

A spokesperson for Twitter declined to comment on the subpoenas.

The committee made its initial request for the documents from 15 social media companies in August, which also included TikTok, Parler, Telegram, 4chan, and 8kun.

The subpoenas come as the nine-member committee continues its wide-reaching investigation into how a mob was able to infiltrate the Capitol and disrupt the certification of Democrat Joe Biden’s presidential victory, in what was the most serious assault on Congress in two centuries.

The committee of seven Democrats and two Republicans has interviewed more than 340 people and issued dozens of subpoenas to those in Trump’s inner circle, including his former chief of staff, as well as requests to their own colleagues in the House.

On Wednesday, the committee requested an interview with House Minority Leader Kevin McCarthy, R-Calif.

McCarthy as well as GOP Reps. Jim Jordan of Ohio and Scott Perry of Pennsylvania have denied the committee’s request to sit down for interviews or turn over documents related to their conversations on January 6, 2021, with Trump or those close to him as hundreds of his supporters beat police, stormed the building and interrupted the certification of the 2020 election.

Twitter Ban Lifted in Nigeria After Seven Months, Company to Open Office in the Country

By Associated Press | Updated: 13 January 2022

The Nigerian government has lifted its ban on Twitter in the West African country, seven months after its more than 200 million people were shut out of the social media network.

Nigerian President Muhammadu Buhari directed that Twitter’s operations resume in the country on Thursday, according to the director-general of the country’s National Information Technology Development Agency. Kashifu Inuwa Abdullahi said that came only after Twitter agreed to meet some conditions, including opening an office in Nigeria.

Nigeria suspended Twitter’s operations on June 4, citing “the persistent use of the platform for activities that are capable of undermining Nigeria’s corporate existence.” The action triggered criticism, as it came shortly after the social media network deleted a post by Buhari in which he threatened to treat separatists “in the language they will understand.”

“Our action is a deliberate attempt to recalibrate our relationship with Twitter to achieve the maximum mutual benefits for our nation without jeopardising the justified interests of the company. Our engagement has been very respectful, cordial, and successful,” Abdullahi said in a statement.

A spokesperson for Twitter did not immediately respond to a request for comment.

In addition to registering in Nigeria during the first quarter of 2022, Abdullahi said Twitter has also agreed to other conditions, including appointing a designated country representative, complying with tax obligations, and acting “with a respectful acknowledgement of Nigerian laws and the national culture and history on which such legislation has been built.”
