
Social Networking

Facebook Offers Up First-Ever Estimate of Hate Speech Prevalence on Its Platform




By Reuters | Updated: 20 November 2020

Facebook for the first time on Thursday disclosed numbers on the prevalence of hate speech on its platform, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech.

The world’s largest social media company, under scrutiny over its policing of abuses, particularly around November’s US presidential election, released the estimate in its quarterly content moderation report.

Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95 percent of which was proactively identified, compared to 22.5 million in the previous quarter.

The company defines ‘taking action’ as removing content, covering it with a warning, disabling accounts, or escalating it to external agencies.

This summer, civil rights groups organised a widespread advertising boycott to try to pressure Facebook to act against hate speech.

The company agreed to disclose the hate speech metric, calculated by examining a representative sample of content seen on Facebook, and submit itself to an independent audit of its enforcement record.

On a call with reporters, Facebook’s head of safety and integrity Guy Rosen said the audit would be completed “over the course of 2021.”

The Anti-Defamation League, one of the groups behind the boycott, said Facebook’s new metric still lacked sufficient context for a full assessment of its performance.

“We still don’t know from this report exactly how many pieces of content users are flagging to Facebook — whether or not action was taken,” said ADL spokesman Todd Gutnick. That data matters, he said, as “there are many forms of hate speech that are not being removed, even after they’re flagged.”

Rival platforms Twitter and Google-owned YouTube do not disclose comparable prevalence metrics.

Facebook’s Rosen also said that from March 1 to the November 3 election, the company removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies.

In October, Facebook said it was updating its hate speech policy to ban content that denies or distorts the Holocaust, a turnaround from public comments Facebook’s Chief Executive Mark Zuckerberg had made about what should be allowed.

Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content.

Earlier this week, Zuckerberg and Twitter CEO Jack Dorsey were grilled by Congress on their companies’ content moderation practices, from Republican allegations of political bias to decisions about violent speech.

Last week, Reuters reported that Zuckerberg told an all-staff meeting that former Trump White House adviser Steve Bannon had not violated enough of the company’s policies to justify suspension when he urged the beheading of two US officials.

The company has also been criticised in recent months for allowing large Facebook groups sharing false election claims and violent rhetoric to gain traction.

Facebook said its rates for finding rule-breaking content before users reported it were up in most areas, due to improvements in artificial intelligence tools and the expansion of its detection technologies to more languages.

In a blog post, Facebook said the COVID-19 pandemic continued to disrupt its content-review workforce, though some enforcement metrics were returning to pre-pandemic levels.

An open letter from more than 200 Facebook content moderators published on Wednesday accused the company of forcing these workers back to the office and ‘needlessly risking’ lives during the pandemic.

© Thomson Reuters 2020



Facebook Bans False Claims About COVID-19 Vaccines Debunked by Public Health Experts




By Reuters | Updated: 4 December 2020

Facebook on Thursday said it would remove false claims about COVID-19 vaccines that have been debunked by public health experts, following a similar announcement by Alphabet’s YouTube in October.

The move expands Facebook’s current rules against falsehoods and conspiracy theories about the pandemic. The social media company says it takes down coronavirus misinformation that poses a risk of “imminent” harm, while labeling and reducing distribution of other false claims that fail to reach that threshold.

Facebook said in a blog post that the global policy change came in response to news that COVID-19 vaccines will soon be rolling out around the world.

Two drug companies, Pfizer and Moderna, have asked US authorities for emergency use authorisation of their vaccine candidates. Britain approved the Pfizer vaccine on Wednesday, jumping ahead of the rest of the world in the race to begin the most crucial mass inoculation programme in history.

Misinformation about the new coronavirus vaccines has proliferated on social media during the pandemic, including through viral anti-vaccine posts shared across multiple platforms and by different ideological groups, according to researchers.

A November report by the nonprofit First Draft found that 84 percent of interactions generated by vaccine-related conspiracy content it studied came from Facebook pages and Facebook-owned Instagram.

Facebook said it would remove debunked COVID-19 vaccine conspiracies, such as the claim that specific populations are being used to test the vaccines’ safety without their consent, along with other misinformation about the vaccines.

“This could include false claims about the safety, efficacy, ingredients or side effects of the vaccines. For example, we will remove false claims that COVID-19 vaccines contain microchips,” the company said in a blog post. It said it would update the claims it removes based on evolving guidance from public health authorities.

Facebook did not specify when it would begin enforcing the updated policy, but acknowledged it would “not be able to start enforcing these policies overnight.”

The social media company has rarely removed misinformation about other vaccines under its policy of deleting content that risks imminent harm. It previously removed vaccine misinformation in Samoa where a measles outbreak killed dozens late last year, and it removed false claims about a polio vaccine drive in Pakistan that were leading to violence against health workers.

Facebook, which has taken steps to surface authoritative information about vaccines, said in October that it would also ban advertisements that discourage people from getting vaccines. In recent weeks, Facebook removed a prominent anti-vaccine page and a large private group, one for repeatedly breaking COVID misinformation rules and the other for promoting the QAnon conspiracy theory.

© Thomson Reuters 2020



Facebook Sued by Trump Administration for Favouring Immigrants Over US Workers




By Agence France-Presse | Updated: 4 December 2020

The Trump administration on Thursday sued Facebook, accusing it of discriminating against American workers by favoring immigrant applicants for thousands of high-paying jobs.

The Department of Justice’s lawsuit opens a new front in the administration’s push against tech companies, and in its clampdown on immigration, as President Donald Trump enters his final weeks in office.

The suit concerns more than 2,600 positions with an average salary of some $156,000 (roughly Rs. 1 crore), offered from January 2018 to September 2019.

“Facebook engaged in intentional and widespread violations of the law, by setting aside positions for temporary visa holders instead of considering interested and qualified US workers,” assistant attorney general Eric Dreiband, of the Justice Department’s Civil Rights Division, said in a statement outlining the department’s allegations.

The Internet giant reserved positions for candidates with H-1B “skilled worker” visas or other temporary work visas, the department said.

Facebook “channeled” jobs to visa holders by avoiding advertising on its careers website, accepting only physically mailed applications for some posts, or refusing to consider US workers at all, according to the suit.

The unusual move to file a lawsuit, with the Justice Department pivoting abruptly from simply discussing its concerns with Facebook, could be seen as a rush to hit the courts before Trump leaves the White House in January.

The California-based social network said it planned to continue cooperating with the department as the case plays out.

Restrictions rejected
The lawsuit was filed just two days after a US federal judge blocked rule changes ordered by Trump that made it harder for people outside the country to get skilled-worker visas.

The US Chamber of Commerce, the Bay Area Council in Facebook’s home state of California and others had sued the Department of Homeland Security arguing that the changes rushed new restrictions through without a proper public review process.

Skilled-worker visas are precious to Silicon Valley tech firms hungry for engineers and other highly-trained talent, with Asia home to many keenly sought workers.

US District Court Judge Jeffrey White granted a motion to set aside two rules by the Departments of Labor and Homeland Security that would have compelled companies to pay H-1B visa workers higher wages and restricted the job types that qualify for the visas.

The Trump administration had cited the COVID-19 pandemic and its toll on the economy as reasons for skipping the required public notice and review processes for their new rules, according to court documents.

But White said in his ruling that the administration did not demonstrate “that the impact of the COVID-19 pandemic on domestic unemployment justified dispensing with the due deliberation that normally accompanies” making changes to the H-1B visa program.

Animosity toward immigration has been a hallmark of the Trump administration.

Facebook uses hiring practices standard in Silicon Valley, and US prosecutors were also eyeing other tech firms over their use of H-1B visas, according to a person familiar with the matter.

Antitrust as well?
On another legal front, federal regulators and US states are poised to hit Facebook with antitrust cases, US media reported Thursday, amid concerns that its practice of buying up rivals has harmed competition.

The company said earlier this year its executives were fielding questions from the US Federal Trade Commission (FTC) on an antitrust fact-finding mission.

The FTC declined to comment Thursday on reports in multiple US outlets including The New York Times and Washington Post that it is likely to file an antitrust suit against the social media giant.

An FTC review of acquisitions dating back to 2010 could potentially “unwind” some of the company’s deals.

Facebook is the leading Internet social network, reaching close to three billion people worldwide with its core platform, along with Instagram and messaging services WhatsApp and Messenger.

An estimated seven in 10 US adults use Facebook, and its reach allows it to play an outsized role in digital advertising and news delivery.

That influence means the network regularly faces complaints over its handling of political misinformation and hate speech.



Facebook Hate Speech Policy Revised to Target Slurs Against Blacks, Muslims: Report




By Agence France-Presse | Updated: 4 December 2020

Facebook on Thursday said it is revising its systems to prioritise blocking slurs against Black people, gays and other groups historically targeted by vitriol, no longer automatically filtering out barbs aimed broadly at whites, men or Americans.

The change in Facebook’s algorithm is a shift from the social network’s ethnicity- and gender-neutral system that removed anti-white comments and posts such as “Men are dumb” or “Americans are stupid.”

“We know that hate speech targeted towards under-represented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” said Facebook spokeswoman Sally Aldous.

The changes are to the leading social network’s automated systems, meaning hateful posts about whites, men or Americans that are reported by users will still be deleted if they violate Facebook policies.

Over the past year, Facebook has also updated its policies to catch more implicit hate speech, such as depictions of blackface and stereotypes about Jewish people, Aldous noted.

“Thanks to significant investments in our technology we proactively detect 95 percent of the content we remove and we continue to improve how we enforce our rules as hate speech evolves over time,” Aldous said.

The software tweak will initially target the most blatant slurs, including those against Black people, Muslims, people of more than one race, the LGBTQ community and Jews, Facebook said.

‘Long overdue’
The move comes as the company faces pressure from civil rights groups that have long complained it does too little to police hate speech.

Earlier this year, more than 1,000 advertisers boycotted Facebook to protest its handling of hate speech and misinformation.

“This is an important and long overdue step forward,” Anti-Defamation League chief executive Jonathan Greenblatt said.

The ADL and other groups have advocated for Facebook to better fight anti-Semitism, racism, xenophobia and “all forms of extremism,” according to Greenblatt.

“While we are encouraged that Facebook is attacking the most serious symptoms of the disease that it permitted to spread for so many years, we need to see additional steps to cure the sickness of hate on social media,” Greenblatt said.

Facebook and other social platforms have been condemned for failing to stop abusive and hateful content including organised violence such as the massacre of the Rohingya minority in Myanmar and the beheading of French schoolteacher Samuel Paty near Paris.

Facebook has been adamant that it is vigilant when it comes to policing hate speech, calls for violence and misinformation.

The company said that since August it has identified more than 600 militarised social movements and removed their pages or accounts, part of an effort that took down 22.1 million posts containing “hate speech.”

Social problem?
Critics of Facebook and other social networks argue they should be held accountable for violence organised on their platforms, calling for reforms of a law that shields Internet services from liability for content posted by third parties.

But some analysts argue the platforms can’t bear full responsibility for deep social problems which have led to extremism and violence in the streets.

Facebook and others have long grappled with how to purge toxic content while fending off accusations they are stifling free expression.

The Internet giant and its rival Twitter have been taken to task on Capitol Hill by Republicans who say the platforms are biased against conservatives.

On Wednesday, Twitter said it was expanding its definition of hateful content to ban language which “dehumanises” people on the basis of race, ethnicity or national origin.

Twitter said it would remove offending tweets when they are reported, and offered examples such as describing a particular ethnic group as “scum” or “leeches.”



WeChat Blocks Australia Prime Minister Scott Morrison’s Message in Doctored Image Dispute With China




By Reuters | Updated: 3 December 2020

China’s WeChat social media platform blocked a message by Australia Prime Minister Scott Morrison amid a dispute between Canberra and Beijing over a doctored image of an Australian soldier posted on Twitter.

China rebuffed Morrison’s calls for an apology after its foreign ministry spokesman Zhao Lijian posted the picture of an Australian soldier holding a bloodied knife to the throat of an Afghan child on Monday.

The United States called China’s use of the digitally manipulated image a “new low” in disinformation.

Morrison took to WeChat on Tuesday to criticise the “false image”, while offering praise to Australia’s Chinese community.

In his message, Morrison defended Australia’s handling of a war crimes investigation into the actions of special forces in Afghanistan, and said Australia would deal with “thorny issues” in a transparent manner.

But that message appeared to be blocked by Wednesday evening, with a note appearing from the “Weixin Official Accounts Platform Operation Center” saying the content was unable to be viewed because it violated regulations, including distorting historical events and confusing the public.

Tencent, the parent company of WeChat, did not immediately respond to a request for comment.

Australian special forces allegedly killed 39 unarmed prisoners and civilians in Afghanistan, with senior commandos reportedly forcing junior soldiers to kill defenceless captives in order to “blood” them for combat, a four-year investigation found.

Australia said last week that 19 current and former soldiers would be referred for potential criminal prosecution.

China’s embassy has said the “rage and roar” from Australian politicians and media over the soldier image was an overreaction.

‘Hypocrisy is obvious to all’

Australia was seeking to “deflect public attention from the horrible atrocities by certain Australian soldiers”, it said.

Other nations, including the United States, New Zealand, and France, and the self-ruled island of Taiwan which China claims as its own, have expressed concern at the Chinese foreign ministry’s use of the manipulated image on an official Twitter account.

“The CCP’s latest attack on Australia is another example of its unchecked use of disinformation and coercive diplomacy. Its hypocrisy is obvious to all,” the US State Department said on Wednesday, referring to the Chinese Communist Party.

Jake Sullivan, tapped as national security adviser in the incoming administration of US President-elect Joe Biden, tweeted support for Australia without reference to China.

“America will stand shoulder to shoulder with our ally Australia and rally fellow democracies to advance our shared security, prosperity, and values,” he wrote.

France’s foreign affairs spokesman said on Tuesday the tweeted image was “especially shocking” and the comments by Zhao “insulting for all countries whose armed forces are currently engaged in Afghanistan”.

China’s embassy in Paris hit back on Wednesday, saying the soldier image was a caricature, adding that France has previously loudly defended the right to caricature.

It was an apparent reference to France’s row with the Muslim world over its defence of the publication of cartoons depicting the Prophet Mohammad.

WeChat has 690,000 active daily users in Australia, and in September told an Australian government inquiry it would prevent foreign interference in Australian public debate through its platform.

Morrison’s message had been read by 57,000 WeChat users by Wednesday.

Zhao’s tweet, pinned to the top of his Twitter account, had been “liked” by 60,000 followers, after Twitter labelled it as sensitive content but declined Canberra’s request to remove the image.

Twitter is blocked in China, but has been used by Chinese diplomats.

China on Friday imposed dumping tariffs of up to 200 percent on Australian wine imports, effectively shutting off the largest export market for the Australian wine industry.

© Thomson Reuters 2020



TikTok US Ban: Appeals Court Schedules December 14 Hearing on App Store Block




By Reuters | Updated: 3 December 2020

A federal appeals court said on Wednesday it will hear oral arguments on December 14 on the government’s appeal of an order that blocked a ban on Apple and Alphabet’s Google offering TikTok for download in US app stores.

US District Judge Carl Nichols in Washington on September 27 blocked the Commerce Department order hours before it was to prohibit new downloads of the Chinese-owned short video-sharing app.

The appeals panel consists of Judges Judith Rogers, Patricia Millett and Robert Wilkins, all three of whom were nominated by Democratic presidents.

The Trump administration last week extended to Friday a deadline for Chinese TikTok parent ByteDance to sell TikTok’s US assets. The Trump administration contends TikTok poses national security concerns as the personal data of US users could be obtained by China’s government. TikTok, which has over 100 million US users, denies the allegation.

The administration previously granted ByteDance a 15-day extension of the order issued in August. President Donald Trump on August 14 directed ByteDance to divest the app’s US assets within 90 days.

Under pressure from the US government, ByteDance has been in talks for months to finalise a deal with Walmart and Oracle to shift TikTok’s US assets into a new entity.

ByteDance made a new proposal aimed at addressing the US government’s concerns, Reuters reported last week.

The US Treasury said last week the extension was granted to review a recently received “revised submission.”

ByteDance made the proposal after disclosing on November 10 that it submitted four prior proposals, including one in November, that sought to address US concerns by “creating a new entity, wholly owned by Oracle, Walmart and existing US investors in ByteDance, that would be responsible for handling TikTok’s US user data and content moderation.”

US District Judge Wendy Beetlestone on October 30 blocked another aspect of a Commerce Department order scheduled to take effect November 12 that would have effectively barred TikTok from operating in the United States.

Beetlestone enjoined the agency from barring data hosting within the United States for TikTok, content delivery services and other technical transactions.

© Thomson Reuters 2020



Twitter Expands Hate Speech Rules to Include Race, Ethnicity and National Origin




By Reuters | Updated: 3 December 2020

Twitter on Wednesday expanded its policy barring hateful speech to include “language that dehumanises people on the basis of race, ethnicity and national origin,” it said in a statement.

The company banned speech that dehumanises others based on religion or caste last year and updated the rule in March to add age, disability and disease to the list of protected categories.

Civil rights group Color of Change, part of a coalition of advocacy organisations that have been pushing tech companies to reduce hate speech online, called the changes “essential concessions” following years of outside pressure.

A Twitter spokeswoman said the company had planned from the start to add new categories to the policy over time after testing to ensure it can consistently enforce updated rules.

In a statement, Color Of Change Vice President Arisha Hatch criticised Twitter for failing to update the policy before November’s presidential election, despite repeated warnings by the advocacy groups about violent and dehumanising speech.

Hatch also said Twitter has declined to provide transparency into how its content moderators are trained and the efficacy of its artificial intelligence in identifying content that violates the policy.

“The jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users,” she said.

“Void of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings.”

© Thomson Reuters 2020
