
Facebook Knew About Abusive Content Globally but Failed to Police: Former Employees

By Reuters | Updated: 26 October 2021

Facebook employees have warned for years that as the company raced to become a global service it was failing to police abusive content in countries where such speech was likely to cause the most harm, according to interviews with five former employees and internal company documents viewed by Reuters.

For over a decade, Facebook has pushed to become the world’s dominant online platform. It currently operates in more than 190 countries and boasts more than 2.8 billion monthly users who post content in more than 160 languages. But its efforts to prevent its products from becoming conduits for hate speech, inflammatory rhetoric and misinformation – some of which has been blamed for inciting violence – have not kept pace with its global expansion.

Internal company documents viewed by Reuters show Facebook has known that it hasn’t hired enough workers with both the language skills and the knowledge of local events needed to identify objectionable posts from users in a number of developing countries. The documents also show that the artificial intelligence systems Facebook employs to root out such content frequently aren’t up to the task either, and that the company hasn’t made it easy for its global users to flag posts that violate the site’s rules.

Those shortcomings, employees warned in the documents, could limit the company’s ability to make good on its promise to block hate speech and other rule-breaking posts in places from Afghanistan to Yemen.

In a review posted to Facebook’s internal message board last year regarding ways the company identifies abuses on its site, one employee reported “significant gaps” in certain countries at risk of real-world violence, especially Myanmar and Ethiopia.

The documents are among a cache of disclosures made to the US Securities and Exchange Commission and Congress by Facebook whistleblower Frances Haugen, a former Facebook product manager who left the company in May. Reuters was among a group of news organisations able to view the documents, which include presentations, reports, and posts shared on the company’s internal message board. Their existence was first reported by The Wall Street Journal.

Facebook spokesperson Mavis Jones said in a statement that the company has native speakers worldwide reviewing content in more than 70 languages, as well as experts in humanitarian and human rights issues. She said these teams are working to stop abuse on Facebook’s platform in places where there is a heightened risk of conflict and violence.

“We know these challenges are real and we are proud of the work we’ve done to date,” Jones said.

Still, the cache of internal Facebook documents offers detailed snapshots of how employees in recent years have sounded alarms about problems with the company’s tools – both human and technological – aimed at rooting out or blocking speech that violated its own standards. The material expands upon Reuters’ previous reporting on Myanmar and other countries, where the world’s largest social network has failed repeatedly to protect users from problems on its own platform and has struggled to monitor content across languages.

Among the weaknesses cited were a lack of screening algorithms for languages used in some of the countries Facebook has deemed most “at-risk” for potential real-world harm and violence stemming from abuses on its site.

The company designates countries “at-risk” based on variables including unrest, ethnic violence, the number of users and existing laws, two former staffers told Reuters. The system aims to steer resources to places where abuses on its site could have the most severe impact, the people said.

Facebook reviews and prioritises these countries every six months in line with United Nations guidelines aimed at helping companies prevent and remedy human rights abuses in their business operations, spokesperson Jones said.

In 2018, United Nations experts investigating a brutal campaign of killings and expulsions against Myanmar’s Rohingya Muslim minority said Facebook was widely used to spread hate speech toward them. That prompted the company to increase its staffing in vulnerable countries, a former employee told Reuters. Facebook has said it should have done more to prevent the platform being used to incite offline violence in the country.

Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa, who left in 2017, said the company’s approach to global growth has been “colonial,” focused on monetisation without safety measures.

More than 90 percent of Facebook’s monthly active users are outside the United States or Canada.

Language issues

Facebook has long touted the importance of its artificial-intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. Machine-learning systems can detect such content with varying levels of accuracy.

But languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook’s automated content moderation, the documents provided to the government by Haugen show. The company lacks AI systems to detect abusive posts in a number of languages used on its platform. In 2020, for example, the company did not have screening algorithms known as “classifiers” to find misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic, a document showed.
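
For context, a “classifier” in this sense is a supervised machine-learning model trained, separately for each language, to score posts against a policy. The sketch below is a minimal illustration only, with a hypothetical two-post corpus and an arbitrary threshold; Facebook’s production classifiers are far larger neural models, but the per-language structure is why the absence of Burmese, Oromo or Amharic models left those languages unscreened.

```python
# Illustrative sketch only: a minimal per-language text classifier of the
# general kind described above. The corpus, labels, and threshold are
# hypothetical placeholders, not Facebook's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: policy-labeled posts in one target language.
posts = ["example violating post", "example benign post"]
labels = [1, 0]  # 1 = violates policy, 0 = does not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# At serving time, posts scoring above a threshold are queued for review.
score = model.predict_proba(["a new post to screen"])[0][1]
if score > 0.5:
    print("flag for human review")
```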

These gaps can allow abusive posts to proliferate in the countries where Facebook itself has determined the risk of real-world harm is high.

Reuters this month found posts in Amharic, one of Ethiopia’s most common languages, referring to different ethnic groups as the enemy and issuing them death threats. A nearly year-long conflict in the country between the Ethiopian government and rebel forces in the Tigray region has killed thousands of people and displaced more than 2 million.

Facebook spokesperson Jones said the company now has proactive technology to detect hate speech in Oromo and Amharic and has hired more people with “language, country and topic expertise,” including people who have worked in Myanmar and Ethiopia.

In an undated document, which a person familiar with the disclosures said was from 2021, Facebook employees also shared examples of “fear-mongering, anti-Muslim narratives” spread on the site in India, including calls to oust the large minority Muslim population there. “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,” the document said. Internal posts and comments by employees this year also noted the lack of classifiers in the Urdu and Pashto languages to screen problematic content posted by users in Pakistan, Iran and Afghanistan.

Jones said Facebook added hate speech classifiers for Hindi in 2018 and Bengali in 2020, and classifiers for violence and incitement in Hindi and Bengali this year. She said Facebook also now has hate speech classifiers in Urdu but not Pashto.

Facebook’s human review of posts, which is crucial for nuanced problems like hate speech, also has gaps across key languages, the documents show. An undated document laid out how its content moderation operation struggled with Arabic-language dialects of multiple “at-risk” countries, leaving it constantly “playing catch up.” The document acknowledged that, even within its Arabic-speaking reviewers, “Yemeni, Libyan, Saudi Arabian (really all Gulf nations) are either missing or have very low representation.”

Facebook’s Jones acknowledged that Arabic language content moderation “presents an enormous set of challenges.” She said Facebook has made investments in staff over the last two years but recognises “we still have more work to do.”

Three former Facebook employees who worked for the company’s Asia Pacific and Middle East and North Africa offices in the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. These people said leadership did not understand the issues and did not devote enough staff and resources.

Facebook’s Jones said the California company cracks down on abuse by users outside the United States with the same intensity applied domestically.

The company said it uses AI proactively to identify hate speech in more than 50 languages. Facebook said it bases its decisions on where to deploy AI on the size of the market and an assessment of the country’s risks. It declined to say in how many countries it did not have functioning hate speech classifiers.

Facebook also says it has 15,000 content moderators reviewing material from its global users. “Adding more language expertise has been a key focus for us,” Jones said.

In the past two years, it has hired people who can review content in Amharic, Oromo, Tigrinya, Somali, and Burmese, the company said, and this year added moderators in 12 new languages, including Haitian Creole.

Facebook declined to say whether it requires a minimum number of content moderators for any language offered on the platform.

Lost in translation

Facebook’s users are a powerful resource to identify content that violates the company’s standards. The company has built a system for them to do so, but has acknowledged that the process can be time-consuming and expensive for users in countries without reliable Internet access. The reporting tool also has had bugs, design flaws and accessibility issues for some languages, according to the documents and digital rights activists who spoke with Reuters.

Next Billion Network, a group of tech civil society groups working mostly across Asia, the Middle East and Africa, said in recent years it had repeatedly flagged problems with the reporting system to Facebook management. Those included a technical defect that kept Facebook’s content review system from being able to see objectionable text accompanying videos and photos in some posts reported by users. That issue prevented serious violations, such as death threats in the text of these posts, from being properly assessed, the group and a former Facebook employee told Reuters. They said the issue was fixed in 2020.

Facebook said it continues to work to improve its reporting systems and takes feedback seriously.

Language coverage remains a problem. A Facebook presentation from January, included in the documents, concluded “there is a huge gap in the Hate Speech reporting process in local languages” for users in Afghanistan. The recent pullout of US troops there after two decades has ignited an internal power struggle in the country. So-called “community standards” – the rules that govern what users can post – are also not available in Afghanistan’s main languages of Pashto and Dari, the author of the presentation said.

A Reuters review this month found that community standards weren’t available in about half of the more than 110 languages that Facebook supports with features such as menus and prompts.

Facebook said it aims to have these rules available in 59 languages by the end of the year, and in another 20 languages by the end of 2022.

© Thomson Reuters 2021


Facebook Owner Meta Launches New Platform, Safety Hub to Protect Women in India

By Press Trust of India | Updated: 3 December 2021

Meta (formerly Facebook) on Thursday announced a slew of steps to protect women users on its platform, including the launch of StopNCII.org in India, which aims to combat the spread of non-consensual intimate images (NCII).

Meta has also launched the Women’s Safety Hub, which will be available in Hindi and 11 other Indian languages and will enable more women users in India to access information about tools and resources that can help them make the most of their social media experience while staying safe online.

This initiative by Meta will ensure women do not face a language barrier in accessing information, Karuna Nain, director (global safety policy) at Meta Platforms, told reporters here.

“Safety is an integral part of Meta’s commitment to building and offering a safe online experience across the platforms and over the years the company has introduced several industry leading initiatives to protect users online.

“Furthering our effort to bolster the safety of users, we are bringing in a number of initiatives to ensure online safety of women on our platforms,” she added.

StopNCII.org is a platform that aims to combat the spread of non-consensual intimate images (NCII).

“It gives victims control. People can come to this platform proactively, hash their intimate videos and images, share their hashes back with the platform and participating companies,” Nain said.

She explained that the platform doesn’t receive any photos or videos; instead, it gets the hash, or unique digital fingerprint, that tells the company a given piece of content is a known violation. “We can proactively keep a lookout for that content on our platforms and once it’s uploaded, our review team checks what’s really going on and takes appropriate action if it violates our policies,” she added.
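
As a rough illustration of the hash-and-match flow Nain describes, the sketch below hashes an image locally and checks uploads against the resulting fingerprint. The file name is hypothetical, and a cryptographic hash stands in for brevity; real NCII-matching systems use perceptual hashes that tolerate resizing and re-encoding.

```python
# Minimal sketch of hash-based matching, assuming a hypothetical file name.
# SHA-256 is used here for brevity; production systems use perceptual
# hashes so that re-encoded or resized copies still match.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # The hash, not the image itself, is what leaves the victim's device.
    return hashlib.sha256(image_bytes).hexdigest()

# Step 1: the victim hashes the image locally and submits only the hash.
with open("private_photo.jpg", "rb") as f:  # hypothetical local file
    known_hashes = {fingerprint(f.read())}

# Step 2: at upload time, the platform hashes incoming media and routes
# any match to human review, per the process quoted above.
def needs_review(upload_bytes: bytes) -> bool:
    return fingerprint(upload_bytes) in known_hashes
```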

In partnership with UK Revenge Porn Helpline, StopNCII.org builds on Meta’s NCII Pilot, an emergency programme that allows potential victims to proactively hash their intimate images so they can’t be proliferated on its platforms.

The first-of-its-kind platform has partnered with global organisations to support the victims of NCII. In India, it has partnered with organisations such as Social Media Matters, Centre for Social Research, and Red Dot Foundation.

Nain added that the company hopes this becomes an industry-wide initiative, so that victims can come to one central place for help and support rather than having to approach every tech platform one by one.

Also, Bishakha Datta (executive editor of Point of View) and Jyoti Vadehra from Centre for Social Research are the first Indian members of Meta’s Global Women’s Safety Expert Advisors. The group comprises 12 other non-profit leaders, activists, and academic experts from around the world and advises Meta on the development of new policies, products and programmes to better support women on its apps.

“We are confident that with our ever-growing safety measures, women will be able to enjoy a social experience which will enable them to learn, engage and grow without any challenges.

“India is an important market for us and bringing Bishakha and Jyoti onboard to our Women’s Safety Expert Advisory Group will go a long way in further enhancing our efforts to make our platforms safer for women in India,” Nain said.


Facebook, Instagram Remove Chinese Accounts Over Fake ‘Swiss Biologist’ COVID-19 Origin Claims

By Reuters | Updated: 2 December 2021

Facebook owner Meta Platforms said on Wednesday it had removed accounts used by an influence operation originating in China that promoted claims of a fake “Swiss biologist” saying the United States was interfering in the search for COVID-19’s origins.

Meta said in a report the social media campaign was “largely unsuccessful” and targeted English-speaking audiences in the United States and Britain and Chinese-speaking audiences in Taiwan, Hong Kong, and Tibet.

Claims by “Swiss biologist” Wilson Edwards were widely quoted by Chinese state media in July. In August, several Chinese newspapers removed comments and deleted articles quoting him after the Swiss embassy in Beijing said it had found no evidence of his existence as a Swiss citizen.

Meta said Facebook removed the Wilson Edwards account in August and has since removed 524 Facebook accounts, 20 Pages, four Groups and 86 Instagram accounts as part of its investigation. Such removals also take down content that these entities have posted.

“We…were able to link the activity to individuals in mainland China, including employees of a particular company in China, the Sichuan Silence Information Technology Company Limited, as well as some individuals associated with Chinese state infrastructure companies around the world,” Meta’s head of global threat disruption David Agranovich told Reuters.

Sichuan Silence Information Technology Co did not immediately respond to a request for comment. The Chinese foreign ministry and internet regulator Cyberspace Administration of China also did not immediately respond to requests for comment.

Meta said it had not found any connection between Sichuan Silence Information Technology and the Chinese government.

Sichuan Silence’s website describes the company as a network and information security firm that provides network security services to China’s Ministry of Public Security and to CNCERT, the key coordination team for China’s cybersecurity emergency response.

On July 24, 10 hours after its creation, the “Wilson Edwards” Facebook account uploaded a post saying he had been informed the United States was seeking to discredit the qualifications of World Health Organization scientists working with China to probe the origins of COVID-19.

Meta said the account’s operators used virtual private network (VPN) infrastructure to conceal its origin and made efforts to give Edwards a rounded personality.

The persona’s original post was initially shared and liked by fake Facebook accounts, and later forwarded by authentic users, most of whom were employees of Chinese state infrastructure companies in over 20 countries, Meta said.

“This is the first time we have observed an operation that included a coordinated cluster of state employees to amplify itself in this way,” the report said. Meta said it did not find evidence that the network gained any traction among authentic communities.

China’s state-run media, from China Daily to TV news service CGTN, cited the July post widely as evidence that US President Joe Biden’s administration was politicising the WHO. The administration had said the joint WHO-China investigation lacked transparency.

The origin of the SARS-CoV-2 virus that causes COVID-19 remains a mystery and a source of tension between China, the United States and other countries.

© Thomson Reuters 2021


Jack Dorsey-Led Square Rebrands to Block After Facebook’s Meta Change

By Reuters | Updated: 2 December 2021

Square, the payments company led by Twitter Inc co-founder Jack Dorsey, said on Wednesday it was changing its name to Block Inc, as it looks to expand beyond its payment business and into new technologies like blockchain.

The San Francisco-based company said the name Square had become synonymous with its seller business. The new name would distinguish the corporate entity from its businesses, Square added, a strategy similar to Meta Platforms’ rebrand last month.

The company said there would be no organisational changes and its different business units – Square, peer-to-peer payment service Cash App, music streaming service Tidal and its bitcoin-focused financial services segment – will continue to maintain their respective brands. Shares were up nearly 1 percent in extended trading.

“The name has many associated meanings for the company — building blocks, neighbourhood blocks, and their local businesses, communities coming together at block parties full of music, a blockchain, a section of code, and obstacles to overcome,” Square said in a statement.

The move comes days after Dorsey stepped down from his role as chief executive officer at Twitter. The digital payments giant’s Square Crypto, a team “dedicated to advancing Bitcoin”, will also change its name to Spiral. Bitcoin price in India stood at Rs. 45.21 lakh as of 10am IST on December 2.

Under Dorsey, who has frequently expressed his interest in the cryptocurrency, Square bought $50 million (roughly Rs. 375 crore) worth of Bitcoin even before the wave of institutional interest that propelled the digital currency’s price to record highs this year. In February, it further raised its wager and invested another $170 million (roughly Rs. 1,275 crore) in it.

Square has also been weighing the creation of a hardware wallet for Bitcoin to make its custody more mainstream.

The new name would become effective on or about December 10, Square said, but the “SQ” ticker symbol on the New York Stock Exchange would not change at this time.

© Thomson Reuters 2021


Twitter Bans Sharing Personal Photos, Videos of Other People Without Consent

By Agence France-Presse | Updated: 1 December 2021

Twitter launched new rules Tuesday blocking users from sharing private images of other people without their consent, in a tightening of the network’s policy just a day after it changed CEOs.

Under the new rules, people who are not public figures can ask Twitter to take down pictures or video of them that they report were posted without permission.

Twitter said this policy does not apply to “public figures or individuals when media and accompanying tweet text are shared in the public interest or add value to public discourse.”

“We will always try to assess the context in which the content is shared and, in such cases, we may allow the images or videos to remain on the service,” the company added.

The right of Internet users to appeal to platforms when images or data about them are posted by third parties, especially for malicious purposes, has been debated for years.

Twitter already prohibited the publication of private information such as a person’s phone number or address, but there are “growing concerns” about the use of content to “harass, intimidate, and reveal the identities of individuals,” Twitter said.

The company noted a “disproportionate effect on women, activists, dissidents, and members of minority communities.”

High-profile examples of online harassment include the barrages of racist, sexist, and homophobic abuse on Twitch, the world’s biggest video game streaming site.

But instances of harassment abound, and victims must often wage lengthy fights to see hurtful, insulting or illegally produced images of themselves removed from the online platforms.

Some Twitter users pushed the company to clarify exactly how the tightened policy would work.

“Does this mean that if I take a picture of, say, a concert in Central Park, I need the permission of everyone in it? We diminish the sense of the public to the detriment of the public,” tweeted Jeff Jarvis, a journalism professor at the City University of New York.

The change came the day after Twitter co-founder Jack Dorsey announced he was leaving the company, and handed CEO duties to company executive Parag Agrawal.

The platform, like other social media networks, has struggled against bullying, misinformation, and hate-fuelled content.


Twitter’s Former CEO Jack Dorsey’s Journey: From Microblogging Pioneer to Billionaire

By Reuters | Updated: 30 November 2021

Jack Dorsey on Monday stepped down as the chief executive officer of Twitter, the social media firm he helped found in 2006 and steered through a high-profile hack and the controversial banning of former US President Donald Trump.

Dorsey, who also helms fintech firm Square, will be succeeded by Chief Technology Officer Parag Agrawal.

Here is a timeline of milestones in Dorsey’s tenure at Twitter:

2006: Typed out the microblogging platform’s first post: “just setting up my twttr”.

2008: Co-founder Evan Williams took over as CEO after the board pushed Dorsey out. Dorsey assumed the role of chairman.

2013: Twitter went public at a valuation of $31 billion (roughly Rs. 2,32,400 crore).

2015: Dorsey returned as CEO after Dick Costolo stepped down.

2017: A Twitter employee deactivated then-US President Donald Trump’s account on his last day at the company; it was restored 11 minutes later.

2017: Twitter increased the character limit of tweets to 280 from 140, sparking a mixed reaction in the twitterverse.

2020: Activist hedge fund Elliott Management pushed for changes, including the removal of Dorsey as CEO.

2020: Twitter reached an agreement with Elliott to add three new directors, allowing Dorsey to stay on as CEO.

2021: In the wake of the riots at the Capitol, Twitter permanently suspended Trump’s account, with the company citing a risk of further incitement of violence.

2021: Twitter outlined plans in February to attain at least $7.5 billion (roughly Rs. 56,230 crore) in annual revenue and 315 million monetisable daily active users, or those who see ads, by the end of 2023.

2021: In March, Dorsey sold his first tweet as a non-fungible token (NFT) – a kind of unique digital asset – for just over $2.9 million (roughly Rs. 21.7 crore).

2021: Former US President Donald Trump in July filed lawsuits against Twitter, Facebook, and Alphabet’s Google, as well as their chief executives, alleging they unlawfully silence conservative viewpoints.

2021: The company said it had 211 million average monetisable daily active users, as of the three months ended September 30.

2021: Dorsey’s net worth is $11.8 billion (roughly Rs. 88,500 crore) as of November 29, according to Forbes.

© Thomson Reuters 2021


Who Is Twitter’s New CEO Parag Agrawal?

By Reuters | Updated: 30 November 2021

Twitter on Monday promoted company insider and technology head Parag Agrawal to replace Chief Executive Officer Jack Dorsey. The social networking platform joins tech giants Apple, Amazon, and Alphabet in tapping a company insider for the top job.

Here are some facts about Agrawal:

Decade with Twitter

Agrawal joined Twitter as a software engineer and has been with the company for over a decade. He was appointed chief technology officer in October 2017.

He oversaw Twitter’s technical strategy and was responsible for improving the pace of software development, while advancing the use of machine learning across the company.

Project Bluesky

Since December 2019, Agrawal has also been working on Project Bluesky, an independent team of open source architects, engineers and designers working to combat abusive and misleading information on Twitter.

Bluesky is seeking to introduce a new decentralised technology, the idea being that Twitter and others will become clients of Bluesky and rebuild their platforms on top of the standard, Dorsey has said previously.

Ex-Microsoft, Yahoo employee

Before joining Twitter, Agrawal worked at Microsoft, Yahoo, and AT&T Labs in their research units, according to his LinkedIn profile.

Stanford graduate

Agrawal has a Ph.D. in computer science from Stanford University and a bachelor’s degree in computer science and engineering from Indian Institute of Technology, Bombay.

© Thomson Reuters 2021
