
Social Networking

Facebook Said to Be Threatened With Shutdown in Vietnam Over Censorship Requests




By Reuters | Updated: 20 November 2020

Vietnam has threatened to shut down Facebook in the country if it does not bow to government pressure to censor more local political content on its platform, a senior official at the US social media giant told Reuters.

Facebook complied with a government request in April to significantly increase its censorship of “anti-state” posts for local users, but Vietnam asked the company again in August to step up its restrictions of critical posts, the official said.

“We made an agreement in April. Facebook has upheld our end of the agreement, and we expected the government of Vietnam to do the same,” said the official, who spoke on condition of anonymity citing the sensitivity of the subject.

“They have come back to us and sought to get us to increase the volume of content that we’re restricting in Vietnam. We’ve told them no. That request came with some threats about what might happen if we didn’t.”

The official said the threats included shutting down Facebook altogether in Vietnam, a major market for the social media company where it earns revenue of nearly $1 billion (roughly Rs. 7,400 crores), according to two sources familiar with the numbers.

Facebook has faced mounting pressure from governments over its content policies, including threats of new regulations and fines. But it has avoided a ban in all but the few places where it has never been allowed to operate, such as China.

In Vietnam, despite sweeping economic reform and increasing openness to social change, the ruling Communist Party retains tight control of media and tolerates little opposition. The country ranks fifth from bottom in a global ranking of press freedom compiled by Reporters Without Borders.

Vietnam’s foreign ministry said in response to questions from Reuters that Facebook should abide by local laws and cease “spreading information that violates traditional Vietnamese customs and infringes upon state interests”.

A spokeswoman for Facebook said it had faced additional pressure from Vietnam to censor more content in recent months.

In its biannual transparency report released on Friday, Facebook said it had restricted access to 834 items in Vietnam in the first six months of this year, following requests from the government of Vietnam to remove anti-state content.

‘Clear responsibility’

Facebook, which serves about 60 million users in Vietnam as the main platform for both e-commerce and expressions of political dissent, is under constant government scrutiny.

Reuters exclusively reported in April that Facebook’s local servers in Vietnam were taken offline early this year until it complied with the government’s demands.

Facebook has long faced criticism from rights groups for being too compliant with government censorship requests.

“However, we will do everything we can to ensure that our services remain available so people can continue to express themselves,” the spokeswoman said.

Vietnam has tried to launch home-grown social media networks to compete with Facebook, but none has reached any meaningful level of popularity. The Facebook official said the company had not seen an exodus of Vietnamese users to the local platforms.

The official said Facebook had been subject to a “14-month-long negative media campaign” in state-controlled Vietnamese press before arriving at the current impasse.

Asked about Vietnam’s threat to shut down Facebook, rights group Amnesty International said the fact it had not yet been banned after defying the Vietnamese government’s threats showed that the company could do more to resist Hanoi’s demands.

“Facebook has a clear responsibility to respect human rights wherever they operate in the world and Vietnam is no exception,” Ming Yu Hah, Amnesty’s deputy regional director for campaigns, said. “Facebook are prioritising profits in Vietnam, and failing to respect human rights”.

© Thomson Reuters 2020



Facebook Bans False Claims About COVID-19 Vaccines Debunked by Public Health Experts




By Reuters | Updated: 4 December 2020

Facebook on Thursday said it would remove false claims about COVID-19 vaccines that have been debunked by public health experts, following a similar announcement by Alphabet’s YouTube in October.

The move expands Facebook’s current rules against falsehoods and conspiracy theories about the pandemic. The social media company says it takes down coronavirus misinformation that poses a risk of “imminent” harm, while labeling and reducing distribution of other false claims that fail to reach that threshold.

Facebook said in a blog post that the global policy change came in response to news that COVID-19 vaccines will soon be rolling out around the world.

Two drug companies, Pfizer and Moderna, have asked US authorities for emergency use authorisation of their vaccine candidates. Britain approved the Pfizer vaccine on Wednesday, jumping ahead of the rest of the world in the race to begin the most crucial mass inoculation programme in history.

Misinformation about the new coronavirus vaccines has proliferated on social media during the pandemic, including through viral anti-vaccine posts shared across multiple platforms and by different ideological groups, according to researchers.

A November report by the nonprofit First Draft found that 84 percent of interactions generated by vaccine-related conspiracy content it studied came from Facebook pages and Facebook-owned Instagram.

Facebook said it would remove debunked conspiracy theories about COVID-19 vaccines, such as the claim that the vaccines' safety is being tested on specific populations without their consent.

“This could include false claims about the safety, efficacy, ingredients or side effects of the vaccines. For example, we will remove false claims that COVID-19 vaccines contain microchips,” the company said in a blog post. It said it would update the claims it removes based on evolving guidance from public health authorities.

Facebook did not specify when it would begin enforcing the updated policy, but acknowledged it would “not be able to start enforcing these policies overnight.”

The social media company has rarely removed misinformation about other vaccines under its policy of deleting content that risks imminent harm. It previously removed vaccine misinformation in Samoa where a measles outbreak killed dozens late last year, and it removed false claims about a polio vaccine drive in Pakistan that were leading to violence against health workers.

Facebook, which has taken steps to surface authoritative information about vaccines, said in October that it would also ban advertisements that discourage people from getting vaccines. In recent weeks, Facebook removed a prominent anti-vaccine page and a large private group, one for repeatedly breaking COVID misinformation rules and the other for promoting the QAnon conspiracy theory.

© Thomson Reuters 2020


Facebook Sued by Trump Administration for Favouring Immigrants Over US Workers




By Agence France-Presse | Updated: 4 December 2020

The Trump administration on Thursday sued Facebook, accusing it of discriminating against American workers by favoring immigrant applicants for thousands of high-paying jobs.

The Department of Justice’s lawsuit opens a new front in the administration’s push against tech companies, and in its clampdown on immigration, as President Donald Trump enters his final weeks in office.

The suit concerns more than 2,600 positions with an average salary of some $156,000 (roughly Rs. 1 crore), offered from January 2018 to September 2019.

“Facebook engaged in intentional and widespread violations of the law, by setting aside positions for temporary visa holders instead of considering interested and qualified US workers,” assistant attorney general Eric Dreiband, of the Justice Department’s Civil Rights Division, said in a statement outlining the department’s allegations.

The Internet giant reserved positions for candidates with H-1B “skilled worker” visas or other temporary work visas, the department said.

Facebook “channeled” jobs to visa holders by avoiding advertising on its careers website, accepting only physically mailed applications for some posts, or refusing to consider US workers at all, according to the suit.

The unusual move to file a lawsuit, with the Justice Department pivoting suddenly away from simply discussing its concerns with Facebook, could be seen as a rush to hit the courts before Trump leaves the White House in January.

The California-based social network said it planned to continue cooperating with the department as the case plays out.

Restrictions rejected
The lawsuit was filed just two days after a US federal judge blocked rule changes ordered by Trump that made it harder for people outside the country to get skilled-worker visas.

The US Chamber of Commerce, the Bay Area Council in Facebook’s home state of California and others had sued the Department of Homeland Security arguing that the changes rushed new restrictions through without a proper public review process.

Skilled-worker visas are precious to Silicon Valley tech firms hungry for engineers and other highly-trained talent, with Asia home to many keenly sought workers.

US District Court Judge Jeffrey White granted a motion to set aside two rules by the Departments of Labor and Homeland Security that would have compelled companies to pay H-1B visa workers higher wages and restricted the job types that qualify for the visas.

The Trump administration had cited the COVID-19 pandemic and its toll on the economy as reasons for skipping the required public notice and review processes for their new rules, according to court documents.

But White said in his ruling that the administration did not demonstrate “that the impact of the COVID-19 pandemic on domestic unemployment justified dispensing with the due deliberation that normally accompanies” making changes to the H-1B visa program.

Animosity toward immigration has been a hallmark of the Trump administration.

Facebook uses hiring practices standard in Silicon Valley, and US prosecutors were also eyeing other tech firms over their use of H-1B visas, according to a person familiar with the matter.

Antitrust as well?
On another legal front, federal regulators and US states are poised to hit Facebook with antitrust cases, US media reported Thursday, amid concerns that its practice of buying up rivals has harmed competition.

The company said earlier this year its executives were fielding questions from the US Federal Trade Commission (FTC) on an antitrust fact-finding mission.

The FTC declined to comment Thursday on reports in multiple US outlets including The New York Times and Washington Post that it is likely to file an antitrust suit against the social media giant.

An FTC review of acquisitions dating back to 2010 could potentially “unwind” some of the company’s deals.

Facebook is the leading Internet social network, reaching close to three billion people worldwide with its core platform, along with Instagram and messaging services WhatsApp and Messenger.

An estimated seven in 10 US adults use Facebook, and its reach allows it to play an outsized role in digital advertising and news delivery.

That influence means the network regularly faces complaints over its handling of political misinformation and hate speech.


Facebook Hate Speech Policy Revised to Target Slurs Against Blacks, Muslims: Report




By Agence France-Presse | Updated: 4 December 2020

Facebook on Thursday said it is revising its systems to prioritise blocking slurs against Black people, gays and other groups historically targeted by vitriol, no longer automatically filtering out barbs aimed broadly at whites, men or Americans.

The change in Facebook’s algorithm is a shift from the social network’s ethnicity and gender-neutral system that removed anti-white comments and posts such as “Men are dumb” or “Americans are stupid.”

“We know that hate speech targeted towards under-represented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” said Facebook spokeswoman Sally Aldous.

The changes are to the leading social network’s automated systems, meaning hateful posts about whites, men or Americans that are reported by users will still be deleted if they violate Facebook policies.

Over the past year, Facebook has also updated its policies to catch more implicit hate speech, such as depictions of blackface and stereotypes about Jewish people, Aldous noted.

“Thanks to significant investments in our technology we proactively detect 95 percent of the content we remove and we continue to improve how we enforce our rules as hate speech evolves over time,” Aldous said.

The software tweak will initially target the most blatant slurs, including those against Black people, Muslims, people of more than one race, the LGBTQ community and Jews, Facebook said.

‘Long overdue’
The move comes as the company faces pressure from civil rights groups that have long complained it does too little to police hate speech.

Earlier this year, more than 1,000 advertisers boycotted Facebook to protest its handling of hate speech and misinformation.

“This is an important and long overdue step forward,” Anti-Defamation League chief executive Jonathan Greenblatt said.

The ADL and other groups have advocated for Facebook to better fight anti-Semitism, racism, xenophobia and “all forms of extremism,” according to Greenblatt.

“While we are encouraged that Facebook is attacking the most serious symptoms of the disease that it permitted to spread for so many years, we need to see additional steps to cure the sickness of hate on social media,” Greenblatt said.

Facebook and other social platforms have been condemned for failing to stop abusive and hateful content including organised violence such as the massacre of the Rohingya minority in Myanmar and the beheading of French schoolteacher Samuel Paty near Paris.

Facebook has been adamant that it is vigilant when it comes to policing hate speech, calls for violence and misinformation.

The company said that since August it had identified more than 600 militarised social movements, and removed their pages or accounts, part of an effort that took down 22.1 million posts containing “hate speech.”

Social problem?
Critics of Facebook and other social networks argue they should be held accountable for violence organised on their platforms, calling for reforms of a law that shields Internet services from liability for content posted by third parties.

But some analysts argue the platforms can’t bear full responsibility for deep social problems which have led to extremism and violence in the streets.

Facebook and others have long grappled with how to purge toxic content while fending off accusations they are stifling free expression.

The Internet giant and its rival Twitter have been taken to task on Capitol Hill by Republicans who say the platforms are biased against conservatives.

On Wednesday, Twitter said it was expanding its definition of hateful content to ban language which “dehumanises” people on the basis of race, ethnicity or national origin.

Twitter said it would remove offending tweets when they are reported, and offered examples such as describing a particular ethnic group as “scum” or “leeches.”


WeChat Blocks Australia Prime Minister Scott Morrison’s Message in Doctored Image Dispute With China




By Reuters | Updated: 3 December 2020

China’s WeChat social media platform blocked a message by Australian Prime Minister Scott Morrison amid a dispute between Canberra and Beijing over the doctored tweeted image of an Australian soldier.

China rebuffed Morrison’s calls for an apology after its foreign ministry spokesman Zhao Lijian posted the picture of an Australian soldier holding a bloodied knife to the throat of an Afghan child on Monday.

The United States called China’s use of the digitally manipulated image a “new low” in disinformation.

Morrison took to WeChat on Tuesday to criticise the “false image”, while offering praise to Australia’s Chinese community.

In his message, Morrison defended Australia’s handling of a war crimes investigation into the actions of special forces in Afghanistan, and said Australia would deal with “thorny issues” in a transparent manner.

But that message appeared to be blocked by Wednesday evening, with a note appearing from the “Weixin Official Accounts Platform Operation Center” saying the content was unable to be viewed because it violated regulations, including distorting historical events and confusing the public.

Tencent, the parent company of WeChat, did not immediately respond to a request for comment.

Australian special forces allegedly killed 39 unarmed prisoners and civilians in Afghanistan, with senior commandos reportedly forcing junior soldiers to kill defenceless captives in order to “blood” them for combat, a four-year investigation found.

Australia said last week that 19 current and former soldiers would be referred for potential criminal prosecution.

China’s embassy has said the “rage and roar” from Australian politicians and media over the soldier image was an overreaction.

‘Hypocrisy is obvious to all’

Australia was seeking to “deflect public attention from the horrible atrocities by certain Australian soldiers”, it said.

Other nations, including the United States, New Zealand, and France, and the self-ruled island of Taiwan which China claims as its own, have expressed concern at the Chinese foreign ministry’s use of the manipulated image on an official Twitter account.

“The CCP’s latest attack on Australia is another example of its unchecked use of disinformation and coercive diplomacy. Its hypocrisy is obvious to all,” the US State Department said on Wednesday, referring to the Chinese Communist Party.

Jake Sullivan, tapped as national security adviser in the incoming administration of US President-elect Joe Biden, tweeted support for Australia without reference to China.

“America will stand shoulder to shoulder with our ally Australia and rally fellow democracies to advance our shared security, prosperity, and values,” he wrote.

France’s foreign affairs spokesman said on Tuesday the tweeted image was “especially shocking” and the comments by Zhao “insulting for all countries whose armed forces are currently engaged in Afghanistan”.

China’s embassy in Paris hit back on Wednesday, saying the soldier image was a caricature, adding that France has previously loudly defended the right to caricature.

It was an apparent reference to France’s row with the Muslim world over its defence of the publication of cartoons depicting the Prophet Mohammad.

WeChat has 690,000 active daily users in Australia, and in September told an Australian government inquiry it would prevent foreign interference in Australian public debate through its platform.

Morrison’s message had been read by 57,000 WeChat users by Wednesday.

Zhao’s tweet, pinned to the top of his Twitter account, had been “liked” by 60,000 followers, after Twitter labelled it as sensitive content but declined Canberra’s request to remove the image.

Twitter is blocked in China, but has been used by Chinese diplomats.

China on Friday imposed dumping tariffs of up to 200 percent on Australian wine imports, effectively shutting off the largest export market for the Australian wine industry.

© Thomson Reuters 2020


TikTok US Ban: Appeals Court Schedules December 14 Hearing on App Store Block




By Reuters | Updated: 3 December 2020

A federal appeals court said on Wednesday it will hear oral arguments on December 14 on the government’s appeal of an order that blocked a ban on Apple and Alphabet’s Google offering TikTok for download in US app stores.

US District Judge Carl Nichols in Washington on September 27 blocked the Commerce Department order hours before it was to prohibit new downloads of the Chinese-owned short video-sharing app.

The appeals panel consists of Judges Judith Rogers, Patricia Millett, and Robert Wilkins. All three were nominated by Democratic presidents.

The Trump administration last week extended to Friday a deadline for Chinese TikTok parent ByteDance to sell TikTok’s US assets. The Trump administration contends TikTok poses national security concerns as the personal data of US users could be obtained by China’s government. TikTok, which has over 100 million US users, denies the allegation.

The administration previously granted ByteDance a 15-day extension of the order issued in August. President Donald Trump on August 14 directed ByteDance to divest the app’s US assets within 90 days.

Under pressure from the US government, ByteDance has been in talks for months to finalise a deal with Walmart and Oracle to shift TikTok’s US assets into a new entity.

ByteDance made a new proposal aimed at addressing the US government’s concerns, Reuters reported last week.

The US Treasury said last week the extension was granted to review a recently received “revised submission.”

ByteDance made the proposal after disclosing on November 10 that it submitted four prior proposals, including one in November, that sought to address US concerns by “creating a new entity, wholly owned by Oracle, Walmart and existing US investors in ByteDance, that would be responsible for handling TikTok’s US user data and content moderation.”

US District Judge Wendy Beetlestone on October 30 blocked another aspect of a Commerce Department order scheduled to take effect November 12 that would have effectively barred TikTok from operating in the United States.

Beetlestone enjoined the agency from barring data hosting within the United States for TikTok, content delivery services and other technical transactions.

© Thomson Reuters 2020


Twitter Expands Hate Speech Rules to Include Race, Ethnicity and National Origin




By Reuters | Updated: 3 December 2020

Twitter on Wednesday expanded its policy barring hateful speech to include “language that dehumanises people on the basis of race, ethnicity and national origin,” it said in a statement.

The company banned speech that dehumanises others based on religion or caste last year and updated the rule in March to add age, disability and disease to the list of protected categories.

Civil rights group Color of Change, part of a coalition of advocacy organisations that have been pushing tech companies to reduce hate speech online, called the changes “essential concessions” following years of outside pressure.

A Twitter spokeswoman said the company had planned from the start to add new categories to the policy over time after testing to ensure it can consistently enforce updated rules.

In a statement, Color of Change Vice President Arisha Hatch criticised Twitter for failing to update the policy before November’s presidential election, despite repeated warnings by the advocacy groups about violent and dehumanising speech.

Hatch also said Twitter has declined to provide transparency into how its content moderators are trained and the efficacy of its artificial intelligence in identifying content that violates the policy.

“The jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users,” she said.

“Void of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings.”

© Thomson Reuters 2020
