

Twitter, Facebook, Others Will Have to Abide by Local Laws, Says Minister


By Press Trust of India | Updated: 31 October 2022

The new amendments to the IT rules impose a legal obligation on social media companies to make all-out efforts to prevent barred content and misinformation, the government said on Saturday, making it clear that platforms such as Twitter and Facebook operating in India will have to abide by local laws and respect the constitutional rights of Indian users.

The new rules provide for setting up appellate committees which can overrule decisions of the big tech firms on takedown or blocking requests.

The hardening of stance against the big tech companies comes at a time when discontent has been brewing over social media platforms’ allegedly arbitrary handling of flagged content and their failure to respond quickly enough to grievances.

Amid concerns over the rising clout of Big Tech globally, the CEO of electric car maker Tesla, Elon Musk, on Friday completed his $44 billion (roughly Rs. 3,62,300 crore) takeover of Twitter, placing the world’s richest man at the helm of one of the most influential social media apps in the world. Incidentally, the microblogging platform has had multiple run-ins with the government in the past.

India’s amended IT rules allow the formation of Centre-appointed panels that will settle often-ignored user grievances against the content decisions of social media companies, Minister of State for IT Rajeev Chandrasekhar said, adding that this was necessitated by the “casual” approach and “tokenism” of digital platforms towards user complaints so far.

“That is not acceptable,” Chandrasekhar said at a media briefing explaining the amended rules.

The minister said that lakhs of messages around unresolved user complaints reflected the “broken” grievance redressal mechanism currently offered by platforms, and added that while the government will partner with social media companies towards the common goal of ensuring the Internet remains open, safe, and trusted for Indians, it will not hesitate to act, and crack down, where public interest is compromised.

On whether penalties will be imposed on platforms for not complying, he said the government would not like to bring punitive action at this stage, but warned that if the situation demands it in future, that could be considered too. The internet is evolving, and the laws will evolve with it.

“We are not getting to the business of punity, but there is an opinion that there should be punitive penalties for those platforms not following rules…it is an area we have steered clear of, but that is not to say it is not on our mind,” he cautioned.

The tighter IT norms raise the due diligence and accountability obligations of platforms to fight illegal content proactively (the government has added deliberate misinformation to that list too), with a 72-hour window to take down flagged content. So far, intermediaries were only required to inform users about not uploading certain categories of harmful or unlawful content.

“The obligations of intermediaries earlier was limited to notifying users of the rules but now there will be much more definite obligation on platforms. Intermediaries have to make efforts that no unlawful content is posted on platform,” the minister said.

These amendments impose a legal obligation on intermediaries to take reasonable efforts to prevent users from uploading such content, an official release said.

Simply put, the new provision will ensure that the intermediary’s obligation is not a “mere formality”.

“In the category of obligation we have added misinformation…intermediary should not be party to not just illegal content, but they can’t be party to any deliberate misinformation as content on platforms. Misinformation not just about media it is about advertising…illegal products and services, online betting, misinformation can be in fintech community, misrepresenting products and services. Misinformation also refers to false information about person or entity,” the minister said.

For effective outreach, platforms will have to communicate the rules and regulations in regional Indian languages.

The government has, in the new rules, added objectionable religious content (posted with intent to incite violence) to the categories of content that users can flag to social media platforms, alongside pornography, trademark infringements, fake information, and content that could threaten the sovereignty of the nation.

The words ‘defamatory’ and ‘libellous’ have been removed; whether any content is defamatory or libellous will be determined through judicial review.

Some of the content categories have been rephrased to deal particularly with misinformation, and with content that could incite violence between different religious or caste groups (that is, information promoting enmity between different groups on the grounds of religion or caste with the intent to incite violence).

The rules come in the backdrop of complaints regarding the action/inaction on the part of the intermediaries on user grievances regarding objectionable content or suspension of their accounts.

“The intermediaries now will be expected to ensure that there is no uploading of content that intentionally communicates any misinformation or information that is patently false or untrue hence entrusting an important responsibility on intermediaries,” the official release said.

The rules also make it explicit that intermediaries must respect the rights accorded to Indian citizens under Articles 14 (non-discrimination), 19 (freedom of speech, subject to certain restrictions) and 21 (right to privacy) of the Indian Constitution.

In a strong message to Big Tech companies, the minister asserted that the community guidelines of platforms, regardless of whether they are headquartered in the US, Europe, or elsewhere, cannot undermine the constitutional rights of Indians when such platforms operate in India. Chandrasekhar said platforms will have an obligation to remove, within 72 hours of flagging, any “misinformation” or illegal content, or content that promotes enmity between different groups on the grounds of religion or caste with the intent to incite violence. He said the effort should be to take down illegal content “as fast as possible”.

The complaints around illegal content could range from child sexual abuse material to nudity to trademark and patent infringements, misinformation, impersonation of another person, content threatening the unity and integrity of the country as well as “objectionable” content that promotes “enmity between different groups on the grounds of religion or caste with the intent to incite violence”.

The modalities defining the structure and scope of the Grievance Appellate Committees will be worked out soon, he promised, adding that the process will start with one or two such panels, which will be expanded based on requirements. The panels will not have suo motu powers.

“Government is not interested in playing role of ombudsman. It is a responsibility we are taking reluctantly, because the grievance mechanism is not functioning properly,” the minister said. The idea is not to target any company or intermediary or make things difficult for them. The government sees internet and online safety as a shared responsibility of all, the minister noted.

It is pertinent to mention that big social media platforms have drawn flak in the past over hate speech, misinformation and fake news circulating on their platforms, and there have been persistent calls to make them more accountable. Microblogging platform Twitter has had several confrontations with the government over a slew of issues.

The government, in February 2021, notified IT rules that required social media platforms to appoint a grievance officer. Non-compliance with the IT rules results in these social media companies losing their intermediary status, which provides them exemption from liability for any third-party information and data hosted by them.


Biden Administration Tells US Supreme Court Section 230 of Communications Decency Act Has Limits


Section 230 of the US Communications Decency Act holds that social media firms can't be treated as the publisher of information posted by users.
By Reuters | Updated: 8 December 2022

The Biden administration argued to the US Supreme Court on Wednesday that social media giants like Google could in some instances have responsibility for user content, adopting a stance that could potentially undermine a federal law shielding companies from liability.

Lawyers for the US Department of Justice made their argument in the high-profile lawsuit filed by the family of Nohemi Gonzalez, a 23-year-old American citizen killed in 2015 when Islamist militants opened fire on the Paris bistro where she was eating.

The family argued that Google was in part liable for Gonzalez’ death because YouTube, which is owned by the tech giant, essentially recommended videos by the Islamic State group to some users through its algorithms. Google and YouTube are part of Alphabet (GOOGL.O).

The case reached the Supreme Court after the San Francisco-based 9th US Circuit Court of Appeals sided with Google, saying it was protected from such claims by Section 230 of the Communications Decency Act of 1996.

Section 230 holds that social media companies cannot be treated as the publisher or speaker of any information provided by other users.

The law has been sharply criticised across the political spectrum. Democrats claim it gives social media companies a pass for spreading hate speech and misinformation.



Meta in Big Tech Club but Dwarfed by ‘Giant Tech’ Company Apple, Nick Clegg Says


Apple's tracking protection for the iPhone, introduced last year, has contributed to a halving of Meta's third-quarter profits this year.
By Agence France-Presse | Updated: 8 December 2022

Facebook parent Meta may be in the Big Tech club but it sees itself as being dwarfed by “Giant Tech” company — and corporate foe — Apple, a top executive, Nick Clegg, said Wednesday.

“There’s Big Tech and there’s Giant Tech,” Clegg told an audience in Brussels, where Meta was courting policymakers with its latest virtual reality (VR) gear.

“I mean Apple is now, what, eight times the size of Meta” in terms of stock market capitalisation, he said.

“I mean, it’s just there is very, very, very, very big” in the Big Tech sector, and Apple is it, Clegg added.

The comparison underlines Meta’s steep market slide over the past 16 months — and the bad blood with Apple, which has eviscerated Meta’s data collection strategy.

Apple last year introduced a data privacy option on its hugely popular iPhones that prevents Meta and other online data collectors from getting user tracking information they previously relied upon to target advertising.

That has contributed to a halving of Meta’s third-quarter profits this year.

The US company’s costly focus on the metaverse, a virtual world where users appearing as digital avatars can interact, has also played a role.

Meta — re-branded to reflect its focus — has spent a staggering $100 billion (roughly Rs. 8.2 lakh crore) to date on building that technology, whose widespread adoption is forecast to be many years away.

Meta last month announced it was axing 11,000 employees — 13 percent of its workforce — in a general tech belt-tightening that has also seen jobs shed at Twitter, Amazon, and Hewlett-Packard (HP).

Challenge from China

Meta’s stock market capitalisation has slid from an all-time high of $1.07 trillion (roughly Rs. 88 lakh crore) in August 2021 to just over $300 billion (roughly Rs. 25 lakh crore) today — a 72 percent drop.

Apple’s, by contrast, has stayed steadily above $2 trillion (roughly Rs. 165 lakh crore) since late 2020, and is currently around $2.3 trillion (roughly Rs. 190 lakh crore).

Meta has long complained that Apple is building a “walled garden”, with its users locked into its devices, operating system and app store, at the expense of Meta and other online players.

Both Meta and Apple, as well as other Big Tech companies, have repeatedly come under the regulatory microscope in the European Union and the United States as their commercial strategies butt up against antitrust and data privacy concerns.

But Clegg said China was increasingly challenging the US domination of the online world.

“You’ve got US and Chinese big tech now really kind of looming over the whole scene,” he said.

“And don’t, by the way, underestimate how aggressively Chinese big tech is investing in the metaverse,” he added, pointing to the Pico VR headsets being marketed by ByteDance, the Chinese owner of the popular social app TikTok.

Meta’s own investment into VR and Augmented Reality — collectively known as XR, or extended reality — showed its belief that “the biggest bets are the bets which are furthest away… and they’re also the ones where the technology is most expensive,” Clegg said.

Investor criticism of that focus, and a “narrative of pessimism” about Meta’s focus on it, “profoundly underestimates the very, very strong health of the underlying business” of the company, he said.



Twitter Blue Pricing to Be Lowered for Web Users to $7, App Store Subscribers to Pay $11: Report


Twitter Blue is yet to be relaunched by the microblogging platform, weeks after it was halted by Elon Musk.
By Reuters | Updated: 8 December 2022

Twitter plans to change the pricing of its Twitter Blue subscription product to $7 (roughly Rs. 600) from $7.99 (roughly Rs. 700) if users pay for it through the website, and $11 (roughly Rs. 900) if they do so through its iPhone app, the Information reported on Wednesday, citing a person briefed on the plans.

The move was likely a pushback against the 30 percent cut that Apple takes on revenues from apps on its operating system, the report said, with lower pricing for the website likely to drive more users to that platform as opposed to signing up on their iPhones.

It did not mention whether pricing would change for the Android platform as well.

Last week, in a series of tweets, Musk accused Apple of threatening to block Twitter from its App Store without saying why, and said the iPhone maker had stopped advertising on the social media platform.

In the first quarter of 2022, Apple was the top advertiser on Twitter, spending $48 million (roughly Rs. 390 crore) and accounting for more than 4 percent of total revenue for the period, the Washington Post reported, citing an internal Twitter document.

Among the list of grievances tweeted by Musk was the up to 30 percent fee Apple charges software developers for in-app purchases.

He also posted a meme suggesting he was willing to “go to war” with Apple rather than pay the commission.

The fee has drawn criticism and lawsuits from companies such as Epic Games, the maker of Fortnite, while attracting the scrutiny of regulators globally.

The commission could weigh on Musk’s attempts to boost subscription revenue at Twitter, in part to make up for the exodus of advertisers over content moderation concerns.

Musk later met Apple chief executive Tim Cook at the company’s headquarters and subsequently tweeted that the misunderstanding about Twitter being removed from Apple’s App Store had been resolved.

Twitter and Apple did not immediately respond to a request for comment.

© Thomson Reuters 2022



EU Said to Prepare to Bar Meta From Running Ads Based on Personal Data: All Details


Austrian privacy activist Max Schrems filed a complaint against Meta with Ireland's data protection agency in 2018.
By Reuters | Updated: 7 December 2022

Meta will only be able to run advertising based on personal data with users’ consent, according to a confidential EU privacy watchdog decision, a person familiar with the matter said on Tuesday, in a blow to the US social network.

The Irish data protection agency, which oversees Meta because its European headquarters is located in Dublin, has been given a month to issue a ruling based on the European Data Protection Board’s (EDPB) binding decision.

The EDPB will likely require the Irish body to hand out fines, the person said, asking not to be named because of the sensitivity of the issue.

Big Tech’s targeted ad model, and how data is collected and used, have drawn regulatory scrutiny around the world.

Shares of the company were down 6.2 percent in mid-session trade. Google, Snap, and Pinterest, which are reliant on digital advertising, fell 2.2 percent, 8 percent, and 4 percent respectively.

The Irish case against Meta was triggered by a complaint by Austrian privacy activist Max Schrems in 2018.

“Instead of having a yes/no option for personalised ads, they just moved the consent clause in the terms and conditions. This is not just unfair but clearly illegal. We are not aware of any other company that has tried to ignore the GDPR in such an arrogant way,” Schrems said in a statement.

He said the EDPB’s ruling means that Meta must allow users to have a version of all apps that do not use personal data for ads while the company would still be allowed to use non-personal data to personalise ads or simply ask users for consent.

The 27-country bloc’s landmark privacy rules known as the General Data Protection Regulation went into effect in 2018.

Meta is engaging with the Irish body, a Meta spokesperson said.

“GDPR allows for a range of legal bases under which data can be processed, beyond consent or performance of a contract. Under the GDPR there is no hierarchy between these legal bases, and none should be considered better than any other,” the spokesperson said.

Apple’s new privacy rules, which limit digital advertisers from tracking iPhone users, have also been a blow to the Facebook parent.

An EDPB spokeswoman declined to provide details of the decisions made. The agency said it stepped in after other national watchdogs disagreed with the Irish agency’s draft decision.

Its draft decisions on Meta’s parent Facebook and Instagram focus on the lawfulness and transparency of processing for behavioural advertising, while its decision on WhatsApp concerns the lawfulness of processing for the purpose of the improvement of services.

“The DPC cannot comment on the contents of the decisions at this point. We have one month to adopt the EDPB’s binding decisions and will publish details then,” the Irish Data Protection Commission said.

Meta may have to change its business model, said Helena Brown, head of data & privacy at London-based law firm Addleshaw Goddard.

“The direction of travel seems to be that the European regulators will not allow Meta to hide behind ‘provision of services’ as its basis for using personal data for behavioural advertising,” she said.

“Instead, Meta may need to change its approach to seeking clear, explicit consent instead. It will be a challenge for Meta to be able to explain its practices in a way that such consent can be lawful and well-informed,” Brown said.

The Wall Street Journal first reported on the EDPB ruling.

© Thomson Reuters 2022



Twitter Fired Deputy General Counsel Over Concerns About Role in Information Suppression, Elon Musk Says


Journalist Matt Taibbi in collaboration with Elon Musk last week published the "Twitter Files" alleging suppression of information on Twitter.
By ANI | Updated: 7 December 2022

Twitter CEO Elon Musk has said that he fired the microblogging website’s deputy general counsel James Baker over concerns about his role in information suppression under the previous management.

“In light of concerns about (James) Baker’s possible role in suppression of information important to the public dialogue, he was exited from Twitter today,” Musk tweeted on Tuesday.


Last week, journalist Matt Taibbi, in collaboration with Musk, published the “Twitter Files”.

This set of documents consisted mainly of Twitter’s internal communications, disclosing links with political actors and focusing on how the social network blocked stories related to Hunter Biden’s laptop in the lead-up to the 2020 US presidential election.

The published files alleged that the previous Twitter management took steps to suppress reporting regarding Hunter Biden’s laptop ahead of the 2020 US Presidential Election.

According to the Twitter Files published by Taibbi, Twitter deputy general counsel Baker played a role in the discussion about whether the laptop story fell under Twitter’s “hacked materials” policy.

“I support the conclusion that we need more facts to assess whether the materials were hacked,” the documents published by Taibbi cited Baker as saying in one of the emails. “At this stage, however, it’s reasonable for us to assume that they may have been and that caution is warranted.”

Hunter Biden reportedly abandoned his laptop at Isaac’s repair shop in 2019, while his father, Joe Biden, was running to become US president. The contents of the laptop were later made public. Emails obtained by Western media from the laptop were cited in support of Russian claims that the US president’s son helped fund bioweapon research in Ukraine.

The Bidens have faced scrutiny and criticism from Republicans and others for their alleged misconduct in Hunter Biden’s foreign business dealings, which came into the public spotlight following the release of the emails.

On Monday, the White House dismissed the Twitter Files as “full of old news”.

“By Twitter on — okay. So, look, we see this as a — an interesting or a coincidence, if I may, that he would so haphazardly — Twitter would so haphazardly push this distraction that is a — that is full of old news, if you think about it,” White House Press Secretary Karine Jean-Pierre said during a press briefing.



Facebook Dating Will Allow Users to Verify Their Age Using AI Face Scanning, Meta Says


Meta says the new age verification systems will help stop children from accessing features meant for adults.
By ANI | Updated: 6 December 2022

Meta on Monday announced that it has introduced a new method for users to verify their age on its Facebook Dating service. Facebook is experimenting with methods, such as using an AI face scanner, to allow users of the platform’s dating service to verify their age.

Meta announced in a blog post that it would start prompting users on Facebook Dating to verify that they’re over 18 if the platform suspects a user is underage.

Users can then verify their age either by uploading a copy of their ID or by sharing a selfie video that Facebook passes to a third-party company, Yoti, which, according to Meta, uses facial cues to estimate a user’s age without identifying them.

Meta says the new age verification systems will help stop children from accessing features meant for adults. It doesn’t appear that there are any requirements for adults to verify their age on Facebook Dating.

The US social media giant has used Yoti for other age verification purposes, including vetting Instagram users who attempt to change their birthdate to indicate they are 18 or older.

However, according to a report by The Verge, the system isn’t equally accurate for all people: Yoti’s data shows that its accuracy is worse for “female” faces and people with darker complexions.

Last year, Instagram announced that it had started prompting users to fill in their birthday details. The prompts could initially be dismissed but the social media giant eventually made it compulsory for users who wanted to continue using Instagram. The prompts were designed to ascertain how old users were on Instagram and prevent content that isn’t suitable for young people to appear on their feed. At the time, Instagram had stated that the information is necessary for new features it was developing to protect young people.
