
Social Networking

Twitter, Facebook, Others Will Have to Abide by Local Laws, Says Minister




By Press Trust of India | Updated: 31 October 2022

The new amendments to the IT rules impose a legal obligation on social media companies to make all-out efforts to prevent barred content and misinformation, the government said on Saturday, making it clear that platforms such as Twitter and Facebook operating in India will have to abide by local laws and the constitutional rights of Indian users.

The new rules provide for setting up appellate committees which can overrule decisions of the big tech firms on takedown or blocking requests.

The hardening of stance against the big tech companies comes at a time when discontent has been brewing over alleged arbitrary acts of social media platforms on flagged content, or not responding fast enough to grievances.

Amid concerns over the rising clout of Big Tech globally, the CEO of electric car maker Tesla, Elon Musk, on Friday completed his $44 billion (roughly Rs. 3,62,300 crore) takeover of Twitter, placing the world’s richest man at the helm of one of the most influential social media apps in the world. Incidentally, the microblogging platform has had multiple run-ins with the government in the past.

India’s tweaking of the IT rules allows the formation of Centre-appointed panels that will settle often-ignored user grievances against the content decisions of social media companies, Minister of State for IT Rajeev Chandrasekhar said, adding that this was necessitated by the “casual” and “tokenism” approach of digital platforms towards user complaints so far.

“That is not acceptable,” Chandrasekhar said at a media briefing explaining the amended rules.

The minister said that lakhs of messages around unresolved user complaints reflected the “broken” grievance redressal mechanism currently offered by platforms. He added that while the government will partner with social media companies towards the common goal of ensuring the Internet remains open, safe and trusted for Indians, it will not hesitate to act and crack down where public interest is compromised.

On whether penalties will be imposed on platforms for not complying, he said the government would not like to take punitive action at this stage but warned that, if the situation demands it in future, that could be considered too. The internet is evolving, as will the laws.

“We are not getting into the business of punitive measures, but there is an opinion that there should be punitive penalties for those platforms not following rules…it is an area we have steered clear of, but that is not to say it is not on our mind,” he cautioned.

The tighter IT norms raise the due-diligence and accountability obligations of platforms to fight illegal content proactively (the government has added deliberate misinformation to that list too), with a 72-hour window to take down flagged content. So far, intermediaries were only required to inform users not to upload certain categories of harmful or unlawful content.

“The obligations of intermediaries earlier was limited to notifying users of the rules but now there will be much more definite obligation on platforms. Intermediaries have to make efforts that no unlawful content is posted on platform,” the minister said.

These amendments impose a legal obligation on intermediaries to take reasonable efforts to prevent users from uploading such content, an official release said.

Simply put, the new provision will ensure that the intermediary’s obligation is not a “mere formality”.

“In the category of obligation we have added misinformation…intermediary should not be party to not just illegal content, but they can’t be party to any deliberate misinformation as content on platforms. Misinformation not just about media it is about advertising…illegal products and services, online betting, misinformation can be in fintech community, misrepresenting products and services. Misinformation also refers to false information about person or entity,” the minister said.

For effective outreach, platforms will have to communicate the rules and regulations in regional Indian languages.

The government has, in the new rules, added objectionable religious content (with intent to incite violence) alongside pornography, trademark infringements, fake information and content that could threaten the sovereignty of the nation to the categories that users can flag to social media platforms.

The words ‘defamatory’ and ‘libellous’ have been removed; whether any content is defamatory or libellous will be determined through judicial review.

Some of the content categories have been rephrased to deal particularly with misinformation, and content that could incite violence between different religious/caste groups (that is information promoting enmity between different groups on the grounds of religion or caste with the intent to incite violence).

The rules come in the backdrop of complaints regarding the action/inaction on the part of the intermediaries on user grievances regarding objectionable content or suspension of their accounts.

“The intermediaries now will be expected to ensure that there is no uploading of content that intentionally communicates any misinformation or information that is patently false or untrue hence entrusting an important responsibility on intermediaries,” the official release said.

The rules also make it explicit that intermediaries must respect the rights accorded to Indian citizens under Articles 14 (non-discrimination), 19 (freedom of speech, subject to certain restrictions) and 21 (right to privacy) of the Indian Constitution.

In a strong message to Big Tech companies, the minister asserted that the community guidelines of platforms – regardless of whether they are headquartered in the US, Europe, or elsewhere – cannot undermine the constitutional rights of Indians when such platforms operate in India. Chandrasekhar said platforms will have an obligation to remove, within 72 hours of flagging, any “misinformation” or illegal content or content that promotes enmity between different groups on the grounds of religion or caste with the intent to incite violence. He said the effort should be to take down illegal content “as fast as possible”.

The complaints around illegal content could range from child sexual abuse material to nudity to trademark and patent infringements, misinformation, impersonation of another person, content threatening the unity and integrity of the country as well as “objectionable” content that promotes “enmity between different groups on the grounds of religion or caste with the intent to incite violence”.

The modalities defining the structure and scope of the Grievance Appellate Committees will be worked out soon, he promised, adding that the process will start with one or two such panels, which will be expanded based on requirements. The panels will not have suo moto powers.

“Government is not interested in playing role of ombudsman. It is a responsibility we are taking reluctantly, because the grievance mechanism is not functioning properly,” the minister said. The idea is not to target any company or intermediary or make things difficult for them. The government sees internet and online safety as a shared responsibility of all, the minister noted.

It is pertinent to mention that big social media platforms have drawn flak in the past over hate speech, misinformation and fake news circulating on their platforms, and there have been persistent calls to make them more accountable. Microblogging platform Twitter has had several confrontations with the government over a slew of issues.

The government, in February 2021, notified IT rules that required social media platforms to appoint a grievance officer. Non-compliance with the IT rules results in these social media companies losing their intermediary status, which provides them exemptions from liability for third-party information and data hosted by them.

Social Networking

Meta Used Public Instagram, Facebook Posts to Train Its New AI Assistant




Meta also said it did not use private chats on its messaging services as training data for the AI model.
By Reuters | Updated: 29 September 2023

Meta Platforms used public Facebook and Instagram posts to train parts of its new Meta AI virtual assistant, but excluded private posts shared only with family and friends in an effort to respect consumers’ privacy, the company’s top policy executive told Reuters in an interview.

Meta also did not use private chats on its messaging services as training data for the model and took steps to filter private details from public datasets used for training, said Meta President of Global Affairs Nick Clegg, speaking on the sidelines of the company’s annual Connect conference this week.

“We’ve tried to exclude datasets that have a heavy preponderance of personal information,” Clegg said, adding that the “vast majority” of the data used by Meta for training was publicly available.

He cited LinkedIn as an example of a website whose content Meta deliberately chose not to use because of privacy concerns.

Clegg’s comments come as tech companies including Meta, OpenAI and Alphabet’s Google have been criticized for using information scraped from the internet without permission to train their AI models, which ingest massive amounts of data in order to summarize information and generate imagery.

The companies are weighing how to handle the private or copyrighted materials vacuumed up in that process that their AI systems may reproduce, while facing lawsuits from authors accusing them of infringing copyrights.

Meta AI was the most significant product among the company’s first consumer-facing AI tools unveiled by CEO Mark Zuckerberg on Wednesday at Meta’s annual products conference, Connect. This year’s event was dominated by talk of artificial intelligence, unlike past conferences which focused on augmented and virtual reality.

Meta made the assistant using a custom model based on the powerful Llama 2 large language model that the company released for public commercial use in July, as well as a new model called Emu that generates images in response to text prompts, it said.

The product will be able to generate text, audio and imagery and will have access to real-time information via a partnership with Microsoft’s Bing search engine.

The public Facebook and Instagram posts that were used to train Meta AI included both text and photos, Clegg said.

Those posts were used to train Emu for the image generation elements of the product, while the chat functions were based on Llama 2 with some publicly available and annotated datasets added, a Meta spokesperson told Reuters.

Interactions with Meta AI may also be used to improve the features going forward, the spokesperson said.

Clegg said Meta imposed safety restrictions on what content the Meta AI tool could generate, like a ban on the creation of photo-realistic images of public figures.

On copyrighted materials, Clegg said he was expecting a “fair amount of litigation” over the matter of “whether creative content is covered or not by existing fair use doctrine,” which permits the limited use of protected works for purposes such as commentary, research and parody.

“We think it is, but I strongly suspect that’s going to play out in litigation,” Clegg said.

Some companies with image-generation tools facilitate the reproduction of iconic characters like Mickey Mouse, while others have paid for the materials or deliberately avoided including them in training data.

OpenAI, for instance, signed a six-year deal with content provider Shutterstock this summer to use the company’s image, video and music libraries for training.

Asked whether Meta had taken any such steps to avoid the reproduction of copyrighted imagery, a Meta spokesperson pointed to new terms of service barring users from generating content that violates privacy and intellectual property rights.

© Thomson Reuters 2023


Meta to Offer Paid Versions of Facebook and Instagram in Europe to Avoid Ads: Report




Meta would reportedly continue to offer free versions of the apps with ads in the EU.
By Reuters | Updated: 2 September 2023

Meta Platforms is considering paid versions of Facebook and Instagram with no advertisements for users residing in the European Union (EU) as a response to scrutiny from regulators, the New York Times reported on Friday.

Those who pay for the subscriptions would not see ads while Meta would also continue to offer free versions of the apps with ads in the EU, the report said, citing three people with knowledge of the plans.

The report added that the possible move may help Meta combat privacy concerns and other scrutiny from the EU as it would give users an alternative to the company’s ad-based services, which rely on analyzing people’s data.

Meta did not immediately respond to a Reuters request for comment.

The social media behemoth has been in the crosshairs of EU antitrust regulators and lost a fight in July against a 2019 German order that barred it from collecting users’ data without consent.

It is unclear how much the paid versions of the app would cost, the NYT report said.

The company has also been fined NOK 1 million (roughly Rs. 77,51,000) per day since August 14 for breaching users’ privacy by harvesting user data and using it to target advertising at them. The company is seeking a temporary injunction against the order by Norway’s data protection authority, which imposes the daily fine for three months. The regulator, Datatilsynet, had said on July 17 that the company would be fined if it did not address privacy breaches the regulator had identified.

© Thomson Reuters 2023


Elon Musk’s X Adds 5-Second Delay to Links for NY Times, Reuters and Social Media Rivals Like Facebook: Report




Musk, who bought Twitter in October, has previously lashed out at news organizations and journalists who have reported critically on his companies.
By Reuters | Updated: 16 August 2023

Social media company X, formerly known as Twitter, delayed access to links to content on the Reuters and New York Times websites as well as rivals like Bluesky, Facebook, and Instagram, according to a Washington Post report on Tuesday.

Clicking a link on X to one of the affected websites resulted in a delay of about five seconds before the webpage loaded, the Washington Post reported, citing tests it conducted on Tuesday. Reuters also saw a similar delay in the tests it ran.

By late Tuesday afternoon, X appeared to have eliminated the delay. When contacted for comment, X confirmed the delay was removed but did not elaborate.

Billionaire Elon Musk, who bought Twitter in October, has previously lashed out at news organizations and journalists who have reported critically on his companies, which include Tesla and SpaceX. Twitter has previously prevented users from posting links to competing social media platforms.

Reuters could not establish the precise time when X began delaying links to some websites.

A user on Hacker News, a tech forum, posted about the delay earlier on Tuesday and wrote that X began delaying links to the New York Times on August 4. On that day, Musk criticized the publication’s coverage of South Africa and accused it of supporting calls for genocide. Reuters has no evidence that the two events are related.

A spokesperson for the New York Times said it has not received an explanation from X about the link delay.

“While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” the spokesperson said on Tuesday.

A Reuters spokesperson said: “We are aware of the report in the Washington Post of a delay in opening links to Reuters stories on X. We are looking into the matter.”

Bluesky, an X rival that has Twitter co-founder Jack Dorsey on its board, did not reply to a request for comment.

Meta, which owns Facebook and Instagram, did not immediately respond to a request for comment.

© Thomson Reuters 2023


Snapchat Said to Be Under Scrutiny From UK Watchdog Over Underage Users




Under UK data protection law, social media companies need parental consent before processing data of children under 13.
By Reuters | Updated: 8 August 2023

Britain’s data regulator is gathering information on Snapchat to establish whether the US instant messaging app is doing enough to remove underage users from its platform, two people familiar with the matter said.

Reuters reported exclusively in March that Snapchat owner Snap had removed only a few dozen children aged under 13 from its platform in Britain last year, while UK media regulator Ofcom estimates it has thousands of underage users.

Under UK data protection law, social media companies need parental consent before processing data of children under 13. Social media firms generally require users to be 13 or over but have had mixed success in keeping children off their platforms.

Snapchat declined to give details of any measures it might have taken to reduce the number of underage users.

“We share the goals of the ICO (Information Commissioner’s Office) to ensure digital platforms are age appropriate and support the duties set out in the Children’s Code,” a Snap spokesperson said.

“We continue to have constructive conversations with them on the work we’re doing to achieve this,” they added.

Before launching any official investigation, the ICO generally gathers information related to an alleged breach. It may issue an information notice, a formal request for internal data that may aid the investigation, before deciding whether to fine the individual or organisation being investigated.

Last year, Ofcom found 60 percent of children aged between eight and 11 had at least one social media account, often created by supplying a false date of birth. It also found Snapchat was the most popular app for underage social media users.

The ICO received a number of complaints from the public concerning Snap’s handling of children’s data after the Reuters report, a source familiar with the matter said.

Some of the complaints related to Snapchat not doing enough to keep young children off its platform, the source said.

The ICO has spoken to users and other regulators to assess whether there has been any breach by Snap, the sources said.

An ICO spokesperson told Reuters it continued to monitor and assess the approaches Snap and other social media platforms were taking to prevent underage children from accessing their platforms.

A decision on whether to launch a formal investigation into Snapchat will be made in the coming months, the sources said.


If the ICO found Snap to be in breach of its rules, the firm could face a fine equivalent to up to 4 percent of its annual global turnover, which according to a Reuters calculation would equate to $184 million (roughly Rs. 1,522 crore) based on its most recent financial results.

Snapchat and other social media firms are under pressure globally to better police content on their platforms.

The NSPCC (National Society for the Prevention of Cruelty to Children) said that figures it had obtained showed that Snapchat accounted for 43 percent of cases in which social media was used to distribute indecent images of children.

Snapchat did not respond when Reuters asked it to comment on that report.

Earlier this year, the ICO fined TikTok 12.7 million pounds ($16.2 million) for misusing children’s data, saying the Snap competitor did not “take sufficient action” to remove underage users.

A TikTok spokesperson said at the time that it “invested heavily” to keep under-13s off the platform and that its 40,000-strong safety team worked “around the clock” to keep it safe.

Snapchat does block users from signing up with a date of birth that puts them under the age of 13. However, other apps take more proactive measures to prevent underage children from accessing their platforms.

For example, if an under-13-year-old has been blocked from signing up to TikTok after entering their real date of birth, the app continues to block them from creating an account.

© Thomson Reuters 2023


Elon Musk’s X Sues Nonprofit That Tracks Hate Speech, Disinformation




The lawsuit stems from a media report that cited findings from CCDH's research saying that hate speech towards minority communities had increased on X.
By Reuters | Updated: 1 August 2023

Social media platform X, formerly known as Twitter, on Monday sued a nonprofit that fights hate speech and disinformation, accusing it of asserting false claims and encouraging advertisers to pause investment on the platform.

US media reported earlier that X, owned by Elon Musk, had sent a letter to the Center for Countering Digital Hate (CCDH) and threatened to sue the non-profit for unspecified damages.

In response to that letter, lawyers for the CCDH accused X of “intimidating those who have the courage to advocate against incitement, hate speech and harmful content online.” They also said that X’s allegations had no factual basis.

The lawsuit stems from a media report published in July that stated findings from CCDH’s research saying that hate speech had increased towards minority communities on the platform.

X and its CEO Linda Yaccarino labeled the report false and said it was based on “a collection of incorrect, misleading, and outdated metrics, mostly from the period shortly after Twitter’s acquisition.”

In a blog post on Monday, X said the CCDH had gained access to its data without authorisation and accused it of scraping data from its platform, violating X’s terms.

It reiterated that the metrics contained in the research were used out of context to make unsubstantiated assertions about X.

The CCDH did not respond to a request for comment outside regular business hours.

X recently filed lawsuits against four unnamed entities in Texas and Israel’s Bright Data for scraping data.

© Thomson Reuters 2023


Meta to Launch AI-Powered Chatbots With Different Personalities by September: Report




Meta has been reportedly designing prototypes for chatbots that can have humanlike discussions with its users.
By Reuters | Updated: 1 August 2023

Meta Platforms is preparing to launch a range of artificial intelligence (AI) powered chatbots that exhibit different personalities as soon as September, the Financial Times reported on Tuesday.

Meta has been designing prototypes for chatbots that can have humanlike discussions with its users, as the company attempts to boost its engagement with its social media platforms, according to the report, citing people with knowledge of the plans.

The Menlo Park, California-based social media giant is even exploring a chatbot that speaks like Abraham Lincoln and another that advises on travel options in the style of a surfer, the report added. The purpose of these chatbots will be to provide a new search function as well as offer recommendations.

The report comes as Meta executives are focusing on boosting retention on its new text-based app Threads, after the app lost more than half of its users in the weeks following its launch on July 5.

Meta did not immediately respond to a Reuters request for comment.

The Facebook parent reported a strong rise in advertising revenue in its earnings last week, forecasting third-quarter revenue above market expectations.

The company has been climbing back from a bruising 2022, buoyed by hype around emerging AI technology and an austerity drive in which it has shed around 21,000 employees since last fall.

Bloomberg News reported in July that Apple is working on AI offerings similar to OpenAI’s ChatGPT and Google’s Bard, adding that it has built its own framework, known as ‘Ajax’, to create large language models and is also testing a chatbot that some engineers call ‘Apple GPT’.

© Thomson Reuters 2023
