From Google to WhatsApp, and Twitter to Koo: Assessing the compliance status of intermediaries

Despite media reports, most intermediaries are not in compliance with the new Intermediary Rules, the bulk of which came into effect on February 25

Aditi Agrawal
Published: Jun 17, 2021 03:13:05 PM IST
Updated: Jun 21, 2021 11:31:08 AM IST


In the last week of May, most of the Indian media was practically palpitating over whether social media companies such as Facebook, WhatsApp and Twitter could be banned if they did not comply with the newly notified Intermediary Rules. Here’s a fun fact: For the bulk of the obligations under the new rules, including the traceability requirement that is already under litigation, the deadline was not May 25 but February 25, the day the rules were notified. And a smaller intermediary with fewer than 50 lakh users also had to comply by February 25 itself. 

Let that sink in.  

This means that if you run a blog about crocheting and you let your readers comment on it, you had to be in compliance on February 25. Or if you are a cyber café operator, you had to have a privacy policy, and rules and regulations, in place on February 25. Or if you are a content delivery network like Akamai, you must have appointed a grievance officer capable of acknowledging all complaints within 24 hours of receipt and disposing of them within 15 days. Or if you are a news publisher that managed to get 50 lakh registered accounts, you will have to publish monthly transparency reports about reader comments you may have taken down and hire a nodal contact person to coordinate with law enforcement agencies. 

Most public attention has been drawn to the compliance burdens placed upon significant social media intermediaries (SSMIs, that is, intermediaries with more than 50 lakh registered users in India that enable online interaction between two or more users and allow them to share, create and disseminate content), and that too primarily to the hiring or appointing of different kinds of officers. But the fact of the matter is that apart from the appointment of three senior officers—chief compliance officer, resident grievance officer and nodal contact person—and the publication of monthly transparency reports about content takedown orders, all other obligations of SSMIs came into effect on February 25 itself. 

This means that when Facebook and WhatsApp sued the government against the traceability requirement on May 25, they might have been three months into non-compliance. 


Sneha Jain, partner at Saikrishna & Associates, agrees that most obligations for intermediaries kicked in on February 25 and that, when the SSMIs filed their lawsuits, they may have been non-compliant. Nikhil Narendran, partner at law firm Trilegal, Udbhav Tiwari, public policy advisor at Mozilla, and Supreme Court advocate Priyadarshi Banerjee agree. 

“A literal reading of the rules reveals that the three-month window for compliance, which ended on May 25, applies only to obligations for appointing resident employees for compliance, coordination with law enforcement, and grievance redressal, and to publishing monthly compliance reports. This creates confusion around the compliance windows for other obligations as otherwise, they came into force on the day of notification itself, that is, February 25. It seems illogical, and potentially arbitrary, to make them apply immediately without any time for entities to react,” independent legal expert Malavika Raghavan tells Forbes India.  


What does compliance look like?

There are three levels of due diligence obligations within the Intermediary Rules depending on the size of the intermediary and the function performed by it. 

All intermediaries—which cover practically the entire gamut of internet service providers that allow us to access the internet as we know it—must, irrespective of size, comply with the following obligations: publish their rules and regulations, privacy policy and user agreement on their app or website, and notify users about them at least once a year.  

Content that must be forbidden under the user agreement or the rules and regulations includes content that is “patently false and untrue” and is published with the “intent to mislead or harass a person”. For Torsha Sarkar, policy officer at the Bengaluru-based Centre for Internet and Society, this could be used to target journalists and journalistic work. “How do you prove whether something is intended to harass? If I write an investigative piece on how you are illegally acquiring land, some people might find it problematic. And as a journalist, since this is a part of my job, I will be paid for it.” 

The Shreya Singhal judgment, as per Sarkar, made it clear that it is not the intermediary’s job to adjudicate such things. That 2015 judgment struck down Section 66A of the IT Act for stifling free speech, and held that an intermediary is obligated to take down content only on receipt of a valid order from a court or an authorised government agency.  

All intermediaries must also appoint a grievance officer, who must acknowledge all received grievances within 24 hours and dispose of them within 15 days of receipt.  

Furthermore, on receipt of a court order or a notice from any of the 10 authorised government agencies (notified by the home ministry in December 2018), an intermediary must block or remove the content in question within 36 hours. Similarly, for investigation purposes, or to prevent any crimes, the intermediary must turn over information it has within 72 hours of receiving the order.  

In the case of content that shows nudity (full or partial), sexual content or morphed images of such nature, the intermediary must remove it within 24 hours. Here, a judicial order or a notice from a law enforcement agency is not necessary; a complaint by an individual will suffice.  

If any content is taken down—whether because of a judicial order, a notice from an authorised agency, or in response to the grievance process—all information related to that content must be retained by the intermediary for 180 days, or longer if required, for investigation purposes.  

Even if a user deletes their account with an intermediary, their data must be retained for 180 days after account deletion. Brijesh Singh, inspector general of Maharashtra police (cyber), says this is important for investigations.  

However, Tiwari says there must be a compelling reason to justify retaining data for such a long duration after account deletion. “It should not be a permanent obligation but taken on a case-by-case basis,” he explains. He agrees that it may make sense for investigative purposes, but says there must be a judicial order for it. Most large platforms, for instance, allow law enforcement agencies to place account preservation requests for records about users who are suspects in investigations.  

All intermediaries were meant to be in compliance with these obligations on February 25 itself. 

There is a defined sub-class of intermediaries: social media intermediaries (SMIs). It includes any intermediary that “primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services”. Mainstream social media companies such as Facebook, Twitter and YouTube are definitely covered, as are email services, search engines that allow user comments, encyclopaedias, business-oriented transaction services and collaborative office tools. The latter were not covered in leaked drafts of the rules or even under the Personal Data Protection Bill, 2019. 

Narendran and Tiwari both point out that a social media intermediary has no real obligations beyond a general intermediary’s until and unless it has more than 50 lakh registered users in India. At that point, a specialised class of intermediaries—SSMIs—comes into existence, with added obligations.  

All SSMIs, such as Facebook, Twitter, Instagram, WhatsApp, YouTube, Koo and Mitron, among others, got three months to hire or appoint the three officers and to publish monthly transparency reports. Their other additional obligations—traceability for messaging services, use of automated tools to take down rape and child sexual abuse material and content identical to what has already been taken down, a physical contact address in India, a grievance tracking mechanism, voluntary verification of users—had to be complied with on February 25 itself, along with the basic due diligence required of all intermediaries. 

Given the expansive definition of an SMI, “it is not reasonable to expect any website that has comments to enable automated filtering, traceability, transparency reports for content taken down, etc”, says Tiwari. 

Where do different companies stand?  

Much has been made of how Google, Facebook and WhatsApp are now in compliance, at least in terms of having the three officers, while Twitter is not. However, a close reading of the rules reveals otherwise.  

The chief compliance officer (CCO), who is personally liable for ensuring that the SSMI fulfils its due diligence obligations, must be a senior employee of the company and a resident of India. Similarly, the nodal contact person (NCP), who is responsible for dealing with law enforcement agencies, must be distinct from the CCO, a senior employee, and a resident of India. The resident grievance officer (RGO) must also be an employee of the SSMI and a resident of India. The RGO’s name and contact details must be published on the website and/or the app of the SSMI. 

Until June 1, Facebook had not named its RGO in India on its website, and the provided address was for its external law firm—Shardul Amarchand Mangaldas—not for its own offices in India. On June 3, the company updated the website to reflect a name—Spoorthi Priya (a resident of India)—but it is not clear if this officer is a Facebook employee.

WhatsApp's RGO, Paresh B. Lal, is a resident of India but not a WhatsApp employee. He is a senior associate at the law firm AZB & Partners, two sources confirmed to Forbes India. Entrackr first reported this. A source also told Forbes India that Lal is the interim RGO and the same was communicated to MeitY as well. Before the Rules were notified, Ashish Chandra, the associate general counsel of WhatsApp, used to be the platform's grievance officer.

Twitter’s interim RGO, Dharmendra Chatur, is a resident of India, but is not an employee of Twitter—he is a partner at Poovayya & Co, an external law firm that has represented Twitter in multiple court cases in India. The physical address provided, too, is for Poovayya & Co’s Bengaluru office. Google’s grievance officer for India is a Google employee but not a resident of India. Airmeet (which did not specify its number of users) also has an interim RGO, but its RGO is its head of human resources. 

The name and details of the CCO and the nodal contact person, a senior government official explained to Forbes India, need not be published but must be communicated to the Ministry of Electronics and Information Technology (MeitY). None of the non-Indian companies Forbes India reached out to, including Facebook, WhatsApp, Google, Twitter, LinkedIn and Snap, would comment on whether that was the case. Twitter, as per a LinkedIn job post, is looking to hire a CCO. Airmeet, too, tells Forbes India that it is looking for a CCO.

Telegram has hired Abhimanyu Yadav as its "designated grievance officer" to deal with "public content" which is "not in accordance with the updated Intermediary Policy". It is not clear if the platform has hired a CCO or an NCP. Its FAQ page categorically states that Telegram does not process any requests related to private Telegram chats and group chats. For public channels and bots, which are publicly available, users can only "ping" Telegram about illegal content at abuse@telegram.org. This is not the same as hiring a CCO or an NCP. As far as cooperating with law enforcement agencies is concerned, Telegram proclaims, "To this day, we have disclosed 0 bytes of user data to third parties, including governments." German magazine Der Spiegel recently reported that German investigators have never received replies to the notices they sent to Telegram.

Most Indian companies, including MeitY’s MyGov, have made details of their officers public. Jio, which is an SSMI under the rules as it offers JioChat and JioMeet, declined to comment on its compliance status, while LinkedIn, Microsoft (which owns Outlook, Office365, Microsoft Teams and Skype), Zoom and Yahoo (Yahoo Mail) did not respond. Signal has not done any hiring in India. 

Transparency reports are onerous but welcome

“The incorporation of transparency reports is a good thing,” says Sarkar. “Intermediaries should be mandated to turn over data about how much content they took down in a reporting period that was marked as hate speech, or content taken down at the government’s orders,” she says. Such data helps researchers assess the expanse of censorship and how world events influence content removal decisions or orders. “For example, if you look at the end-of-2020 transparency reports, you will see many removals on health misinformation,” she says. 

Asked whether monthly transparency reports are too onerous a requirement, Sarkar says she cannot be sure because there is no data on it. When the Centre for Internet and Society submitted its comments on whether the Santa Clara Principles should be modified to better cater to small and medium-sized enterprises, it found that there was not enough evidence to suggest whether transparency and accountability measures, such as transparency reports, were more onerous for them to comply with. 

The Santa Clara Principles on Transparency and Accountability in Content Moderation were crystallised by a group of privacy researchers, activists and advocates in February 2018 “to obtain meaningful transparency and accountability around internet platforms’ increasingly aggressive moderation of user-generated content”.

However, the contours of these transparency reports have not been laid down. For instance, does an international SSMI turn over data about all content takedowns for all users across the world, or only for Indian users? And how does an SSMI ascertain which data relates to Indian users? Is it content that talks about India, content posted by Indians or users within India, or content reported by Indians or users within India? 

Banerjee believes that publishing these reports will not be easy. “For instance, if the content originates in India but the complaint emanates from the US, does it require reporting under these rules? Or, on the other hand, if the content originates in the US and the complaint about it is in India… it is kind of straightforward that it falls within the rules. But when you don’t pull it down globally, only block it within IP domains, how do you report it? I am not sure how much granularity is required here,” he says. 

YouTube, for instance, takes into account only the IP address from where the taken-down content was uploaded to compile its country-wise lists. Facebook and Instagram do not provide a country-wise break-up of automatically flagged and taken-down content. Most SSMIs provide country-wise details for government requests for data and for content taken down in response to judicial and government orders, but they do not reveal country-wise data on where the complaints emanated from. 

No company, Indian or otherwise, that we reached out to committed to publishing monthly transparency reports. Most global social media companies, including Google, Twitter and Snap, publish biannual transparency reports searchable by country, while Facebook publishes quarterly reports. Among Indian companies, only ShareChat has published a transparency report in the past, and that too only once. 

Are automated content takedowns obligatory or not?

Unlike in the 2018 draft of the rules, under the notified rules only SSMIs “shall endeavour” to use automated tools to take down content depicting rape, child sexual abuse material and content identical to what has already been removed. But there is confusion about whether or not this is an obligation. The language of the rules, Sarkar says, means that it is essentially obligatory “to try” to remove such content. “How will the government assess compliance in this scenario?” she asks. 

For Sarkar, this gradation—where automated tools are restricted to the more egregious kinds of content, and the obligation is placed only on SSMIs—is a good thing. “In the draft rules, every intermediary was supposed to have automated content takedowns. That does not make sense for a cloud service provider, which is also an intermediary under the IT Act,” she says. Similarly, the shortest takedown window—24 hours—is reserved for user complaints about content showing nudity or morphed images. Such content “needs a more rapid response”, so the shorter deadline is a step in the right direction, she says. 

Good faith moderation is now protected 

Both Tiwari and Banerjee appreciate that the rules now protect intermediaries that proactively take down content in good faith. The Good Samaritan exception essentially means that if an intermediary takes proactive steps to clean up its platform, it will not lose safe harbour. Such an exception is included in Section 230 of the American Communications Decency Act. 

“It is not as broad as it should be, but is better than the previous version of these rules,” Tiwari says. “The fear was that good faith proactive monitoring, such as taking down CSAM, would be read by overzealous litigators as amounting to editorial control, thus stripping such intermediaries of their safe harbour protection. That is now protected,” Banerjee says. It is now statutorily recognised that a basic amount of proactive filtering will not amount to editorial control, he says. 

However, Banerjee warns of “insidious insertions” within the rules. One example is the inclusion of “any human, automated or algorithmic editorial control for onward transmission or communication” among the activities that are not exempted for an intermediary. The 2018 version did not include “automated or algorithmic editorial control”, and this inclusion brings the rules into conflict with the protection given to good faith proactive content takedowns, he says. 

The way forward

The rules, since their notification, have faced significant backlash and litigation. Last week, 14 international human rights organisations, including the Electronic Frontier Foundation, Human Rights Watch, Reporters Without Borders and Access Now, called on the government to suspend the implementation of the rules and to review them through a “meaningful” public consultation. The statement called the onerous obligations placed on intermediaries an attempt to “intimidate intermediaries and their employees” into over-compliance, “without adhering to human rights obligations”, to avoid civil lawsuits or criminal complaints against their staff. 

On May 27, Twitter went public with its concerns about the new rules and asked for three additional months to comply (it was not granted the time, but the Delhi high court, while hearing a challenge against Twitter’s non-compliance, asked it to submit its response within three weeks). These rules, for Twitter, “inhibit free, open public conversation”. Last week, Nick Clegg, former deputy prime minister of the United Kingdom and currently Facebook’s vice president for global affairs and communications, called these rules “highly intrusive”. 

“There are several bases to argue the unconstitutionality of the rules, and we should all be working really hard to make sure that certain parts of the rules get struck down,” Narendran says. Tiwari agrees: “In its current state, the government should withdraw these guidelines and carry out a fresh public consultation with the latest draft as the standard before notifying them again.”

For Narendran, one of the biggest problems is that the rules did not undergo public consultation. While it is true that MeitY had carried out public consultations on the 2018 draft, the notified rules introduce new categories of intermediaries (SMI, SSMI) and a few new obligations as well, and these have not undergone any consultation. The much-criticised traceability requirement has been retained and, arguably, made worse. The Digital Media Code of Ethics, which regulates digital news publishers and streaming platforms, never saw the light of day before notification.  

The inclusion of the Digital Media Code of Ethics, pre-speech censoring, and the “sweeping delegated legislation with severe impact on free speech” are deeply problematic and can be constitutionally challenged, according to Narendran. 

It was only through an RTI filed by the Internet Freedom Foundation that it was learnt that the Vidhi Centre for Legal Policy provided legal advice on the rules at a cost of ₹968,576 (including GST). The same private think tank was involved in advising the home ministry on an unspecified service in fiscal year 2018-19, and advised the government on the Personal Data Protection Bill, the Aadhaar Act, 2016, and the Aarogya Setu Data Access and Knowledge Sharing Protocol, 2020, The Caravan had reported. It was paid ₹48,15,000 over five years for its legal services in formulating the Aadhaar Act, Scroll reported.  

Rajeev Chandrasekhar, Rajya Sabha MP (BJP), however, says that this is an “evolving regulation”. “No one in the policy space is saying that there is a foolproof way of regulating the internet. We are going from a regime of reasonably zero regulation of intermediaries and zero accountability to a regime where basic principles of intermediary liability, accountability and consumer redressal are incorporated,” he tells Forbes India. He says the entire world is grappling with this conundrum, but at least India has taken the lead and gone for “this way” instead of “a crackdown, which would have been much worse”.  

Update (June 18, 4:23 pm): Added details of WhatsApp's resident grievance officer.

Update (June 18, 1:18 pm): Added details of Telegram's resident grievance officer and how the platform does not share data with law enforcement agencies.
