Facebook is shutting down facial recognition system after a decade

Facebook plans to shut down its decade-old facial recognition system this month, deleting the face scan data of more than 1 billion users and effectively eliminating a feature that has fueled privacy concerns, government investigations, a class-action lawsuit and regulatory woes.

By Kashmir Hill and Ryan Mac
Published: Nov 3, 2021

A facial recognition app on a smartphone in New York, Aug. 1, 2019.
Image: Amr Alfiky/The New York Times

Facebook plans to shut down its decade-old facial recognition system this month, deleting the face scan data of more than 1 billion users and effectively eliminating a feature that has fueled privacy concerns, government investigations, a class-action lawsuit and regulatory woes.

Jerome Pesenti, vice president of artificial intelligence at Meta, Facebook’s newly named parent company, said in a blog post Tuesday that the social network was making the change because of “the many concerns about the place of facial recognition technology in society.” He added that the company still saw the software as a powerful tool, but “every new technology brings with it potential for both benefit and concern, and we want to find the right balance.”

The decision shutters a feature that was introduced in December 2010 so that Facebook users could save time. The facial recognition software automatically identified people who appeared in users’ digital photo albums and suggested users “tag” them all with a click, linking their accounts to the images. Facebook has since built one of the largest repositories of digital photos in the world, partly thanks to this software.

Facial recognition technology, which has advanced in accuracy and power in recent years, has increasingly been the focus of debate because of how it can be misused by governments, law enforcement and companies. In China, authorities use the capabilities to track and control the Uyghurs, a largely Muslim minority. In the United States, law enforcement has turned to the software to aid policing, leading to fears of overreach and mistaken arrests. Some cities and states have banned or limited the technology to prevent potential abuse.

Facebook only used its facial recognition capabilities on its own site and did not sell its software to third parties. Even so, the feature became a privacy and regulatory headache for the company. Privacy advocates repeatedly raised questions about how much facial data Facebook had amassed and what the company could do with such information. Images of faces that are found on social networks can be used by startups and other entities to train facial recognition software.

When the Federal Trade Commission fined Facebook a record $5 billion to settle privacy complaints in 2019, the facial recognition software was among the concerns. Last year, the company also agreed to pay $650 million to settle a class-action lawsuit in Illinois that accused Facebook of violating a state law that requires residents’ consent to use their biometric information, including their “face geometry.”

The social network announced the facial recognition change as it grapples with intense public scrutiny. Lawmakers and regulators have been up in arms over the company in recent months after a former Facebook employee, Frances Haugen, leaked thousands of internal documents that showed the firm was aware of how it enabled the spread of misinformation, hate speech and violence-inciting content.

The revelations have led to congressional hearings and regulatory inquiries. Last week, Mark Zuckerberg, the chief executive, renamed Facebook’s parent company as Meta and said he would shift resources toward building products for the next online frontier, a digital world known as the metaverse.

The change affects more than one-third of Facebook’s daily users who had facial recognition turned on for their accounts, according to the company. That meant they received alerts when new photos or videos of them were uploaded to the social network. The feature had also been used to flag accounts that might be impersonating someone else and was incorporated into software that described photos to blind users.

“Making this change required us to weigh the instances where facial recognition can be helpful against the growing concerns about the use of this technology as a whole,” said Jason Grosse, a Meta spokesperson.

Although Facebook plans to delete more than 1 billion facial recognition templates, which are digital scans of facial features, by December, it will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace. The company has also not ruled out incorporating facial recognition technology into future products, Grosse said.

Privacy advocates nonetheless applauded the decision.

“Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology,” said Adam Schwartz, a senior lawyer with the Electronic Frontier Foundation, a civil liberties organization. “Corporate use of face surveillance is very dangerous to people’s privacy.”

Facebook is not the first large technology company to pull back on facial recognition software. Amazon, Microsoft and IBM have paused or ceased selling their facial recognition products to law enforcement in recent years while expressing concerns about privacy and algorithmic bias and calling for clearer regulation.

Facebook’s facial recognition software has a long and expensive history. When the software was rolled out in Europe in 2011, data protection authorities there said the move was illegal and that the company needed consent to analyze photos of a person and extract the unique pattern of an individual face. In 2015, the technology also led to the filing of the class-action suit in Illinois.

Over the last decade, the Electronic Privacy Information Center, a Washington-based privacy advocacy group, filed two complaints about Facebook’s use of facial recognition with the FTC. When the FTC fined Facebook in 2019, it named the site’s confusing privacy settings around facial recognition as one of the reasons for the penalty.

“This was a known problem that we called out over 10 years ago, but it dragged out for a long time,” said Alan Butler, EPIC’s executive director. He said he was glad Facebook had made the decision but added that the protracted episode exemplified the need for more robust U.S. privacy protections.

“Every other modern democratic society and country has a data protection regulator,” Butler said. “The law is not well designed to address these problems. We need more clear legal rules and principles and a regulator that is actively looking into these issues day in and day out.”

Butler also called for Facebook to do more to prevent its photos from being used to power other companies’ facial recognition systems, such as Clearview AI and PimEyes, startups that have scraped photos from the public web, including from Facebook and from its sister app, Instagram.

In Meta’s blog post, Pesenti wrote that facial recognition’s “long-term role in society needs to be debated in the open” and that the company “will continue engaging in that conversation and working with the civil society groups and regulators who are leading this discussion.”

Meta has discussed adding facial recognition capabilities to a future product. In an internal meeting in February, an employee asked if the company would let people “mark their faces as unsearchable” if future versions of a planned smart glasses device incorporated facial recognition technology, according to attendees. The meeting was first reported by BuzzFeed News.

In the meeting, Andrew Bosworth, a longtime company executive who will become Meta’s chief technology officer next year, told employees that facial recognition technology had real benefits but acknowledged its risks, according to attendees and his tweets. In September, the company introduced a pair of glasses with a camera, speakers and a computer processing chip in partnership with Ray-Ban; it did not include facial recognition capabilities.

“We’re having discussions externally and internally about the potential benefits and harms,” said Grosse, the Meta spokesperson. “We’re meeting with policymakers, civil society organizations and privacy advocates from around the world to fully understand their perspectives before introducing this type of technology into any future products.”

©2021 New York Times News Service