A dad took photos of his naked toddler for the doctor. Google flagged him as a criminal

The episode cost Mark more than a decade of contacts, emails and photos, and made him the target of a police investigation

By Kashmir Hill
Published: Aug 22, 2022

Mark, who asked to be identified only by his first name for fear of potential reputational harm, in San Francisco, Calif. on Aug. 6, 2022. A police investigator was unable to get in touch with Mark because his Google Fi phone number no longer worked. (Aaron Wojack/The New York Times)

Mark noticed something amiss with his toddler. His son’s penis looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progression.

It was a Friday night in February 2021. His wife called their health care provider to schedule an emergency consultation for the next morning, by video because it was a Saturday and there was a pandemic going on. A nurse said to send photos so the doctor could review them in advance.

Mark’s wife grabbed her husband’s phone and texted a few close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system.

The episode cost Mark more than a decade of contacts, emails and photos, and made him the target of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.

Because technology companies capture so much data, they have been pressured to examine what passes through their servers to detect and prevent criminal behavior. Child advocates say the companies’ cooperation is essential to combat the online spread of sexual abuse imagery. But it can entail peering into private archives, an intrusion that has cast innocent behavior in a sinister light in at least two cases The New York Times has unearthed.

Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization, called the cases canaries “in this particular coal mine.”

After setting up a Gmail account in the mid-aughts, Mark, who is in his 40s, came to rely heavily on Google. His Android smartphone camera backed up his photos and videos to the Google Cloud. He had a phone plan with Google Fi.

Two days after taking the photos of his son, Mark’s phone made a notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse and exploitation.”

Mark was confused at first but then remembered his son’s infection. “Oh, God, Google probably thinks that was child porn,” he thought.

He filled out a form requesting a review of Google’s decision, explaining his son’s infection. At the same time, he discovered the domino effect of Google’s rejection. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, but his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.

In a statement, Google said, “Child sexual abuse material is abhorrent, and we’re committed to preventing the spread of it on our platforms.”

A few days after Mark filed the appeal, Google responded that it would not reinstate the account, with no further explanation.

The day after Mark’s troubles started, the same scenario was playing out in Texas. A toddler in Houston had an infection in his “intimal parts,” his father wrote in an online post that I stumbled upon while reporting out Mark’s story. At the pediatrician’s request, Cassio, who also asked to be identified only by his first name, used an Android to take photos, which were backed up automatically to Google Photos. He then sent them to his wife via Google’s chat service.

Cassio was in the middle of buying a house when his Gmail account was disabled. He asked his mortgage broker to switch his email address, which made the broker suspicious until Cassio’s real estate agent vouched for him.

“It was a headache,” Cassio said.

The tech industry’s first tool to seriously disrupt the vast online exchange of so-called child pornography was PhotoDNA, a database of known images of abuse, converted into unique digital codes; it could be used to quickly comb through large numbers of images to detect a match even if a photo had been altered in small ways. After Microsoft released PhotoDNA in 2009, Facebook and other tech companies used it to root out users circulating illegal and harmful imagery.
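
PhotoDNA’s exact hashing scheme is not public, but the matching idea it relies on can be illustrated with an ordinary perceptual hash. The minimal sketch below assumes the open-source Python imagehash library and a hypothetical set of known hashes; it shows how a new photo can be compared against a database of digital codes and still register a match after small alterations. It is an analogy for the approach, not PhotoDNA itself.

```python
# Simplified illustration of hash-based matching against a database of known
# images. PhotoDNA's actual algorithm is proprietary; this sketch uses the
# open-source "imagehash" library's perceptual hash (phash) to show the idea:
# similar images produce similar hashes, so small alterations (resizing, mild
# recompression) still fall within a match threshold.

from PIL import Image
import imagehash

# Hypothetical database of hashes of known images (placeholder hex values).
KNOWN_HASHES = {
    imagehash.hex_to_hash("d1d1b18944d1b589"),
    imagehash.hex_to_hash("f0e0c0a080402010"),
}

# Maximum Hamming distance (number of differing bits) still counted as a match.
MATCH_THRESHOLD = 5

def matches_known_image(path: str) -> bool:
    """Return True if the image at `path` is close to any known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(matches_known_image("uploaded_photo.jpg"))
```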

A bigger breakthrough came in 2018, when Google developed an artificially intelligent tool that could recognize never-before-seen exploitative images of children. That meant finding not just known images of abused children but images of unknown victims who could potentially be rescued by authorities. Google made its technology available to other companies, including Facebook.

When Mark’s and Cassio’s photos were automatically uploaded from their phones to Google’s servers, this technology flagged them. A Google spokesperson said the company scans only when an “affirmative action” is taken by a user; that includes when the user’s phone backs up photos to the company’s cloud.

A human content moderator for Google would have reviewed the photos after they were flagged by AI to confirm they met the federal definition of child sexual abuse material. When Google makes such a discovery, it locks the user’s account, searches for other exploitative material and, as required by federal law, makes a report to the CyberTipline at the National Center for Missing and Exploited Children.

In 2021, the CyberTipline reported that it had alerted authorities to “over 4,260 potential new child victims.” The sons of Mark and Cassio were counted among them.

In December, Mark received an envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.

The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case was closed. Hillard had tried to get in touch with Mark, but his phone number and email address hadn’t worked.

“I determined that the incident did not meet the elements of a crime and that no crime occurred,” Hillard wrote in his report.

Mark appealed his case to Google again, providing the police report, but to no avail.

Cassio was also investigated by police. A detective from the Houston Police Department called this past fall, asking him to come into the station.

After Cassio showed the detective his communications with the pediatrician, he was quickly cleared. But he, too, was unable to get his decade-old Google account back, despite being a paying user of Google’s web services.

Not all photos of naked children are pornographic, exploitative or abusive. Carissa Byrne Hessick, a law professor at the University of North Carolina who writes about child pornography crimes, said that legally defining what constitutes sexually abusive imagery can be complicated.

But Hessick said she agreed with police that medical images did not qualify. “There’s no abuse of the child,” she said. “It’s taken for nonsexual reasons.”

I have seen the photos that Mark took of his son. The decision to flag them was understandable: They are explicit photos of a child’s genitalia. But the context matters: They were taken by a parent worried about a sick child.

“We do recognize that in an age of telemedicine and particularly COVID, it has been necessary for parents to take photos of their children in order to get a diagnosis,” said Claire Lilley, Google’s head of child safety operations. The company has consulted pediatricians, she said, so that its human reviewers understand possible conditions that might appear in photographs taken for medical reasons.

Cassio was told by a customer support representative earlier this year that sending the pictures to his wife using Google Hangouts violated the chat service’s terms of service.

As for Mark, Lilley said that reviewers had not detected a rash or redness in the photos he took, and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.

Mark did not remember this video and no longer had access to it, but he said it sounded like a private moment he would have been inspired to capture, not realizing it would ever be viewed or judged by anyone else.

“I can imagine it. We woke up one morning. It was a beautiful day with my wife and son, and I wanted to record the moment,” Mark said. “If only we slept with pajamas on, this all could have been avoided.”

A Google spokesperson said the company stands by its decisions, even though law enforcement cleared the two men.

©2022 New York Times News Service