This article was automatically translated from Russian into English by Google Translate and has not been edited.

A man took a picture of his son for a doctor and almost ended up in jail: how Google can ruin your life

Google has an automated tool for detecting abusive images of children. But the system can make mistakes, and the consequences can be serious, The New York Times reports.

Photo: iStock

Mark noticed that something was wrong with his little boy: his son's penis looked swollen and was hurting him. Mark, a father in San Francisco, picked up his Android smartphone and took photos to document the problem and track its progress.

It was a Friday night in February 2021. His wife called an advice nurse at their healthcare provider to schedule an emergency video consultation for the next morning, since it would be a Saturday and the pandemic was still on. The nurse said to send photos so the doctor could review them ahead of time.

Mark's wife grabbed her husband's phone and sent a few high-quality close-ups of their son's groin area to her iPhone so she could upload them to the healthcare provider's messaging system. In one, Mark's hand was visible, helping to better show the swelling. Mark and his wife gave no thought to the tech giants that made this rapid capture and exchange of digital data possible, or to what those giants might think of the images.


With the help of the photos, the doctor diagnosed the problem and prescribed antibiotics, which quickly cleared it up. But the episode left Mark with a much larger problem, one that cost him more than a decade of contacts, emails and photos and made him the subject of a police investigation. Mark, who asked to be identified only by his first name for fear of potential reputational harm, had been caught in an algorithmic net designed to snare people exchanging child sexual abuse material.

Because tech companies routinely collect so much data, they have been pressed into acting as sentinels, examining what passes through their servers to detect and prevent criminal behavior. Child advocates say the companies' cooperation is essential to combating the rampant spread of sexual abuse imagery online. But it can mean peering into private archives, such as digital photo albums, an intrusion users may not expect, and in at least two cases The Times found, it has cast innocent behavior in a sinister light.

“There could be tens, hundreds, thousands of them,” said Jon Callas, a technologist at the Electronic Frontier Foundation, a digital civil liberties organization.

Given the gruesome nature of the accusations, Callas suggested that most people wrongfully labeled would not publicize what had happened.

“I knew these companies were watching us and that privacy is not something we can hope for,” Mark said. "But I didn't do anything wrong."

The police agreed. Google did not.

"Serious Violation"

After setting up a Gmail account in the mid-2000s, Mark, who is in his 40s, came to rely heavily on Google. He synced appointments with his wife on Google Calendar. His Android camera backed up his photos and videos to Google's cloud. He even had a phone plan with Google Fi.

Two days after he took the photos of his son, Mark's phone made a notification sound: his account had been disabled because of “harmful content” that was “a serious violation of Google's policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse and exploitation.”

Mark was confused at first, but then he remembered his son's illness. Oh God, Google must think this is child porn, he thought.

As it happened, Mark had worked as a software engineer on a large tech company's automated tool for taking down video content flagged as problematic by users. He knew that such systems often have a human in the loop to make sure the computers don't make mistakes, and he assumed his case would be resolved as soon as it reached an actual person.

He filled out a form asking Google to reconsider, explaining his son's infection. At the same time, he discovered the domino effect of Google's refusal. Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son's first years of life, but his Google Fi account was shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn't get the security codes he needed to sign in to other online accounts, locking him out of much of his digital life.

“The more eggs in one basket, the more likely they are to break,” he said.

Google said in a statement: "Child sexual abuse material is disgusting and we are committed to preventing it from being shared on our platforms."

A few days after Mark filed the appeal, Google responded that it would not reinstate the account, offering no further explanation.

Mark didn't know it, but Google's review team had also flagged a video he made, and the San Francisco Police Department had already opened an investigation into him.

How Google tags images

The day after Mark's troubles began, the same scenario was playing out in Texas. A toddler in Houston had an infection in his “private parts,” his father wrote in an online post that a journalist came across while reporting Mark's story. At the pediatrician's request, the father, Cassio, who also asked to be identified only by his first name, used his Android phone to take photos, which were automatically backed up to Google Photos. He then sent them to his wife via Google Chat.

Cassio was in the process of buying a house and signing countless digital documents when his Gmail account was disabled. He asked his mortgage broker to change his email address, which made the broker suspicious until Cassio's real estate agent vouched for him.

“It was a huge headache,” Cassio said.

Images of children being exploited or sexually abused are flagged by the tech giants millions of times a year. In 2021, Google alone filed over 600,000 reports of child abuse material and disabled the accounts of more than 270,000 users as a result. The experiences of Mark and Cassio were drops in that ocean.

The tech industry's first tool to seriously disrupt the vast online trade in so-called child pornography was PhotoDNA, a database of known images of abuse converted into unique digital codes, or hashes. It can be used to quickly scan large numbers of images for matches even if a photo has been only slightly altered. After Microsoft released PhotoDNA in 2009, Facebook and other tech companies used it to root out users who were sharing illegal and harmful images.
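PhotoDNA itself is proprietary, but the underlying idea is straightforward: reduce each image to a compact fingerprint and compare fingerprints rather than pixels. The following minimal sketch in Python uses the open-source imagehash library as a stand-in for Microsoft's algorithm; the hash value and matching threshold are illustrative assumptions, not real PhotoDNA data.

```python
# A minimal sketch of hash-based image matching in the spirit of PhotoDNA.
# The open-source `imagehash` perceptual hash stands in for Microsoft's
# proprietary algorithm; the "known" hash below is a made-up example.
from PIL import Image
import imagehash

# Database of hashes of known abusive images (illustrative value only).
known_hashes = {imagehash.hex_to_hash("d1c4f0e8b2a67f00")}

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Hash the image and compare it against every known hash.
    A small Hamming distance still counts as a match, so resized,
    recompressed or lightly edited copies are caught too."""
    h = imagehash.phash(Image.open(path))
    return any((h - known) <= max_distance for known in known_hashes)
```

Comparing hashes by Hamming distance rather than exact equality is what allows slightly altered copies of a known image to still match.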

“This is an amazing tool,” the president of the National Center for Missing and Exploited Children said at the time.

A bigger breakthrough came nearly a decade later, in 2018, when Google developed an AI tool that could recognize never-before-seen images of child exploitation. This meant looking not only for known images of abused children, but also for images of unknown victims that the authorities could potentially rescue. Google has made its technology available to other companies, including Facebook.

When the photos Mark and Cassio took were automatically uploaded from their phones to Google's servers, this technology flagged them. Jon Callas of the EFF called the scanning intrusive, saying a family photo album on someone's personal device should be a “private realm.” A Google spokesperson said the company scans only when a user takes an “affirmative action,” which includes when the user's phone backs up photos to the company's cloud.

“This is exactly the nightmare that worries us all,” Callas said. “They're going to scan my family album and then I'll be in trouble.”

A human content moderator for Google reviewed the photos after the AI flagged them, confirming that they met the federal definition of child sexual abuse material. When Google makes such a discovery, it locks the user's account, searches for other exploitative material and, as required by federal law, files a report with the CyberTipline at the National Center for Missing and Exploited Children.

The nonprofit has become the clearinghouse for abuse reports; last year it received 29.3 million of them, about 80,000 a day. Fallon McNulty, who runs the CyberTipline, said most of these are previously reported images that continue to circulate online, so her staff of 40 analysts focuses on potential new victims, whose cases they can prioritize for law enforcement.

“Typically, if NCMEC staff review a CyberTipline report and it includes exploitative material that hasn't been seen before, they will escalate it,” McNulty said. “That could be a child who hasn't yet been identified or protected and isn't out of harm's way.”

McNulty said Google's remarkable ability to detect these images, so that her organization can report them to the police for further investigation, was “an example of the system working as it should.”

CyberTipline staff add any new abusive images to the hashed database that technology companies use for scanning. When Mark's wife learned this, she deleted the photos Mark had taken of their son from her iPhone, fearing Apple might flag her account. Apple announced plans last year to scan iCloud Photos for known images of child sexual abuse, but the rollout has been delayed indefinitely amid resistance from privacy groups.

In 2021, the CyberTipline reported that it had alerted authorities to “more than 4,260 potential new child victims.” Among them were the sons of Mark and Cassio.

"There was no crime"

In December 2021, Mark received an envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been under investigation, along with copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was included, had requested everything in Mark's Google account: his web searches, his location history, his messages, and any documents, photos and videos stored with the company.

The search, concerning “child exploitation videos,” had been carried out in February, within a week of his taking the photos of his son.

Mark called the investigator, Nicholas Hillard, who said the case had been closed. Hillard had tried to get in touch with Mark, but his phone number and email address no longer worked.

“I determined that the incident did not meet the criteria for a crime and that there was no crime,” Hillard wrote in his report. The police had access to all of Google's information about Mark and decided that this was not child abuse or exploitation.

Mark asked if Hillard could tell Google that he was innocent so he could get his account back.

“You should talk to Google,” Hillard said, according to Mark. “I can't do anything.”

Mark appealed to Google again, this time attaching the police report, but to no avail. After receiving a notice two months ago that his account was being permanently deleted, Mark spoke with a lawyer about suing Google and how much it might cost.

“I figured it probably wasn't worth $7,000,” he said.

Kate Klonick, a law professor at St. John's University who has written about online content moderation, said it can be tricky “to account for what isn't visible in a photo, like the behavior of the people sharing an image or the intentions of the person taking it.” False positives, where people are mistakenly flagged, are inevitable given the billions of images being scanned. While most people would probably consider that trade-off justified by the benefit of identifying abused children, Klonick said companies need a “robust process” for clearing and reinstating innocent people who are mistakenly flagged.

“This would be problematic if it were just a matter of content moderation and censorship,” Klonick said. “But it's doubly dangerous because it also results in someone being reported to law enforcement.”

It could be worse, she said: a parent could lose custody of a child. “You can imagine how this could escalate,” Klonick said.

Cassio's case was also being investigated by the police. In the fall of 2021, a detective from the Houston Police Department called him and asked him to come to the station.

After Cassio showed the detective his communications with the pediatrician, he was quickly cleared. But he, too, was unable to get his decade-old Google account back, despite being a paying user of Google's web services. He now uses a Hotmail address for email, which people tease him about, and he makes multiple backups of his data.

Everything is ambiguous

Not all photographs of naked children are pornographic, exploitative or abusive. Carissa Byrne Hessick, a law professor at the University of North Carolina who writes about child pornography crimes, said the legal definition of what counts as sexual abuse imagery can be complicated.

But Hessick said she agreed with the police that medical images did not qualify. “There was no child abuse,” she said.

In machine learning, a computer program is trained by being shown “right” and “wrong” examples until it learns to distinguish between the two. To avoid flagging photos of babies in the bath or children running naked through sprinklers, Google's abuse-detection AI was trained both on images of potentially illegal material that Google had found in user accounts in the past and on images that did not indicate abuse, to give it a more precise idea of what to flag.
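As a rough illustration of that training setup (a toy, not Google's actual system), here is a minimal supervised binary classifier in Python; the random feature vectors stand in for learned image embeddings, and all names and numbers are made up for the example.

```python
# Toy version of training a binary classifier on labeled "right" and
# "wrong" examples. Purely illustrative; Google's real system is a deep
# network trained on actual images, not random vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in feature vectors; in practice these would be image embeddings.
positives = rng.normal(loc=1.0, size=(100, 32))   # label 1: should be flagged
negatives = rng.normal(loc=-1.0, size=(100, 32))  # label 0: benign photos

X = np.vstack([positives, negatives])
y = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression().fit(X, y)

# The model outputs a probability; anything above a threshold goes to a
# human reviewer, since false positives are inevitable at this scale.
new_image = rng.normal(size=(1, 32))
print(f"flag probability: {clf.predict_proba(new_image)[0, 1]:.2f}")
```

The decision threshold is the crux: set it low and more innocent photos get sent to human review; set it high and more genuine abuse slips through.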

The decision to flag the photos Mark took was straightforward in one sense: they were explicit photos of a child's genitals. But context matters: they were taken by a parent worried about a sick child.

“We understand that in an era of telemedicine, and Covid in particular, it has been necessary for parents to take photos of their children in order to get a diagnosis,” said Claire Lilley, Google's head of child safety operations. The company has consulted pediatricians, she said, so that its human reviewers understand the conditions that might show up in photographs taken for medical reasons.

Dr. Suzanne Haney, chair of the American Academy of Pediatrics' Council on Child Abuse and Neglect, advised parents not to photograph their children's genitals, even when directed by a doctor.

“The last thing you want is for your child to get used to someone taking pictures of their genitals,” Dr. Haney said. “If you absolutely must, avoid uploading to the cloud and delete them immediately.”

She said most doctors were probably unaware of the risks involved in asking for such photographs.

“I applaud Google for what they're doing,” Dr. Haney said of the company's anti-abuse efforts. “But we have a really terrible problem. Unfortunately, it landed on parents trying to do right by their children.”

Earlier this year, a customer service representative told Cassio that sending photos to his wife using Google Hangouts violated the chat's terms of service. “Do not use Hangouts in any way that could harm children,” the terms read. “Google has a zero-tolerance policy for this content.”

As for Mark, Google's Lilley said reviewers had found no rash or redness in the photos he took, and that a follow-up review of his account turned up a video from six months earlier, which Google also deemed problematic, of a young child lying in bed with an unclothed woman.

Mark didn't remember the video and no longer had access to it, but he said it sounded like a private moment he would have wanted to capture, never imagining it would be viewed or judged by anyone else.

“I remember this moment. We woke up one morning. It was a beautiful day with my wife and son and I wanted to capture the moment,” Mark said. “If only we had slept in our pajamas, all this could have been avoided.”

A Google spokesperson said the company stands by its decisions, even though law enforcement has cleared the two men.

Guilty by default

Hessick, the law professor, said tech companies' cooperation with law enforcement to address and root out child sexual abuse is “incredibly important,” but she believes the process should allow for corrections.


“From Google’s point of view, it’s easier to just ban these people from using their services,” she suggested. Otherwise, the company would have to deal with the more difficult questions of "what behavior is appropriate with children and what is appropriate to photograph and what is not."

Mark still hopes he can get his information back. The San Francisco Police Department is keeping the contents of his Google account on a flash drive, and Mark is now trying to get a copy. A police spokesperson said the department is ready to help him.

