Facebook has expanded on comments CEO Mark Zuckerberg made about the company’s child protection plans when it switches to end-to-end encryption for its messaging apps as part of a belated focus on privacy.
In March, Zuckerberg announced that private messages in both Facebook Messenger and Instagram would in future use end-to-end encryption, so that not even Facebook would have access. That led to the US attorney general and acting head of Homeland Security writing to Zuckerberg urging him to abandon this plan, in an open letter co-signed by the UK’s secretary of state for the Home Office and Australia’s minister for Home Affairs.
In addition to the usual terrorism justification for opposing strong encryption, the letter cited child safety as an additional reason.
[Strong encryption] puts our citizens and societies at risk by severely eroding a company’s ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries’ attempts to undermine democratic values and institutions, preventing the prosecution of offenders and safeguarding of victims. It also impedes law enforcement’s ability to investigate these and other serious crimes. Risks to public safety from Facebook’s proposals are exacerbated in the context of a single platform that would combine inaccessible messaging services with open profiles, providing unique routes for prospective offenders to identify and groom our children.
Zuckerberg responded by saying there were other methods of detecting and preventing child grooming, without any need for access to message content.
Facebook’s child protection plans
The Financial Times reports that Facebook has now expanded on this, saying that prevention was better than detection.
Antigone Davis, Facebook’s global head of safety, said the company was looking at how to ensure the safety of children while pressing ahead with the move to encryption. She said Facebook’s goal was to shift from flagging and removing illegal content to preventing abusers from contacting potential victims in the first place.
“When you find content, the problem with that is the harm has already been done. Ultimately you want to prevent that content from being shared in the first place, or from being created,” Ms Davis said. “So the way we are thinking about it is, how can we stop these connections?”
She said Facebook could look at user profiles and flag someone making a series of requests to minors they do not know, or people who are part of suspicious groups. She added that the company could also scan comments on photographs to flag patterns of bad behavior.
Other alerts could include large age gaps between people communicating privately on Messenger or Instagram Direct Messages, frequency of messaging, and people that lots of users are blocking or deleting, she added.
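The signals Davis describes are all metadata, not message content: who is contacting whom, how often, and how other users react. A minimal sketch of how such heuristics might be combined is below; this is purely illustrative, and every field name and threshold is an assumption, not a description of Facebook's actual system.

```python
# Hypothetical sketch of metadata-based risk heuristics of the kind Davis
# describes. No message content is inspected; only account and interaction
# metadata is scored. All field names and thresholds are illustrative.

from dataclasses import dataclass
from typing import List

@dataclass
class Interaction:
    sender_age: int
    recipient_age: int
    messages_sent: int          # messages in the observation window
    recipient_is_contact: bool  # whether the recipient knows the sender
    sender_block_count: int     # how many users have blocked the sender

def risk_signals(i: Interaction) -> List[str]:
    """Return the heuristic flags an interaction trips, if any."""
    flags = []
    if i.recipient_age < 18 and not i.recipient_is_contact:
        flags.append("unsolicited contact with a minor")
    if i.recipient_age < 18 and i.sender_age - i.recipient_age >= 10:
        flags.append("large age gap")
    if i.messages_sent > 50:
        flags.append("high message frequency")
    if i.sender_block_count > 5:
        flags.append("frequently blocked sender")
    return flags

example = Interaction(sender_age=45, recipient_age=14, messages_sent=80,
                      recipient_is_contact=False, sender_block_count=7)
print(risk_signals(example))
```

An interaction tripping several flags at once could then be escalated for review or blocked outright, all without decrypting a single message, which is the point Zuckerberg and Davis are making.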
Apple has been one of a number of tech giants to oppose government attacks on end-to-end encryption, which is used by the company’s Messages and FaceTime apps. This includes a letter rejecting a call by UK security services for a so-called “ghost proposal” to secretly add law enforcement agents to private chats.