Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate. As a direct result, starting today, people who have broken certain rules on Facebook — including our Dangerous Organizations and Individuals policy — will be restricted from using Facebook Live […]
Today we are tightening the rules that apply specifically to Live. We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.
The vagueness stems from the fact that Facebook doesn't list the specific violations that will trigger the Facebook Live restrictions, nor does it specify the length of the time-outs that will be imposed, offering 30 days only as an example.
That example too seems ridiculously weak. Someone shares a statement from a terrorist group and 31 days later can use Facebook Live?
The company does acknowledge the difficult balancing act between freedom of expression and the prevention of abuse.
We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook. Our goal is to minimize risk of abuse on Live while enabling people to use Live in a positive way every day.
However, while governments have to act with enormous care in restricting freedoms in the name of safety, access to Facebook Live is hardly a basic human right. It seems to me that erring on the side of caution here would be more appropriate.
On a more positive note, Facebook is investing in new research to help it automatically detect and block modified versions of banned videos – a problem shared by YouTube.
One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People — not always intentionally — shared edited versions of the video, which made it hard for our systems to detect.
The social network is partnering with the University of Maryland, Cornell University, and the University of California, Berkeley, on the AI research.
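Facebook hasn't published how this matching will work, but one common family of techniques for recognizing edited copies of media is perceptual hashing, where visually similar frames produce hashes that differ in only a few bits. The sketch below is purely illustrative and not Facebook's actual method; the `average_hash` and `hamming_distance` functions are hypothetical helpers written for this example.

```python
# Illustrative sketch of perceptual hashing ("average hash"), one common way
# to match edited copies of media. This is NOT Facebook's actual method,
# which is unpublished; it only shows the general idea: a re-encoded or
# slightly edited copy still hashes close to the banned original, while a
# genuinely different frame does not.

def average_hash(pixels):
    """Hash an 8x8 grayscale frame (list of 64 ints, 0-255) to a 64-bit int.

    Each bit is 1 if the pixel is brighter than the frame's mean, so the
    hash captures the coarse light/dark layout rather than exact bytes.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same content."""
    return bin(h1 ^ h2).count("1")

original = [i * 4 for i in range(64)]              # a toy 8x8 frame
brightened = [min(255, p + 10) for p in original]  # mild edit (brightness shift)
inverted = [255 - p for p in original]             # a genuinely different frame

d_edit = hamming_distance(average_hash(original), average_hash(brightened))
d_diff = hamming_distance(average_hash(original), average_hash(inverted))
print(d_edit, d_diff)  # the edited copy stays close; the different frame does not
```

The hard part, and presumably the focus of the university research, is making such fingerprints robust to adversarial edits like cropping, re-filming a screen, or overlaying text, which simple schemes like this one do not survive.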