THE POLITICAL FIGHT BEHIND FACEBOOK AND GOOGLE’S NEW TERRORIST CONTENT DATABASE
This week, Facebook, Google, Microsoft, and
Twitter announced a new database for images and videos that promote terrorism.
The database, which is hosted by Facebook, is designed as a defense against
propaganda videos and imagery by terrorist groups — a tactic ISIS has aggressively embraced.
Flag a single image on
a single service, and any of the participating companies will be able to find
and remove copies, potentially erasing it from the most popular places on the
web in a single stroke. But the implementation of the database is far more
fraught, the result of a complex and ongoing negotiation between tech companies
and European governments looking to rein them in.
While none of the companies explicitly acknowledge the connection, the database
appears to be the result of a code of conduct concerning hate speech, instituted by the European Commission and
signed by the same four companies in May. The language of the agreement is
remarkably specific, requiring companies to “encourage the provision of notices
and flagging of content” that incites violence or hate, particularly through
inter-corporate partnerships. It also requires companies to review the majority
of hateful content within 24 hours of being notified, and remove it if
necessary. That’s a high bar for a network the size of Facebook or YouTube, but
it’s the price of staying on Europe’s good side.
“[The database] is definitely responsive to what’s going on in Europe,” says Danielle
Citron, a law professor at the University of Maryland, who specializes in cyber
law and harassment. “What you’re finding is just a manifestation of this code
of conduct from May, coupled with pressure from the United Kingdom and European
Union.”
Web companies also face significant pressure from courts, as prosecutors and
plaintiffs aim to hold companies responsible for the persistence of hate groups
online. Earlier this month, German prosecutors opened an investigation into Mark Zuckerberg for failing to ban hate speech on Facebook, and the company is facing a similar civil suit in the US over Hamas’s use of the network. Twitter has fended off a number of similar lawsuits by appealing to the safe harbor provision of US law (Section 230 of the Communications Decency Act), which has no clear equivalent in EU or UK law.
While the new database shows companies are taking European concerns seriously, it
makes few changes to how services actually operate. The database won’t change
company policies on what gets banned, and it doesn’t change how companies scan
internally for violations of those policies. The immediate effect of the
database is simply to give each individual ban a broader reach. When Twitter
finds an ISIS video, it can now forward the video to Microsoft, Google, and
Facebook through the database, instead of simply blocking it on a single
service and moving on. Crucially, each company will still independently assess
whether a video violates its terms, and a video won’t be automatically taken
down simply because another company flagged it.
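A rough sketch of that workflow, in Python, might look like the following. The companies have not published implementation details, so every name here (SharedDatabase, Company, violates_terms) is hypothetical; the point is only that a flag from one participant becomes a report to the others, each of which applies its own policy.

    # Hypothetical sketch of the shared-flag workflow described above.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class FlaggedItem:
        fingerprint: str   # hash of the image or video
        submitted_by: str  # company that first flagged it

    @dataclass
    class SharedDatabase:
        items: List[FlaggedItem] = field(default_factory=list)

        def submit(self, item: FlaggedItem) -> None:
            # One company forwards a flag to all participants.
            self.items.append(item)

    @dataclass
    class Company:
        name: str
        # Each company supplies its own policy test; a peer's flag is a
        # report, not an automatic takedown.
        violates_terms: Callable[[FlaggedItem], bool]

        def should_remove(self, item: FlaggedItem) -> bool:
            # Independent assessment: remove only if this company's own
            # terms of service are violated.
            return self.violates_terms(item)

In this sketch, a takedown at Twitter would call SharedDatabase.submit, and each of the other companies would then run should_remove under its own violates_terms predicate, matching the article’s point that nothing comes down automatically.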
The biggest change is that companies will have access to more reports on
potentially ban-worthy content — but companies also have broad discretion over
what to do with those reports. Google’s policy is to scan for flagged images on
YouTube, but not in private storage services like Google Drive or in Gmail
attachments. Microsoft has taken a different approach, forgoing third-party reports entirely: it will submit its own flagged content to the database, but it won’t conduct any new scans based on other companies’ flags. That’s in keeping with Microsoft’s policy of only investigating content
that has been flagged through the company’s own reporting tool. Reached for
comment, Facebook and Twitter declined to clarify their scanning policies
concerning information received from the database.
That ambiguity has raised alarms with some outside groups, who see the policies as
arbitrary and largely decided in secret. The new system gives little sense of
how companies will decide what qualifies as terrorist content and what rights
users will have in that process. “It’s highly likely that much of the content
in this database will be lawful speech in the US,” the Center for Democracy and
Technology argued in a statement after the announcement.
“Without a bright line denoting what can – and cannot – be submitted to the
database, the terms of the agreement are vulnerable to mission creep.” The statement
also calls for a clearer way to appeal erroneously flagged content, a mechanism
that’s notably missing from the current incarnation of the database.
This is a familiar problem for tech platforms, which have been struggling with moderation questions for
nearly a decade now. There are almost no legal restrictions on how tech
companies can moderate speech — the First Amendment doesn’t extend to private
platforms — but internet companies have generally seen heavy-handed moderation
as expensive and unappealing to users. The most notable exceptions are child
pornography, covered by an informal law enforcement partnership, and copyright-protected
content, which is regulated under the Digital Millennium Copyright Act. The new database is heavily influenced by both systems, using a similar hash-matching approach: companies share compact digital fingerprints of flagged images rather than the images themselves, and compare new uploads against those fingerprints.
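To make the hash idea concrete, here is a toy version in Python. The real systems reportedly use perceptual fingerprints (Microsoft’s PhotoDNA, used in the child-protection partnership mentioned above, is the best-known example) that tolerate resizing and re-encoding; the cryptographic hash below is only a stand-in and matches byte-identical copies.

    # Toy illustration of hash-based matching: only fingerprints are
    # shared between companies, never the underlying imagery.
    import hashlib

    def fingerprint(data: bytes) -> str:
        # Compute a compact fingerprint for an image or video file.
        return hashlib.sha256(data).hexdigest()

    # Fingerprints contributed by participating companies.
    shared_hashes = set()

    def flag(data: bytes) -> None:
        shared_hashes.add(fingerprint(data))

    def is_known_flagged_content(data: bytes) -> bool:
        # Check an upload against the shared database.
        return fingerprint(data) in shared_hashes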
Still, terrorism enforcement online remains wildly inconsistent between services, and it’s unclear how much banning accounts actually accomplishes. Twitter has been particularly aggressive, suspending more than 360,000 ISIS-linked accounts in the past 18 months, a campaign that observers have noticed in the field.
Jade Parker, who monitors terrorist accounts as a senior research associate at
TAPSTRI, says the main effect is to shorten the half-life of a given account.
“On Facebook, ISIS accounts get suspended within a couple days, maybe a week,”
says Parker, “as opposed to Twitter, where a single ISIS user will get banned
as many as eight times in a day.”
Parker is skeptical that the new measures will make it significantly harder for
terrorist groups to recruit online, particularly given the presence of offshore
companies that don’t respond to takedown notices. “What the
database is going to do, hopefully, is shrink their sphere of influence to
companies that aren’t involved like Telegram,” says Parker. “That’s good in
terms of the reach of their message, but all anyone has to do is create a
Telegram account.”