
Social media firms defend AI’s effectiveness in spotting terrorism

At a Senate Commerce Committee hearing on Wednesday, top social media firms stressed the increasing effectiveness of artificial intelligence (AI) tools at spotting posts from terrorists.

Executives from Facebook, Twitter and YouTube offered the assurances in their testimony, CNN reports.

Monika Bickert, Facebook's head of policy and counterterrorism, said 99 per cent of terrorism content from ISIS and Al Qaeda is detected and removed before any user reports it, thanks to automated tools such as photo and video matching.
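Photo matching of this kind generally works by reducing known extremist imagery to compact perceptual hashes and screening new uploads against that list. The Python sketch below illustrates the general idea only; the hashing scheme, file names and threshold are illustrative assumptions, not a description of Facebook's actual system.

```python
# A minimal sketch of hash-based photo matching, the kind of automated tool
# described in the hearing. The hashing scheme, file names and threshold are
# illustrative assumptions, not Facebook's actual implementation.
from PIL import Image  # Pillow


def average_hash(path, size=8):
    """Shrink the image to a small grayscale grid and encode each pixel as
    one bit: 1 if it is brighter than the grid's mean, 0 otherwise."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)


def hamming_distance(h1, h2):
    """Count the bits on which two hashes differ."""
    return bin(h1 ^ h2).count("1")


# Hypothetical workflow: hash known terrorist imagery once, then screen every
# new upload against that list before other users can see it.
known_hashes = {average_hash("known_propaganda.jpg")}


def is_known_terror_image(upload_path, threshold=5):
    upload_hash = average_hash(upload_path)
    return any(hamming_distance(upload_hash, h) <= threshold for h in known_hashes)
```

Because near-duplicates differ by only a few bits, a small Hamming-distance threshold can catch re-uploads that have been resized or lightly edited, which is what lets such systems act before any user files a report.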

Juniper Downs, YouTube's director of public policy, noted that 98 per cent of violent extremist videos removed from the service are now identified by algorithms, up from 40 per cent last June.

Some members of the Senate committee, however, expressed scepticism.

Senator Bill Nelson, a Democrat representing Florida, described the use of AI tools for "screening out most of the bad guys' stuff" as "encouraging," but "not quite enough."

"These platforms have created a new and stunningly effective way for nefarious actors to attack and to harm," Nelson said.

That sense of concern was amplified by Clint Watts, a senior fellow at George Washington University's Center for Cyber and Homeland Security, who joined the three executives in testifying before the committee.

"Social media companies continue to get beat in part because they rely too heavily on technologists and technical detection to catch bad actors," Watts said in prepared remarks. AI and other technical solutions "will greatly assist in cleaning up nefarious activity, but will for the near future, fail to detect that which hasn't been seen before."

The Facebook and YouTube executives each pointed to previously announced plans to recruit thousands of additional workers to review content across their platforms, including extremist content.

Facebook, Google and Twitter were grilled by Congress late last year in a series of hearings on how foreign nationals used social media to meddle in the 2016 election by spreading misinformation and trying to sow discord among voters.

The hearing on Wednesday focused on domestic and international terrorist activity on the platforms. But some senators also resumed questioning the companies about Russian propaganda, fake news and transparency in political advertising, with an eye toward the midterm elections.

Carlos Monje Jr, Twitter's director of public policy and philanthropy, said the company is still working to notify users who were exposed to content from a troll farm with links to the Kremlin.

Monje was also grilled by Sen. Brian Schatz, a Democrat from Hawaii, about Twitter's struggle to crack down on fake accounts, and Monje admitted that the users behind them "keep coming back."

"Based on your results, you're not where you need to be for us to be reassured that you're securing our democracy," said Schatz, a Democrat representing Hawaii. "How can we know that you're going to get this right and before the midterms?"
