
Social media plays whack-a-mole with Russia interference


FILE - In this March 29, 2018, file photo, the logo for Facebook appears on screens at the Nasdaq MarketSite in New York's Times Square. Facebook is spending heavily to avoid a repeat of the Russian interference that played out on its service in 2016. Its adversaries are wily, more adept at camouflaging themselves and apparently aren’t always detectable by Facebook’s much-vaunted artificial intelligence systems. (AP Photo/Richard Drew, File)

Facebook is spending heavily to avoid a repeat of the Russian interference that played out on its service in 2016, bringing on thousands of human moderators and advanced artificial intelligence systems to weed out fake accounts and foreign propaganda campaigns.

But it may never get the upper hand. Its adversaries are wily, more adept at camouflaging themselves and apparently aren’t always detectable by Facebook’s much-vaunted AI. They employ better operational security, constantly test Facebook’s countermeasures and then exploit whatever holes they find.

“They’ve got lots of very good, smart technical people, who are assessing the situation all the time and gaming the system,” said Mike Posner, a former U.S. diplomat who directs New York University’s Stern Center for Business and Human Rights.

With the U.S. midterm elections approaching and renewed scrutiny on Capitol Hill, Facebook revealed this week that it has uncovered and removed 32 apparently fake accounts and pages. The accounts appear designed to manipulate Americans’ political opinions using tactics similar to those adopted ahead of the 2016 presidential election on social-media services, including Facebook, Instagram, Twitter, YouTube, Tumblr and Reddit.

This time, however, whoever is responsible is doing a better job hiding their tracks. They are buying ads with U.S. or Canadian dollars, not rubles, and using virtual private networks and other methods to look more like people logging in from U.S. locations.

“Offensive organizations improve their techniques once they have been uncovered,” Facebook Chief Security Officer Alex Stamos wrote in a blog post Tuesday. That also makes it harder to know who Facebook’s current adversaries are.

“Because the 2016 operation was widely seen as a success, it means a number of other players are likely entering the field,” said Thomas Rid, a professor of strategic studies at the Johns Hopkins University who is writing a book about 20th century disinformation efforts.

Much like during the Cold War — when Soviet agents once pretended to be the Ku Klux Klan to stoke racial division — the strategy remains to “strengthen the fringes, boosting the far right extremists and far left extremists at the same time,” Rid said.

Facebook has not said who’s responsible for the latest influence campaign. The fake accounts, however, resemble those created from 2014 through 2016 by the Internet Research Agency, a so-called troll farm based in St. Petersburg, Russia. In February, U.S. special counsel Robert Mueller indicted 13 people associated with the IRA for plotting to disrupt the 2016 election.

The Atlantic Council, a Washington-based think tank that works with Facebook to analyze disinformation around elections worldwide, examined eight of the 32 pages and accounts a day before Facebook shut them down. While researchers found the pages left “few clues to their identities” compared with the Russian accounts Facebook shut down in April, they noticed that the posts more often avoided English text in favor of memes and other graphics.

Such text can yield telltale grammatical errors common to Russian speakers, and some cropped up in the posts that did use text, including mistakes in singular and plural verb agreement and the misuse of articles like “a” and “the.”

The Council’s Digital Forensic Research Lab found many of the accounts were similar to IRA pages in their approach, tactics, language and content — in particular, the targeting of specific demographics like feminists, blacks, Latin Americans, and anti-Trump activists.

“It is becoming clearer that IRA activity represents just a small fraction of the total Russian effort on social media,” said Democratic Sen. Mark Warner, speaking Wednesday at a Senate Intelligence Committee hearing. “In reality, the IRA operatives were just the incompetent ones who made it easy to get caught.”

Experts, meanwhile, warn that Facebook’s AI tools aren’t a panacea. The tools can help human moderators identify posts that warrant a closer look, but they can’t do the job themselves.

“A couple thousand moderators are all going to have slightly different criteria that they spot,” said Joanna Bryson, a computer scientist at the University of Bath. “It’s not quite as easy to sneak by as it is with a single algorithm.”

Miles Brundage, a research fellow at Oxford University’s Future of Humanity Institute, says any Facebook AI is in for a “cat and mouse game of evasion and detection” with adversaries who can try different techniques until they find something that works.

Facebook, which last year said IRA-connected accounts generated 80,000 posts that could have reached 126 million people, isn’t the only social-media network that’s been targeted by Russians. Twitter told Congress last October that it shut down more than 2,700 accounts linked to the IRA, but only after they put out 1.4 million election-related tweets.

Google likewise said it found two accounts linked to the Russian group that bought almost $5,000 worth of ads during the 2016 election, as well as 18 YouTube channels likely backed by Russian agents.

For the moment, however, Facebook is alone in disclosing additional problems. Google did not immediately respond when asked whether it had discovered any further influence efforts. Twitter had no comment, and in a statement, Reddit dodged the question, saying only that it has always had measures in place to “prevent or limit” malicious actions.

In general, tech companies have been reluctant to share everything — or anything — they find with the public, even as they work behind the scenes with law enforcement and intelligence officials.