In May, several French and German social media influencers received a strange proposal.
A London-based public relations agency wanted to pay them to promote messages on behalf of a client. A polished three-page document detailed what to say and on which platforms to say it.
But it asked the influencers to push not beauty products or vacation packages, as is typical, but falsehoods tarring Pfizer-BioNTech’s Covid-19 vaccine. Stranger still, the agency, Fazze, claimed a London address where there is no evidence any such company exists.
Some recipients posted screenshots of the offer. Exposed, Fazze scrubbed its social media accounts. That same week, Brazilian and Indian influencers posted videos echoing Fazze’s script to hundreds of thousands of viewers.
The scheme appears to be part of a secretive industry that security analysts and American officials say is exploding in scale: disinformation for hire.
Private firms, straddling traditional marketing and the shadow world of geopolitical influence operations, are selling services once conducted principally by intelligence agencies.
They sow discord, meddle in elections, seed false narratives and push viral conspiracies, mostly on social media. And they offer clients something precious: deniability.
“Disinfo-for-hire actors being employed by government or government-adjacent actors is growing and serious,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, calling it “a boom industry.”
Similar campaigns have recently been found promoting India’s ruling party, Egyptian foreign policy aims and political figures in Bolivia and Venezuela.
Mr. Brookie’s organization tracked one operating amid a mayoral race in Serra, a small city in Brazil. An ideologically promiscuous Ukrainian firm boosted several competing political parties.
In the Central African Republic, two separate operations flooded social media with dueling pro-French and pro-Russian disinformation. Both powers are vying for influence in the country.
A wave of anti-American posts in Iraq, seemingly organic, was traced to a public relations company that was separately accused of faking anti-government sentiment in Israel.
Most trace to back-alley firms whose legitimate services resemble those of a bottom-rate marketer or email spammer.
Job postings and employee LinkedIn profiles associated with Fazze describe it as a subsidiary of a Moscow-based company called Adnow. Some Fazze web domains are registered as owned by Adnow, as first reported by the German outlets Netzpolitik and ARD Kontraste. Third-party reviews portray Adnow as a struggling ad service provider.
European officials say they are investigating who hired Adnow. Sections of Fazze’s anti-Pfizer talking points resemble promotional materials for Russia’s Sputnik-V vaccine.
For-hire disinformation, though only sometimes effective, is growing more sophisticated as practitioners iterate and learn. Experts say it is becoming more common in every part of the world, outpacing operations conducted directly by governments.
The result is an accelerating rise in polarizing conspiracies, phony citizen groups and fabricated public sentiment, degrading the shared sense of reality beyond even the depths of recent years.
The trend emerged after the Cambridge Analytica scandal in 2018, experts say. Cambridge, a political consulting firm linked to members of Donald J. Trump’s 2016 presidential campaign, was found to have harvested data on millions of Facebook users.
The controversy drew attention to methods common among social media marketers. Cambridge used its data to target hyper-specific audiences with tailored messages. It tested what resonated by tracking likes and shares.
The episode taught a generation of consultants and opportunists that there was big money in social media marketing for political causes, all disguised as organic activity.
Some newcomers eventually reached the same conclusion as Russian operatives had in 2016: Disinformation performs especially well on social platforms.
At the same time, backlash to Russia’s influence-peddling appeared to have left governments wary of being caught — while also demonstrating the power of such operations.
“There is unfortunately a huge market demand for disinformation,” Mr. Brookie said, “and a lot of places across the ecosystem that are more than willing to fill that demand.”
Commercial firms conducted for-hire disinformation in at least 48 countries last year, nearly double the number from the year before, according to an Oxford University study. The researchers identified 65 companies offering such services.
Last summer, Facebook removed a network of Bolivian citizen groups and a journalistic fact-checking organization. It said the pages, which had promoted falsehoods supporting the country’s right-wing government, were fake.
Stanford University researchers traced the content to CLS Strategies, a Washington, D.C.-based communications firm that had registered as a consultant with the Bolivian government. The firm had done similar work in Venezuela and Mexico.
A spokesman referred to the company’s statement last year saying its regional chief had been placed on leave but disputed Facebook’s accusation that the work qualified as foreign interference.
Eroding Reality
New technology enables nearly anyone to get involved. Programs batch generate fake accounts with hard-to-trace profile photos. Instant metrics help to hone effective messaging. So does access to users’ personal data, which is easily purchased in bulk.
The campaigns are rarely as sophisticated as those by government hackers or specialized firms like the Kremlin-backed Internet Research Agency.
But they appear to be cheap. In countries that mandate campaign finance transparency, firms report billing tens of thousands of dollars for campaigns that also include traditional consulting services.
The layer of deniability frees governments to sow disinformation more aggressively, at home and abroad, than might otherwise be worth the risk. Some contractors, when caught, have claimed they acted without their client’s knowledge or only to win future business.
Platforms have stepped up efforts to root out coordinated disinformation. Analysts especially credit Facebook, which publishes detailed reports on campaigns it disrupts.
Still, some argue that social media companies also play a role in worsening the threat. Engagement-boosting algorithms and design elements, research finds, often privilege divisive and conspiratorial content.
Political norms have also shifted. A generation of populist leaders, like Rodrigo Duterte of the Philippines, has risen in part through social media manipulation. Once in office, many institutionalize those methods as tools of governance and foreign relations.
In India, dozens of government-run Twitter accounts have shared posts from India Vs Disinformation, a website and set of social media feeds that purport to fact-check news stories on India.
India Vs Disinformation is, in reality, the product of a Canadian communications firm called Press Monitor.
Nearly all the posts seek to discredit or muddy reports unfavorable to Prime Minister Narendra Modi’s government, including on the country’s severe Covid-19 toll. An associated site promotes pro-Modi narratives under the guise of news articles.
A Digital Forensic Research Lab report investigating the network called it “an important case study” in the rise of “disinformation campaigns in democracies.”
A representative of Press Monitor, who would identify himself only as Abhay, called the report completely false.
He specified only that it incorrectly identified his firm as Canada-based. Asked why the company lists a Toronto address and a Canadian tax registration, why it describes itself as “part of Toronto’s thriving tech ecosystem,” and why he had been reached on a Toronto phone number, he said that he had business in many countries. He did not respond to an email asking for clarification.
A LinkedIn profile for Abhay Aggarwal identifies him as the Toronto-based chief executive of Press Monitor and says that the company’s services are used by the Indian government.
‘Spamouflage’
A set of pro-Beijing operations hint at the field’s capacity for rapid evolution.
Since 2019, Graphika, a digital research firm, has tracked a network it nicknamed “Spamouflage” for its early reliance on spamming social platforms with content echoing Beijing’s line on geopolitical issues. Most posts received little or no engagement.
In recent months, however, the network has developed hundreds of accounts with elaborate personas. Each has its own profile and posting history that can seem authentic. They appear to come from many different countries and walks of life.
Graphika traced the accounts back to a Bangladeshi content farm that created them in bulk and likely sold them to a third party.
The network pushes strident criticism of Hong Kong democracy activists and American foreign policy. By coordinating without seeming to, it created an appearance of organic shifts in public opinion — and often won attention.
The accounts were amplified by a major media network in Panama, prominent politicians in Pakistan and Chile, Chinese-language YouTube pages, the left-wing British commentator George Galloway and a number of Chinese diplomatic accounts.
A separate pro-Beijing network, uncovered by a Taiwanese investigative outlet called The Reporter, operated hundreds of Chinese-language websites and social media accounts.
Disguised as news sites and citizen groups, they promoted Taiwanese reunification with mainland China and denigrated Hong Kong’s protesters. The report found links between the pages and a Malaysia-based start-up that offered web users Singapore dollars to promote the content.
But governments may find that outsourcing such shadowy work also carries risks, Mr. Brookie said. For one, the firms are harder to control and might veer into undesired messages or tactics.
For another, firms organized around deceit may be just as likely to turn those energies toward their clients, bloating budgets and billing for work that never gets done.
“The bottom line is that grifters are going to grift online,” he said.