Research report explains how adtech supercharges political deceit, allowing even bumblers to be master propagandists

A new report from the New America Foundation uses the current fear that Russian government elements manipulated the 2016 US election to explore the relationship between advertising technology, surveillance capitalism, and "precision propaganda," showing how the tool suite developed for the advertising industry can be readily repurposed by even modestly competent actors to run disinformation campaigns.


The report, Digital Deceit: The Technologies Behind Precision Propaganda on the Internet, is an excellent companion to Cracked Labs's deep dive into "corporate surveillance" technology, showing how the ability to segment, discover and target audiences receptive to messages, manipulate search results, and track individuals across different devices and services can be used for selling anything, including political ideology, whether or not it is grounded in falsehood.


The authors, Dipayan Ghosh and Ben Scott, are admirably neutral on the question of whether the American elections were determined by Russian interference, saying that it doesn't matter whether you believe it — the tactics they lay out work just as well for domestic disinformation campaigns as they do for foreign ones, and US history is full of unethical politicking by mainstream and fringe elements who used media manipulation to serve their goals. If you care about the future of politics, you should be paying attention to this.


The connection between adtech and politics is obvious, of course: the project of getting someone to care about something is central to marketing, publicity, creating art, fomenting religious movements, success in electoral politics, shifting social and sexual mores, and much else in human endeavor. Any tool that makes it easier to do any of these things ripples out to all of them.


The authors touch very briefly on the question of monopoly and anti-trust, and I think this is a mistake. The reason privacy laws are so lax is that surveillance businesses are highly concentrated, and able to work together effectively to lobby for laws that benefit them (even if they harm the rest of us). If, for example, companies could be held liable for statutory damages in the event of a data breach, there would be a lot less data collection, and even less of that data would be retained.

Likewise, the reason that platform companies are able to pile up such deep dossiers on us is that they've been allowed to monopolize their market segments, so that the great bulk of internet pages and mobile apps feature some kind of Facebook and Google integration, allowing both companies to gather user-data, even from people who never signed up to use their services. A fragmented industry of smaller, competing players would not be able to compile such comprehensive files on our use of technology.

Any critique of internet-driven propaganda that doesn't include some critique of late-stage capitalism's role in allowing for consolidation and dominance by a few players will never be complete. It may be that New America — despite its excellent work and well-earned reputation — is incapable of going there, though, given that the institution recently purged its most prominent anti-trust researchers when Google threatened to withdraw funding over their criticism of its businesses.

The other missing piece in this puzzle is regression to the mean: the idea that the persuasion techniques that work today owe much of their efficacy to novelty, and lose it as the novelty wears off. It may be that there is an endless well of new persuasion techniques that A/B splitting and machine learning will discover, but it's also possible that, having picked all the low-hanging fruit, these technologies will find themselves climbing exponentially difficult landscapes, where each new gain is twice as hard to attain as the last, and will thus dead-end after a couple of iterations. Because we don't have a good, widely accepted theoretical basis for understanding how these persuasion techniques work, any statement about their longevity (or short life) is based more on faith than on evidence.
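To make that "twice as hard each time" scenario concrete, here is a toy calculation of my own (the numbers are invented, not drawn from the report): if each optimization round delivers half the lift of the one before it, the cumulative improvement is bounded and the campaign plateaus within a handful of rounds.

```python
# Toy arithmetic (my own illustration, not the report's): if each optimization
# round yields half the lift of the previous one, the total converges quickly.
first_gain = 4.0  # hypothetical lift, in percentage points, from the first round

cumulative = 0.0
for round_number in range(1, 11):
    gain = first_gain / (2 ** (round_number - 1))
    cumulative += gain
    print(f"round {round_number}: +{gain:.3f} pts, cumulative {cumulative:.3f} pts")

# The geometric series is bounded by 2 * first_gain (8.0 here), so most of the
# achievable improvement arrives in the first few rounds and then plateaus.
```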


To inform and support this important public debate, this paper analyzes the technologies of digital advertising and marketing in order to deepen our understanding of precision propaganda.

* Behavioral data tracking. The lifeblood of digital advertising and marketing is data. An array of technologies operates unseen by the user, capturing every click, purchase, post and geolocation. This data is aggregated, connected with personal identifiers, and built into consumer profiles. This data helps disinformation operators derive the precisely targeted audiences that respond to particular messages. (A rough sketch of this aggregation step follows the list.)

* Online ad buying. The market for targeted online advertising drives sentiment change and persuasion. The most sophisticated systems enable automated experimentation with thousands of message variations paired with profiled audience segments. This precision advertising helps disinformation to reach and grow responsive audiences and to drive popular messages into viral phenomena. (A sketch of such an experimentation loop also follows the list.)

* Search engine optimization (SEO). There is a multi-billion dollar industry dedicated to optimizing search engine results by reverse engineering the Google search page rank algorithm. Disinformation operators use techniques known as "black hat SEO" to trick the algorithm and dominate search results for a few hours of the news cycle before Google corrects the distortion.

* Social media management services (SMMS). A new kind of digital marketing company sits at the intersection of machine learning algorithms and advertising technology. The SMMS offers advertisers a fully-integrated solution that pre-configures messages for different target audiences across multiple media channels simultaneously and automatically. It is a finely tuned disinformation machine for the precision propagandist.

* Artificial intelligence in marketing. Machine learning algorithms are already integrated into targeted advertising platforms and complex data analytics. Stronger forms of AI will be available in the near term. These advances will greatly increase the potency of disinformation operations by enhancing the effectiveness of behavioral data tracking, audience segmentation, message targeting/testing, and systemic campaign management.
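As an illustration of the first item, behavioral data tracking, here is a minimal sketch assuming a hypothetical event stream keyed to a persistent identifier; the field names and IDs are invented, and this is my own toy aggregation, not code from the report or from any particular adtech platform.

```python
from collections import defaultdict

# Hypothetical event stream: each record carries a persistent identifier
# (e.g. a cookie or device ID) plus whatever the tracker observed.
events = [
    {"uid": "abc-123", "type": "click",    "detail": "ad_housing_reform"},
    {"uid": "abc-123", "type": "geo",      "detail": "austin_tx"},
    {"uid": "abc-123", "type": "purchase", "detail": "outdoor_gear"},
    {"uid": "xyz-789", "type": "post",     "detail": "local_school_board"},
]

# Fold scattered observations into per-user profiles: the move the report
# describes as data being "aggregated, connected with personal identifiers,
# and built into consumer profiles".
profiles = defaultdict(lambda: defaultdict(list))
for event in events:
    profiles[event["uid"]][event["type"]].append(event["detail"])

for uid, profile in profiles.items():
    print(uid, dict(profile))
```

The point is only that once disparate observations share an identifier, assembling a profile is a trivial grouping operation; the hard part is the collection, not the assembly.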
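For the "online ad buying" item, here is a deliberately simplified sketch of automated message experimentation, written as an epsilon-greedy test of my own construction; the segments, message frames, and response rates are all invented for illustration, and real ad platforms use far more elaborate machinery.

```python
import random

# Hypothetical message variants and audience segments.
variants = ["fear_frame", "anger_frame", "hope_frame"]
segments = ["suburban_parents", "rural_retirees"]

# Invented "true" response rates, standing in for real-world engagement.
true_rates = {
    ("suburban_parents", "fear_frame"):  0.02,
    ("suburban_parents", "anger_frame"): 0.05,
    ("suburban_parents", "hope_frame"):  0.03,
    ("rural_retirees",   "fear_frame"):  0.06,
    ("rural_retirees",   "anger_frame"): 0.04,
    ("rural_retirees",   "hope_frame"):  0.01,
}

def run_campaign(segment, impressions=20000, epsilon=0.1):
    """Epsilon-greedy loop: mostly show the best-performing variant so far,
    occasionally explore the others, and keep per-variant click statistics."""
    shown = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}
    for _ in range(impressions):
        if random.random() < epsilon or not any(shown.values()):
            choice = random.choice(variants)
        else:
            choice = max(variants, key=lambda v: clicks[v] / shown[v] if shown[v] else 0.0)
        shown[choice] += 1
        if random.random() < true_rates[(segment, choice)]:
            clicks[choice] += 1
    return max(variants, key=lambda v: clicks[v] / shown[v] if shown[v] else 0.0)

for segment in segments:
    print(segment, "->", run_campaign(segment))
```

A real platform runs this kind of loop across thousands of variants and segments at once, which is roughly what the report means by pairing message variations with profiled audience segments.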


Digital Deceit: The Technologies Behind Precision Propaganda on the Internet [Dipayan Ghosh and Ben Scott/New America Foundation]