Amazon is building out its ad-targeting program to allow for ad-buys like "people near a physiotherapist's office who've bought a knee-brace," and reports that the ads are incredibly successful.
What's interesting about Amazon's reference customers for the efficacy of their program is how trivial they are: figuring out that knee-brace buyers who live near a physio might patronize that physio is not exactly rocket surgery. There are a ton of these "signals" that are probably a good predictor of future actions: if you buy a baseball bat and ball, you might travel to a nearby park to meet some friends in the near future; if you look up car ratings, you might buy a car soon; and so on.
The insights are trivial: what's sophisticated is the targeting after the insights. In the age of paper car-buyers' guides, the only way to advertise to potential car-shoppers was to buy ad space in those guides. You couldn't — for example — know when people were talking about buying a car with friends and show them an ad there. Surveillance capitalism has bequeathed upon us vastly improved means of identifying people who have some rare, widely diffused trait (say, people who recently called to price out a refrigerator repair and might be in the market to buy a new fridge, something the average person does between zero and three or four times in their entire life).
But what surveillance capitalists do is conflate their ability to identify people who share a trait with their ability to identify traits to look for in people. They imply — or even claim outright — that they can use machine learning to find nonobvious, even inexplicable signals that are precursors of some rare behavioral event.
That's the story of the rise and fall of Google Flu Trends: first, Google claimed that the search histories of people who were later discovered to have the flu revealed nonobvious queries that were reliable predictors of future influenza outbreaks. But Google was wrong: it was a statistical mirage.
Some big data/surveillance capitalism companies go even further and say that they can use the same machine-learning techniques to actually trick or coerce people into behaving in ways they would never otherwise have contemplated: Cambridge Analytica famously claimed that it could turn normal, reasonable people into Donald Trump voters by showing them carefully tailored Facebook posts.
But in retrospect, it seems that what Cambridge Analytica did was figure out how to find racists and convince them that Donald Trump would be a good president. It was a terrible outcome, sure, but way less worrying than if Cambridge Analytica had actually invented a Facebook-based mind-control ray.
The thing is that advertising has always been incredibly marginal: almost all advertising is "wasted," in the sense that it doesn't convince anyone to change their behavior. When a company like Google or Amazon starts to make billions by fielding a "vastly improved" advertising product, it means a couple of things have happened: first, that they've figured out how to target a relatively obvious signal of commercial intention that no one else is targeting yet (think of when consumer packaged-goods companies started sending sales reps to maternity wards with baskets full of newborn supplies, on the theory that harried new parents would just buy more of the same when those ran out); and second, that the combination of competition to target the same signal and regression to the mean among customers hasn't yet eroded its efficacy back to the margins.
Those "vastly improved" ads are often lifting the efficacy rate from less than one percent to still less than one percent: it's easy to double or triple the efficacy of something that works 0.01% of the time, but you're still talking about something that barely works at all.
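A back-of-envelope sketch makes the point concrete. The 0.01% baseline and the one-million-impression campaign here are illustrative numbers, not measured industry figures:

```python
# Illustration: "tripling efficacy" at a tiny base rate.
# The 0.01% baseline and impression count are hypothetical.

impressions = 1_000_000            # ads shown
baseline_rate = 0.0001             # 0.01% of viewers convert
improved_rate = baseline_rate * 3  # a "vastly improved" 3x lift

baseline_conversions = int(impressions * baseline_rate)
improved_conversions = int(impressions * improved_rate)

print(f"baseline: {baseline_conversions} conversions")   # 100
print(f"improved: {improved_conversions} conversions")   # 300
print(f"ignored by {impressions - improved_conversions} of {impressions} viewers")
```

A 3x lift sounds dramatic in a sales deck, but in absolute terms it moves 200 extra people per million, and 999,700 viewers still do nothing.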
If behavioral modification — as opposed to behavioral targeting — were as easy as its vendors claimed, the entire dieting industry would have gone out of business decades ago.
After all, these companies lie about everything, which is why we don't trust them: why would we assume that they are telling the truth in their sales literature?
Users of the self-service system can choose from hundreds of automated audience segments. Some of Amazon’s targeting capabilities are dependent on shopping behaviors, such as “International Market Grocery Shopper” and people who have bought “Acne Treatments” in the past month, or household demographics, such as “Presence of children aged 4-6.” Others are based on the media people consume on Amazon, such as “Denzel Washington Fans” or people who have recently streamed fitness and exercise videos on Amazon. The company declined to comment.
Just the Cheese, a brand run by Specialty Cheese Company in Reeseville, Wis., makes crunchy dried cheese bars that have taken off as a low-carb snack. By using algorithms to analyze how Just the Cheese’s search ads performed on Amazon’s site, the ad agency Quartile Digital noticed that people who searched for keto snacks and cauliflower pizza crust, both low-carb diet trends, also bought a lot of cheese bars. So Quartile ran display ads across the web targeting Amazon customers who had bought those two specific product categories. Over three months, Amazon showed the ads on websites more than six million times, which resulted in almost 22,000 clicks and more than 4,000 orders.
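Running the arithmetic on the figures the Times reports for that campaign shows how thin even a success story's margins are (the impression, click, and order counts below are the ones quoted above):

```python
# Rates implied by the reported Just the Cheese campaign figures:
# ~6,000,000 impressions, ~22,000 clicks, ~4,000 orders over three months.

impressions = 6_000_000
clicks = 22_000
orders = 4_000

ctr = clicks / impressions          # click-through rate
order_rate = orders / impressions   # orders per impression
orders_per_click = orders / clicks  # conversion among people who clicked

print(f"click-through rate: {ctr:.3%}")       # 0.367%
print(f"order rate:         {order_rate:.3%}")  # 0.067%
print(f"orders per click:   {orders_per_click:.1%}")  # 18.2%
```

Even this showcase campaign turns well under one impression in a thousand into an order, consistent with the "still less than one percent" framing above.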
Amazon Knows What You Buy. And It’s Building a Big Ad Business From It. [Karen Weise/New York Times]
(Image: Cryteria, CC-BY)