Tim Harford points out that Dieselgate — when VW designed cars that detected when they were undergoing emissions tests and dialed back their pollution — wasn’t the first time an industry designed its products to cheat when regulators were looking; the big banks did the same thing to beat the “stress tests” that financial regulators used to check whether they would collapse during economic downturns (the banks “made very specific, narrow bets designed to pay off gloriously in specific stress-test scenarios” so that they looked like they’d fare better than they actually would).
Harford concludes that the problem is that the regulators divulged too much information about their testing process to the companies, so the companies could design ways to fool the system; in some cases, he argues, regulators should be “vaguer” about the test criteria to fight cheating (in other cases, such as algorithms that evaluate teachers, he says regulators should be more explicit).
I’m skeptical of the “vaguer” solution, because it violates a primary tenet of security: there is no security in obscurity. Companies like VW get lots of chances to have their cars tested, and could probably figure out the underlying test mechanisms eventually — meanwhile, a regulator that didn’t have to disclose its testing methodology might make stupid mistakes that no one catches (the best remedy for groupthink and its blind spots is outside scrutiny); its rogue employees might divulge the methodology in exchange for bribes (or because they expect a cushy industry job after their tenure in public service); or the regulators might simply cheat themselves. Without knowing what tests are being done, it’s harder for outsiders to know whether the tests are being done fairly.
Which leaves us with the question: how do we stop cheating? One problem with VW is that it was clearly too big to fail. Though the company cheated on tests in a way that resulted in many untimely deaths, only a few individuals are likely to face criminal penalties, and the fines involved, while large, are calculated to leave the company standing so that innocent VW owners and employees aren’t victimized by its executives’ malfeasance. A smaller company would probably either be put to death through crippling fines, or broken up and sold off to its competitors.
It may be that giant firms are unregulatable in some important sense — once they reach critical mass, any real penalty carries more collateral damage than a government is willing to risk. This was Matt Taibbi’s thesis in The Divide, which is why he argued that the penalty for being caught cheating should be breaking the company up into pieces so small that they could neither offer effective bribes to regulators nor scare courts with the fallout if they were meaningfully punished.
Just like humans, algorithms aren’t perfect. Amazon’s “you might want to buy bottle cleanser” is not a serious error. “You’re fired” might be, which means we need some kind of oversight or appeal process if imperfect algorithms are to make consequential decisions.

Even if an algorithm flawlessly linked a teacher’s actions to the students’ test scores, we should still use it with caution. We rely on teachers to do many things for the students in their class, not just boost their test scores. Rewarding teachers too tightly for test scores encourages them to neglect everything we value but cannot measure.
The economists Oliver Hart and Bengt Holmström have been exploring this sort of territory for decades, and were awarded the 2016 Nobel Memorial Prize in Economics for their pains. But, all too often, politicians, regulators and managers ignore well-established lessons.
In fairness, there often are no simple answers. In the case of VW, transparency was the enemy: regulators should have been vaguer about the emissions test to prevent cheating. But in the case of teachers, more transparency rather than less would help to uncover problems in the teacher evaluation algorithm.
How to catch a cheat
[Tim Harford]