[google] Two years later, Google solves 'racist algorithm' problem by purging 'gorilla' label from image classifier (Cory Doctorow)
[scholarship] Adversarial patches: colorful circles that convince machine-learning vision system to ignore everything else (Cory Doctorow)
[privacy] "Edge AI": encapsulating machine learning classifiers in lightweight, energy-efficient, airgapped chips (Cory Doctorow)
[happy mutants] Tiny alterations in training data can introduce "backdoors" into machine learning models (Cory Doctorow)
[security] Researchers think that adversarial examples could help us maintain privacy from machine learning systems (Cory Doctorow)
[competition] "I Shouldn't Have to Publish This in The New York Times": my op-ed from the future (Cory Doctorow)
[scholarship] Machine learning classifiers are up to 20% less accurate when labeling photos from homes in poor countries (Cory Doctorow)
[security] Towards a method for fixing machine learning's persistent and catastrophic blind spots (Cory Doctorow)
[security] Towards a general theory of "adversarial examples," the bizarre, hallucinatory motes in machine learning's all-seeing eye (Cory Doctorow)
[security] Machine learning models keep getting spoofed by adversarial attacks and it's not clear if this can ever be fixed (Cory Doctorow)
[AI] Adversarial examples: attack can imperceptibly alter any sound (or silence), embedding speech that only voice-assistants will hear (Cory Doctorow)
[security] Researchers trick Google's AI into thinking rifles are helicopters, without any knowledge of the algorithm's design (Cory Doctorow)
[scholarship] Researchers can fool machine-learning vision systems with a single, well-placed pixel (Cory Doctorow)
[scholarship] The first-ever close analysis of leaked astroturf comments from China's "50c party" reveals Beijing's cybercontrol strategy (Cory Doctorow)