In DolphinAttack: Inaudible Voice Commands, researchers from Zhejiang University demonstrate an attack on popular voice assistants in laptops and mobile devices from Apple, Google, Amazon, Microsoft, Samsung, and Huawei: by issuing commands as speech shifted into ultrasonic frequencies, beyond the range of human hearing, they are able to hijack devices in public places without their owners’ knowledge.
The attack owes its efficacy to the devices’ use of ultrasound for signaling to establish contact with one another, and as a means of resolving ambiguity and nuance in speech recognition. The designers of these systems have thus created software that can recognize ultrasonic voice commands, but that lacks the smarts to be alarmed by human speech occurring in registers beyond the capacity of the human vocal apparatus.
The attack involved about $3 worth of audio hardware.
The attackers successfully issued commands to dial arbitrary phone numbers; open connections to poisoned websites; open physical, internet-connected home locks; redirect automotive navigation systems; and so on. They were even able to attack devices that were “locked” and theoretically unresponsive, thanks to defaults that cause these systems to respond to voice commands while locked.
In this paper, we propose DolphinAttack, an inaudible attack on SR systems. DolphinAttack leverages the AM (amplitude modulation) technique to modulate audible voice commands on ultrasonic carriers, by which the command signals cannot be perceived by humans. With DolphinAttack, an adversary can attack major SR systems, including Siri, Google Now, Alexa, etc. To avoid the abuse of DolphinAttack in reality, we propose two defense solutions from the aspects of both hardware and software.
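For readers curious about the signal trick the conclusion describes, here is a minimal, hypothetical Python sketch (not the authors' code) of amplitude-modulating a voice waveform onto an ultrasonic carrier; the sample rate, carrier frequency, modulation depth, and the sine-wave stand-in for a recorded command are all assumptions for illustration.

    # Illustrative sketch only: AM-modulating a "voice command" onto an
    # ultrasonic carrier, per the technique named in the paper's conclusion.
    import numpy as np

    fs = 192_000          # assumed sample rate, high enough to represent ultrasound
    carrier_hz = 25_000   # assumed ultrasonic carrier, above human hearing
    depth = 1.0           # assumed modulation depth

    t = np.arange(0, 1.0, 1 / fs)               # one second of samples
    voice = 0.5 * np.sin(2 * np.pi * 400 * t)   # stand-in for a recorded voice command
    carrier = np.sin(2 * np.pi * carrier_hz * t)

    # Standard AM: the inaudible carrier's amplitude follows the voice signal.
    modulated = (1 + depth * voice) * carrier

Played back through hardware capable of emitting ultrasound, such a signal carries the command while remaining inaudible to bystanders.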
DolphinAttack: Inaudible Voice Commands [Guoming Zhang, Chen Yan, Xiaoyu Ji, Tianchen Zhang, Taimin Zhang and Wenyuan Xu/ACM Conference on Computer and Communications Security]
A Simple Design Flaw Makes It Astoundingly Easy To Hack Siri And Alexa [Mark Wilson/Fast Company]
(via /.)