Alexa, Siri, Google Smart Speakers Hacked Via Laser Beam

Smart voice assistants can be hijacked by attackers using lasers to send them remote, inaudible commands.

Researchers have discovered a new way to hack Alexa and Siri smart speakers merely by using a laser light beam. No physical access to the victim’s device, or owner interaction, is needed to launch the attack, which allows attackers to send voice assistants inaudible commands such as unlocking doors.

The attack, dubbed “light commands,” leverages the design of the micro-electro-mechanical systems (MEMS) microphones used in smart assistants. MEMS microphones work by converting sound (voice commands) into electrical signals – but researchers found that, in addition to sound, the microphones also react to light aimed directly at them.

Researchers said that they were able to launch inaudible commands by shining lasers – from as far as 110 meters, or 360 feet – at the microphones on various popular voice assistants, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant.

“[B]y modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio,” said researchers with the University of Michigan and the University of Electro-Communications (Tokyo) in a Monday research paper.

MEMS microphones feature a small, built-in plate called the diaphragm, which, when hit with sound or light, sends electrical signals that are translated into commands. Instead of voice commands, researchers found that they could “encode” sound in the intensity of a laser beam, which causes the diaphragm to move and produces electrical signals representing the attacker’s commands.

Image: A light-command attack on Google Home and Alexa devices.

Daniel Genkin, one of the researchers who discovered the attack, told Threatpost that they were able to encode sound in the intensity of the light beam – a loud sound corresponds to a lot of light, while a quiet sound corresponds to very little.
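
As a rough illustration of that mapping (not the researchers’ actual tooling), the sketch below amplitude-modulates a laser’s intensity with an audio waveform; the sample rate, bias point and modulation depth are assumed values chosen only for illustration.

```python
import numpy as np

def audio_to_laser_intensity(audio, bias=0.5, depth=0.4):
    """Map an audio waveform in [-1, 1] to a normalized laser intensity:
    louder samples -> more light, quieter samples -> less light
    (amplitude modulation around a DC bias point -- assumed values)."""
    audio = np.clip(audio, -1.0, 1.0)
    intensity = bias + depth * audio        # modulate brightness with the audio
    return np.clip(intensity, 0.0, 1.0)     # laser power cannot go negative

# Example: a 1 kHz tone sampled at 48 kHz stands in for a voice command
fs = 48_000
t = np.arange(fs) / fs
tone = 0.8 * np.sin(2 * np.pi * 1_000 * t)
intensity = audio_to_laser_intensity(tone)
print(intensity.min(), intensity.max())     # stays within [0, 1]
```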

To develop such commands using the laser beam, researchers measured light intensity (using a photo-diode power sensor) and tested the impact of various light intensities (or diode currents) on microphone outputs.

“We recorded the diode current and the microphone’s output using a Tektronix MSO5204 oscilloscope,” they said. “The experiments were conducted in a regular office environment, with typical ambient noise from human speech, computer equipment, and air conditioning systems.”
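
To show how such measurements might be put to use, here is a hedged sketch that fits a simple calibration curve from hypothetical diode-current/microphone-output pairs and inverts it to find the drive current for a desired output level; the numbers are made up for illustration and are not the paper’s data.

```python
import numpy as np

# Hypothetical calibration points: laser-diode current (mA) vs. measured
# microphone output amplitude (arbitrary units). Real values would come from
# sweeping the laser driver while recording the microphone on an oscilloscope.
current_mA = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
mic_output = np.array([0.00, 0.05, 0.18, 0.40, 0.70, 0.95])

def current_for_output(target_output):
    """Invert the (monotonic) calibration curve: given a desired microphone
    output level, interpolate the diode current expected to produce it."""
    return np.interp(target_output, mic_output, current_mA)

# Map a normalized command waveform onto drive currents so the microphone
# "hears" it at the intended levels
waveform = 0.5 * (1 + np.sin(np.linspace(0, 2 * np.pi, 8)))  # values in [0, 1]
drive_currents = current_for_output(waveform * mic_output.max())
print(np.round(drive_currents, 1))
```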

In a real-life attack, an attacker could stand outside a house and shine a laser onto a voice assistant that is visible through a window. From there, the attacker could command the voice assistant to unlock a door, make online purchases or remotely start a vehicle, among other malicious actions.

The researchers also published a video demonstrating commands being sent to a Google Home.

Worse, the attack can be mounted “easily and cheaply,” researchers said. They used a simple laser pointer (available for as little as $14 on Amazon), along with a laser driver (designed to drive a laser diode by supplying current) and a sound amplifier to launch the attack. Specifically, the researchers used a blue Osram PLT5 450B 450-nm laser diode connected to a Thorlabs LDC205C current driver.

Attackers may also need gear for focusing the laser, including a geared tripod head, a commercially available telephoto lens or a telescope, in order to see microphone ports from long distances.

Researchers tested the attack with a variety of devices that use voice assistants, including the Google NEST Cam IQ, Echo, iPhone XR, Samsung Galaxy S9 and Google Pixel 2. While the paper focused on Alexa, Siri, Portal and Google Assistant, any system that uses MEMS microphones and acts on data without additional user confirmation might be vulnerable, they said.

Researchers said they have not seen any indication that this attack has been maliciously exploited so far. They are currently collaborating with voice-assistant vendors to mitigate the attack.

The good news is that researchers have identified steps that could help protect against the attack, such as an additional layer of authentication; sensor-fusion techniques, such as requiring devices to acquire audio from multiple microphones; or a cover on top of the microphone that attenuates the amount of light reaching it.
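
One way to picture the sensor-fusion idea is a cross-check across microphones: genuine speech reaches every microphone in an array with comparable energy, while a laser aimed at a single port drives mostly one channel. The sketch below is an illustrative heuristic, not any vendor’s implementation, and the energy-ratio threshold is an assumption.

```python
import numpy as np

def looks_like_light_injection(channels, ratio_threshold=10.0):
    """Flag a command if one microphone channel carries far more energy than
    the rest -- consistent with a laser hitting a single port rather than
    sound reaching the whole array.

    channels: 2-D array of shape (num_mics, num_samples).
    ratio_threshold: assumed, illustrative energy ratio."""
    energies = np.mean(np.square(channels), axis=1)
    second_loudest = np.partition(energies, -2)[-2]
    return energies.max() > ratio_threshold * max(second_loudest, 1e-12)

# Toy example: four microphones, only one "hears" the injected command
rng = np.random.default_rng(0)
channels = 0.01 * rng.standard_normal((4, 48_000))
channels[2] += np.sin(2 * np.pi * 1_000 * np.arange(48_000) / 48_000)
print(looks_like_light_injection(channels))  # True -> treat command as suspicious
```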

“An additional layer of authentication can be effective at somewhat mitigating the attack,” they said. “Alternatively, in case the attacker cannot eavesdrop on the device’s response, having the device ask the user a simple randomized question before command execution can be an effective way at preventing the attacker from obtaining successful command execution.”

Threatpost has reached out to Apple and Facebook for further comment.

“Customer trust is our top priority and we take customer security and the security of our products seriously,” an Amazon spokesperson told Threatpost. “We are reviewing this research and continue to engage with the authors to understand more about their work.”

“We are closely reviewing this research paper,” a Google spokesperson told Threatpost. “Protecting our users is paramount, and we’re always looking at ways to improve the security of our devices.”
