A research team comprising experts from North Carolina State University (NCSU) and the Ruhr-University Bochum in Germany recently undertook a study of Amazon Alexa skills. What they uncovered was shocking: misleading privacy policies, developers able to claim they were, well, anyone, and multiple skills sharing the same Alexa trigger words, to name just some of the issues.
In response to the research, Amazon has insisted that “any offending skills we identify are blocked during certification or quickly deactivated.”
That the researchers were able to find hundreds of skills requesting access to “privacy-sensitive data” yet with either no privacy policies or misleading ones suggests much more may need to be done in that Amazon rogue skills identification process.
The researchers used an automated process to discover 90,194 unique skills across seven skill stores, from a total number that exceeds 100,000. These skills, programs that enhance the experience of using the Amazon voice-activated assistant, covered everything from listening to music to ordering your groceries. Using another automated process, the research team were able to analyze each skill in some detail.
Anupam Das, one of the researchers and also an assistant professor of computer science at NCSU, said that “when people use Alexa to play games or seek information, they often think they’re interacting only with Amazon, but a lot of the applications they are interacting with were created by third parties, and we’ve identified several flaws in the current vetting process that could allow those third parties to gain access to users’ personal or private information.”
That process appears to be lacking, according to the researchers, who found that nearly 10,000 skills were sharing their trigger key phrase with at least one other. “This is problematic because, if you think you are activating one skill but are actually activating another,” Das said, “this creates the risk that you will share information with a developer that you did not intend to share information with.”
And talking of developers, the researchers also claim a developer can say they are anyone they like without proper verification. Indeed, the researchers were able to publish skills using names such as Microsoft and Samsung.
To add to the potential security and privacy issues uncovered by the research team, it was demonstrated that skill developers could change back end skill code after publication. They demonstrated this by publishing a trip-planner skill and then went ahead and modified the code “to request additional information” after it had gained Amazon approval. “We were not engaged in malicious behavior,” Das said, “but our demonstration shows that there aren’t enough controls in place to prevent this vulnerability from being abused.”
An Amazon statement insists that the security of devices and services is a top priority:
“We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior. Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers. We appreciate the work of independent researchers who help bring potential issues to our attention.”
I’m not suggesting that you should turn off your Echo device or stop using Alexa skills altogether as a result of this research. However, I’d certainly recommend that you take another look at the skills you have installed, especially those from third parties rather than Amazon itself, and maybe reconsider how essential they are. At the very least, you should weigh up the pros of using them against any potential privacy or security risk and take it from there.
“Hacking a virtual assistant in millions of people’s homes is what malicious actors dream of doing,” a cybersecurity specialist at ESET, Jake Moore, said. “Much like a Trojan, Alexa Skills can be published under a fake identity, which could encourage the user to trust it fully, leaving them vulnerable to attack. Cybercriminals could potentially request credit card details or private data such as demographics and habits of the people in the house.” Moore recommends users only enable Alexa functions if they are confident with what they are doing.
The research paper ‘Hey Alexa, is this Skill Safe?: Taking a Closer Look at the Alexa Skill Ecosystem’ was authored by Christopher Lentzsch and Martin Degeling, Ruhr-Universität Bochum; Sheel Jayesh Shah, Anupam Das and William Enck, North Carolina State University; and Benjamin Andow, Google Inc.