“Smart Spies”- Amazon Alexa and Google Home’s Voice Assistant Were Vulnerable to a Security Flaw
Alexa and Google Home smart speakers have been vulnerable to a security threat that made eavesdropping, voice phishing and using people’s voice cues to deduce passwords possible for hackers. The hack also allowed hackers to befool users in handing out their private data without any knowledge of the same being happening.
In October, security researchers who discovered “Smart Spies” hack and new ways in which Alexa and Google Home smart speakers can be exploited, are now warning about the need to formulate new and effective methods to guard against the eavesdropping hack, reports Threatpost. Notably, no major steps were been taken to ensure protection against these hacks.
SRLabs, a Berlin-based hacking research company, told about the discovery of the vulnerability being made by them earlier this year, they went on reporting it to the concerned organizations, Amazon and Google. Furthermore, in an attempt to demonstrate the exploitation of the flaw, the firm shared a series of videos on Sunday.
As per the reports by CNN Business, Amazon and Google told that the vulnerabilities have been taken care of and likewise the issues have been fixed.
The company “quickly blocked the skill in question and put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified,” a spokesperson from Amazon told CNN Business.
Addressing the issue, SRLabs states in a blog post, “Alexa and Google Home are powerful, and often useful, listening devices in private environments. The privacy implications of an internet-connected microphone listening in to what you say are further reaching than previously understood.”
Experts recommended users to be more mindful of the potentially malignant voice apps that can infect smart speakers, “Using a new voice app should be approached with a similar level of caution as installing a new app on your smartphone.”
“To prevent ‘Smart Spies’ attacks, Amazon and Google need to implement better protection, starting with a more thorough review process of third-party Skills and Actions made available in their voice app stores. The voice app review needs to check explicitly for copies of built-in intents. Unpronounceable characters like “�. “ and silent SSML messages should be removed to prevent arbitrary long pauses in the speakers’ output. Suspicious output texts including “password“ deserve particular attention or should be disallowed completely.” The blog reads.