Google’s Nest Secure had a built-in microphone no one knew about


After the hacking fiasco a few weeks ago, Nest users have been more on edge about their security devices than ever before. The recent discovery of a built-in, hidden microphone on the Nest Guard, part of the Nest Secure security system, has only exacerbated those concerns.

Alphabet Inc's Google said on February 20 that it had made an "error" in not disclosing that devices in its Nest Secure home security system had a built-in microphone.

Consumers might never have known the microphone existed had Google not announced support for Google Assistant on the Nest Secure. This sounds like a great addition, except for one little problem: users didn’t know their Nest Secure had a microphone. None of the product documentation disclosed the existence of the microphone, nor did any of the packaging.

Earlier this month, Google said Nest Secure would be getting an update that lets users enable its virtual assistant technology, Google Assistant, on the Nest Guard.

The microphone built into the Nest Guard alarm/motion sensor/keypad was never supposed to be a secret, Google said after announcing Google Assistant support for the Nest Secure system, but the revelation that Google Assistant could be used with its home security and alarm system still came as a surprise.

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs. That was an error on our part. The microphone has never been on and is only activated when users specifically enable the option,” Google said.

Google’s updated product page now mentions the existence of the microphone.

If your first thought on hearing this news is that Google was spying on you or doing something equally sinister, you aren’t alone. Ray Walsh, a digital privacy expert at BestVPN.com, said: “Nest’s failure to disclose the on-board microphone included in its secure home security system is a massive oversight. Nest’s parent company Google claims that the feature was only made available to consumers who activated the feature manually. Presumably, nobody did this, because the feature wasn’t advertised.”

Artificial Intelligence Is What’s Protecting Your Microsoft, Google And Similar Accounts





Artificially intelligent systems are very much in fashion these days, and the new generation of defenses puts its faith in security systems that evolve along with hackers’ trickery.

Microsoft, Google, Amazon, and numerous other organizations have placed their faith in artificially intelligent security systems.

Rule-based technology designed to avert only certain, predefined kinds of attacks has come to look decidedly old school.

What is needed instead is a system that learns from previous hacking attempts and other cyber attacks and acts accordingly.

According to researchers, the dynamic nature of machine learning, and of AI in particular, makes it far more flexible and efficient at handling security issues.

The automatic, constant retraining process certainly gives AI an edge over other approaches.

But in practice, hackers are quite adaptable too, and they often exploit the mechanical tendencies of the AI itself.

Their basic approach is to corrupt the algorithms and break into the company’s data, which usually lives in the cloud.

Amazon’s Chief Information Security Officer said the technology seriously aids in identifying threats at an early stage, reducing their severity and allowing systems to be restored quickly.

He also noted that while completely preventing intrusions is impossible, the company is working hard to make hacking a difficult job.

Older systems used to block entry whenever they found anything suspicious, such as someone logging in from an unfamiliar location.

But because such rules are so blunt, real, legitimate users end up bearing the inconvenience.
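To see why that bluntness causes trouble, here is a minimal sketch of a static location rule; the users, allow-list, and logic below are invented for illustration and are not taken from any vendor’s actual system:

```python
# Hypothetical illustration of a blunt, rule-based login check.
# A static allow-list of countries per user has no notion of context,
# so any legitimate login from a new location looks like an attack.

KNOWN_LOCATIONS = {
    "alice": {"US", "CA"},
    "bob": {"IN"},
}

def rule_based_check(user: str, country: str) -> bool:
    """Return True if the login should be blocked."""
    allowed = KNOWN_LOCATIONS.get(user, set())
    return country not in allowed

# A real user travelling abroad is blocked exactly like an attacker would be.
print(rule_based_check("alice", "FR"))  # True -> a false positive
```

A learning system, by contrast, can weigh the unusual location against everything else it knows about the user before deciding.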

Microsoft’s system produced false positives roughly 3% of the time when flagging fake logins, which adds up to a great deal because the company handles billions of logins.

Microsoft therefore tunes and evaluates the technology largely using data from the other companies that use it as well.
The results are astonishing: the false-positive rate has come down to 0.001%.
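To put those rates in perspective, here is a back-of-the-envelope calculation; the article only says the company sees billions of logins, so the round figure of one billion below is an assumption:

```python
# Rough scale of the quoted false-positive rates, assuming a
# hypothetical round volume of one billion login attempts.
logins = 1_000_000_000

old_rate = 0.03      # ~3% false positives with the older approach
new_rate = 0.00001   # 0.001% after tuning on customers' data

print(f"Old rate: {logins * old_rate:,.0f} false alarms")  # 30,000,000
print(f"New rate: {logins * new_rate:,.0f} false alarms")  # 10,000
```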

Ram Shankar Siva Kumar, who goes by the title “Data Cowboy” at Microsoft, is the person behind training all these algorithms. He leads an 18-engineer team and works on improving the speed of the system.

The systems also work efficiently alongside those of other companies that use Microsoft’s cloud services.

The main reason AI is increasingly needed is that the number of logins grows by the day, and it is practically impossible for humans to hand-write rules for such vast amounts of data.

A lot of work goes into keeping customers and users safe at all times. Google, for its part, keeps checking for intruders even after login.

Google keeps an eye on several different aspects of a user’s behavior throughout the session, on the assumption that an illegitimate user is bound to act suspiciously at some point.
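As a rough illustration of what post-login behavioural scoring can look like, the sketch below uses invented features, weights, and a made-up threshold; it is not Google’s actual model, only a toy version of the idea:

```python
# Toy post-login risk scorer; features, weights and threshold are invented.
from dataclasses import dataclass

@dataclass
class SessionEvent:
    typing_speed_delta: float   # deviation from the user's usual typing speed
    new_device: bool            # session comes from a never-before-seen device
    bulk_download: bool         # unusually large data export in one go

def risk_score(event: SessionEvent) -> float:
    """Combine behavioural signals into a single session risk score."""
    score = min(abs(event.typing_speed_delta), 1.0) * 0.3
    score += 0.4 if event.new_device else 0.0
    score += 0.5 if event.bulk_download else 0.0
    return score

event = SessionEvent(typing_speed_delta=0.8, new_device=True, bulk_download=True)
if risk_score(event) > 0.7:
    print("Step-up verification required")  # challenge the session, don't just block it
```

The point of the toy example is that a risky session is challenged rather than blocked outright, which is how a behavioural system avoids the blunt false positives of the older rules.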

Microsoft and Amazon not only use these services themselves but also provide them to their customers.

Amazon offers GuardDuty, which watches customer accounts for malicious activity, and Macie, which hunts for sensitive customer data in cloud storage; Netflix is among the companies that use them. Customers also sometimes use these services to monitor what their own employees are doing.
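For readers who run on AWS, GuardDuty findings can also be pulled programmatically. The sketch below uses boto3 and assumes that credentials are already configured and a GuardDuty detector is enabled in the account; it is only meant to show what consuming such findings looks like:

```python
# Minimal sketch: list GuardDuty findings with boto3.
# Assumes AWS credentials/region are configured and a detector is enabled.
import boto3

guardduty = boto3.client("guardduty")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
    if not finding_ids:
        continue
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids)["Findings"]
    for finding in findings:
        # Each finding carries a type (e.g. unusual API activity) and a severity.
        print(finding["Type"], finding["Severity"])
```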

Machine-learning security cannot always be counted on, especially when there isn’t enough data to train the models. Besides, there is always the worrying possibility of the models themselves being exploited.

Mimicking users’ activity to degrade the algorithms is one way to fool such a technique; tampering with the training data for ulterior purposes is another.

With such technologies in use, it becomes imperative for organizations to keep their algorithms and formulas a closely guarded secret.

The silver lining, though, is that such threats exist more on paper than in reality. But with technological innovation as active as it is, this scenario could change at any time.




Bengaluru Techie Blackmailed To Transfer $2,200 through Bitcoin




In Bengaluru, a hacker claimed he had used a techie’s webcam to record his private moments and threatened him into transferring $2,200 through bitcoin. Rajesh (name changed), a techie at a multinational software company on Bannerghatta Road, Bengaluru, received an email instructing him to transfer the said amount.

The mail was sent by the hacker under the handle 'Coy Lynch' from the email id 'hkrawllmerjoc@outlook.com'.

The hacker asserted he had placed malware on specific sites that Rajesh used to visit and had hacked into his camera to record him. The blackmailer even threatened to ruin him by sending the recorded footage to everyone on Rajesh’s contact list, including relatives, if he refused to do what was asked of him.

Rajesh told STOI, “I checked whether my contact details or personal information was breached. I found that my personal account information was breached from nine sources, including three verified sources. I’m sure he doesn’t have any private video of mine but it’s possible to do it. This has been reported elsewhere. It’s happening here now — blackmail and extortion has started by compromising password.”

Rajesh posted a copy of the mail on the Facebook page of the Bengaluru police, who are yet to make progress on the case; even so, they haven’t discounted the mind games played by online fraudsters who extort cash by claiming to have taken control of victims’ laptops or cell phones using malware.