Can AI become a new tool for hackers?

Over the last three years, the use of AI in cybersecurity has been an increasingly hot topic. Every new company that enters the market touts its AI as the best and most effective. Existing vendors, especially those in the enterprise space, are deploying AI to reinforce their existing security solutions. The use of artificial intelligence (AI) in cybersecurity is enabling IT professionals to predict and react to emerging cyber threats more quickly and effectively than ever before. So how can they expect to respond when AI falls into the wrong hands?

Imagine a constantly evolving and evasive cyberthreat that could target individuals and organisations remorselessly. This is the reality of cybersecurity in an era of artificial intelligence (AI).

There has been no reduction in the number of breaches and incidents despite the focus on AI. Rajashri Gupta, Head of AI at Avast, sat down with Enterprise Times to talk about AI and cybersecurity. He explained that part of the challenge is not just having enough data to train an AI but the need for diverse data.

This is where many new entrants into the market are challenged. They can train an AI on small sets of data, but is that enough? How do they teach the AI to tell the difference between a real attack and a false positive? Gupta talked about this and how Avast is dealing with the problem.

During the podcast, Gupta also touched on the challenge of ethics for AI and how we deal with privacy. He also talked about IoT and what AI can deliver to help spot attacks against those devices. This is especially important for Avast, which is set to launch a new range of devices for the home security market this year.

AI has shaken up cybersecurity, with automated threat prevention, detection and response revolutionising one of the fastest-growing sectors in the digital economy.

Hackers are using AI to speed up polymorphic malware, which constantly changes its own code so that it can't be identified.
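To see why mutation defeats classic signature scanning, here is a minimal, hypothetical Python sketch (the payload and names are invented for illustration): a single appended padding byte changes the file's hash, so a hash-based signature no longer matches even though the behaviour is unchanged.

```python
import hashlib

def signature(payload: bytes) -> str:
    # Classic signature: a hash of the exact byte sequence.
    return hashlib.sha256(payload).hexdigest()

# A toy "payload" and a mutated variant (both hypothetical) that would
# behave identically but differ by one appended padding byte.
original = b"malicious_routine();"
mutated = original + b"\x90"  # NOP-style padding

known_signatures = {signature(original)}

detects_original = signature(original) in known_signatures
detects_mutated = signature(mutated) in known_signatures
print(detects_original, detects_mutated)  # True False
```

One trivial mutation is enough to evade the signature, which is why defenders increasingly look beyond fixed signatures to behavioural and AI-based detection.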

Artificial Intelligence To Help Scientists Understand Earth Better

According to a recent study, computer science is set to collaborate with geography: with the help of artificial intelligence, the planet's complex processes could now be better understood.

Researchers at Friedrich Schiller University carried out the study, which makes clear that AI has a great deal to contribute to the life sciences.

Climatic conditions and the Earth's systems should now become substantially easier to comprehend.

That better understanding would in turn help improve the existing models of the Earth's surface.

Before AI got involved, investigations of the Earth dealt mostly with static elements, such as soil properties at a global scale.

Thanks to artificial intelligence, more sophisticated techniques can now be employed to handle dynamic processes.

Variations in global land processes such as photosynthesis could now be monitored, and their implications weighed in advance.


Earth-system data from a myriad of sensors is now available, so tracking and comprehending the planet's processes with the aid of AI becomes a far more manageable job.

This collaboration is a very promising development, because processes that are beyond human understanding could now be estimated.

The newly available techniques encompass image recognition, natural language processing and classical machine-learning applications.

Hurricanes, fire spread and other complex processes driven by local conditions are among the example applications.

Soil movement, vegetation dynamics, ocean transport and other fundamental themes of Earth science and Earth systems also fall within their scope.

Purely data-driven statistical techniques, however good the data quality, are not always verifiable and are hence susceptible to exploitation.

Machine learning therefore needs to become an essential part of the toolkit, which would also help address the issues of storage capacity and data processing.

Bringing physical and machine-learning techniques together would make a huge difference. It would then be possible to model the motion of ocean water and to predict the temperature of the sea surface.
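As a toy illustration of this hybrid idea (not the researchers' actual method, and with an invented seasonal formula standing in for real ocean physics), the sketch below lets a simple physical model supply the seasonal cycle of sea-surface temperature while a machine-learning model learns only the residual the physics misses:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical task: predict sea-surface temperature (SST) from day of year.
day = rng.uniform(0, 365, size=200)

def physical_model(d):
    # A crude first-principles seasonal cycle (assumed form).
    return 15 + 8 * np.sin(2 * np.pi * d / 365)

# "Observed" SST: the physics plus a slow drift the physics omits, plus noise.
observed = physical_model(day) + 0.01 * day + rng.normal(0, 0.2, day.size)

# The machine-learning half learns only the residual the physics misses.
residual = observed - physical_model(day)
ml = LinearRegression().fit(day.reshape(-1, 1), residual)

def hybrid_model(d):
    d = np.asarray(d)
    return physical_model(d) + ml.predict(d.reshape(-1, 1))

phys_err = np.mean((observed - physical_model(day)) ** 2)
hyb_err = np.mean((observed - hybrid_model(day)) ** 2)
print(hyb_err < phys_err)  # True: the hybrid beats physics alone
```

The design choice is the point: the physics stays interpretable and extrapolates sensibly, while the learned correction soaks up whatever the equations leave out.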

According to one of the researchers behind the study, the main motivation is to bring together the "best of both worlds".

In light of this study, issuing warnings about natural calamities and other extreme climate and weather events should become far easier than ever before.

Artificial Intelligence Is What’s Protecting Your Microsoft, Google And Similar Accounts

Artificially intelligent systems are very much on the rise these days, and the new generation places a lot of faith in security systems that evolve along with hackers' trickery.

Microsoft, Google, Amazon, and numerous other organizations put their trust in artificially intelligent security systems.

Rule-based technology, designed to avert only certain particular kinds of attack, now looks decidedly old school.

There is a pressing need for systems that learn from previous hacking behaviour and cyber attacks and act accordingly.

According to researchers, the dynamic nature of machine learning makes AI flexible and all the more efficient at handling security issues.

The automatic and constant retraining process certainly gives AI an edge over other approaches.

But hackers are quite adaptable too, and they often exploit the mechanical tendencies of the AI.

Their basic approach is to corrupt the algorithms and invade the company's data, which usually lives in the cloud.

Amazon's Chief Information Security Officer said the technology seriously aids in identifying threats at an early stage, reducing their severity and allowing systems to be restored quickly.

He also noted that while averting every intrusion is impossible, the company is working hard to make hacking a difficult job.

Older systems used to block entry whenever they found anything suspicious happening, such as someone logging in from an unexpected location.

But because of that very bluntness, real users bore the inconvenience.

Approximately 3% of the time, Microsoft got false positives when flagging fake logins, which amounts to a great deal because the company handles billions of logins.

Microsoft therefore mostly tunes and evaluates the technology using data from the other companies that use it too.
The results are astonishing: the false-positive rate has come down to 0.001%.

Ram Shankar Siva Kumar, Microsoft's "Data Cowboy", is the man behind training all these algorithms. He leads an 18-engineer team and works on improving the system's speed.

The systems also work efficiently with those of other companies that use Microsoft's cloud services.

The major reason AI is increasingly needed is that the number of logins grows by the day, and it is practically impossible for humans to write rules covering such vast data.

There is a lot of work involved in keeping customers and users safe at all times. Google, for its part, keeps checking for intruders even after login.

Google watches several different aspects of a user's behaviour throughout the session, on the premise that an illegitimate user is bound to act suspiciously at some point.
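A hedged sketch of how such behavioural login screening might work, assuming hypothetical features (hour of day, distance from the user's usual location) and scikit-learn's IsolationForest rather than any vendor's actual system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical login features: [hour of day, km from usual location].
# A legitimate user mostly logs in near home during working hours.
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),    # around 10:00
    rng.exponential(5, 500),   # usually within a few km
])

model = IsolationForest(random_state=0).fit(normal_logins)

# A 03:00 login from 8000 km away should stand out as anomalous,
# while a typical login passes without friction.
typical = np.array([[10.0, 2.0]])
suspicious = np.array([[3.0, 8000.0]])

print(model.predict(typical))     # inlier  -> allow
print(model.predict(suspicious))  # outlier -> challenge
```

Because the model is fitted to observed behaviour rather than hand-written rules, it retrains as behaviour shifts, which is the property the article's sources credit for the drop in false positives.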

Microsoft and Amazon, in addition to using these services, also provide them to customers.

Amazon offers GuardDuty and Macie, which customers such as Netflix use to look for sensitive data. These services can also monitor employees' activity.

Machine-learning security cannot always be counted on, especially when there isn't enough data to train the models, and there is always a worrying possibility of their being exploited.

Mimicking users' activity to degrade the algorithms is one thing that could easily fool such a technique. Tampering with the training data for ulterior purposes could be next in line.

With such technologies in use, it becomes imperative for organizations to keep their algorithms and formulae a closely guarded secret.

The silver lining, though, is that such threats exist more on paper than in reality. But with technological innovation increasingly active, that could change at any time.

New AI Program Can Effectively Remove Noise from Photographs


On July 9, researchers from NVIDIA, Aalto University, and MIT unveiled yet another AI program that can effectively remove noise and artifacts from pictures. The algorithm is the first of its kind in that it doesn't even require a "clean" reference picture to get going.

To build their noise-filtering AI, the researchers began by adding noise to 50,000 sets of clean pictures, then fed these grainy pictures to the system, training it to remove the noise and reveal a cleaned-up version of each photograph, one that looked nearly indistinguishable from the picture before the noise was added.
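The statistical intuition that makes training without clean references possible is that zero-mean noise averages out. A toy numpy sketch (an invented 8x8 "image", not the researchers' actual network) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny hypothetical "clean image" (an 8x8 gradient) standing in for
# the training pictures.
clean = np.linspace(0, 1, 64).reshape(8, 8)

def noisy_copy():
    # Add zero-mean Gaussian noise, like the grainy training inputs.
    return clean + rng.normal(0, 0.3, clean.shape)

# Because the noise has zero mean, the average of many noisy versions
# converges to the clean image -- the statistical fact that lets a
# network learn denoising from noisy targets alone.
estimate = np.mean([noisy_copy() for _ in range(2000)], axis=0)

err_single = np.abs(noisy_copy() - clean).mean()
err_estimate = np.abs(estimate - clean).mean()
print(err_estimate < err_single)  # True
```

A trained network exploits the same property across many images at once, which is why it never needs to see a noise-free reference.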

After training the AI, the researchers tested it on three sets of pictures to which they had added noise. They found the system could denoise the photographs in milliseconds, producing a version only marginally softer-looking than the original before the noise was added.

The researchers say their algorithm could be used to denoise old grainy photographs, remove text-based watermarks, clean up medical X-ray scans taken with undersampled inputs, enhance astronomical photography, and denoise artificially generated pictures.

This, however, isn't the first AI that can enhance low-quality photos, and it is certainly not the first time NVIDIA's research team has been behind such impressive work.

In April, NVIDIA and other researchers created an AI-based algorithm that can reconstruct pictures from which substantial chunks of content have been removed.
And in May, NVIDIA and fellow researchers built an AI algorithm that can teach robots to perform various tasks just by watching a few repetitions by human workers.

It's not yet clear when, or if, this software will become available to the general public. When that day comes, though, the researchers believe it could be helpful for applications ranging from astronomy to medical imaging.


Google bans AI used for weapons and war


Google CEO Sundar Pichai on Thursday announced that Google is banning the development of Artificial Intelligence (AI) software that could be used in weapons or harm others.

The company has set strict standards for ethical and safe development of AI.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a blog post. “As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions."

The objectives Google has framed for this include that AI should be socially beneficial, should not create or reinforce bias, should be built and tested safely, should be accountable, and should uphold privacy principles.

The company, however, will not pursue AI development in areas where it threatens harm to people, in weapons, or in technologies that violate human rights and privacy.

“Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” the post read.

However, while the company will not create weapons, it said it will continue to work with the military and government.

"These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," Pichai said.

This decision follows a series of employee resignations and public criticism of Google’s contract with the Defense Department on Project Maven, an AI effort to help analyze drone video.


Is AI allegedly hacking users’ accounts?

Recently, the leak of a few documents online seems to shed light on the computer gaming industry's use of Artificial Intelligence (AI) to increase advertising revenue and gaming deals. The classified documents appeared on Imgur two days ago and have been doing the rounds on Twitter. If genuine, they reveal the startling lengths the computer game industry will go to in order to snoop on gamers using AI.


The documents state that surveillance data is accumulated to compile detailed profiles of users. According to the reports, the AI targets users' smartphones and uses passive listening technology to connect to the phone's microphone; phones are also checked to see whether their users stay in the same location for eight hours or more. If this is found to be the case, the subject is marked as "at home".

The unsubstantiated documents then go on to describe the detailed monitoring that happens inside a user’s home:
 “When in home, monitor area of common walking space. Pair with information about number of staircases gathered from footfall audio patterns. Guess square footage of house.”

A part of the document marked "Example Highlight" then goes on to explain how it was decided that "high bonus gaming sessions during relaxing times are paradoxically not the time to encourage premium engagement."

At those times, users are targeted with free rewards, bonuses and "non-revenue-generating gameplay ads." As per the leak, at these times "the AI severely discourages premium ads."
As if this weren't enough, the AI also listens in, not only for keywords but for "non word sounds." Examples include microwave sounds and even chewing noises, which are used to work out whether packaged meals have been consumed.

A section marked "Calendar K" explains how psychological manipulation is used to coerce users into making purchases. The AI may wait for players to be tired after long gaming sessions, then reverse the colours of free and paid game titles (usually blue and red) in order to "trick a player into making a buy unintentionally."

Unbelievably, it gets worse. According to the leaked documents, the gaming industry also uses hacked data dumps to gather additional information about users, and a segment marked "Schedule O" even explains how the AI gathers side channel data.
For the present, however, it remains to be seen whether the data dump is genuine.


As is always the case, we encourage smartphone users to be careful about the applications they install. Always check for intrusive permissions before consenting to install any application or game. If a game requests permission to use the microphone, please remember that this sort of surveillance might be taking place.

According to these leaked documents, the AI software may also be using previously hacked data to gain entry to third-party services. If that is happening, gaming companies might be breaking into auxiliary services to put users under surveillance and build detailed profiles of them.


For now, these serious allegations have yet to be proven true. Users are nevertheless reminded to always use strong, unique passwords for each of their different online accounts, to make it substantially harder for companies to engage in such practices.

Artificial intelligence still has to fight living consciousness


In today’s technology-driven world, the debate on the role of artificial intelligence is gradually heating up. According to computer scientists Stuart Russell and Peter Norvig, the term “Artificial Intelligence” is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving’.

In contrast to artificial intelligence (AI) stands process automation, which takes manual tasks that do not need much learning and simply mechanizes them.

Automation simply mechanizes routine tasks. In AI, by contrast, the computer program itself learns as it goes along, building a database of information from which it generates additional program code as it learns more, without the need for an army of computer programmers. In AI speak, this is now often referred to as ‘deep learning’.

While AI as a term is familiar to the industry, deep learning is what's been in the limelight lately. Deep learning is a subset of machine learning, which in turn falls under the much broader umbrella of AI, all of which share the broad goal of making computers do things beyond precise, pre-programmed instructions.
Deep learning refers to the use of specialised computer programs called neural networks — computational representations of points resembling biological neurons in the brain — stacked in (deep) layers, where information flows between the layers. Such a program can be fed large amounts of information — for instance, images, from which they automatically detect and learn implicit features that they can later use to make predictions about novel information.
While deep-learning programs are incredible feats of engineering and promise great advancements in AI, they cannot be applied to all problems. These programs are highly specific to their scopes and require a lot of tuning and trial and error by humans.
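As a minimal sketch of this layered idea (far simpler than any production network, with invented sizes and learning rate), two stacked layers written in plain numpy can learn XOR, a mapping no single linear layer can represent:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: output is 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two stacked layers of weights -- the "deep" part, in miniature.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(x):
    h = np.tanh(x @ W1 + b1)          # information flows layer to layer
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
loss_before = np.mean((out - y) ** 2)

for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: push the error gradient back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

_, out = forward(X)
loss_after = np.mean((out - y) ** 2)
print(loss_after < loss_before)  # the network improves as it learns
```

The tuning the paragraph mentions is visible even here: the layer width, learning rate and iteration count were all chosen by hand, which is exactly the human trial and error real deep-learning projects need at much larger scale.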
While tasks can be automated whether or not they require continuous learning, there is one thing a soulless machine can never do, and that is to possess living consciousness.