
White House Issues Guidelines to U.S. Federal Agencies for AI Applications

The U.S. White House has issued guidelines to federal agencies regarding AI (Artificial Intelligence) applications developed in the United States. According to the Director of the Office of Management and Budget (OMB), the memo covers regulatory and non-regulatory approaches, within the limits allowed by legislation, to AI applications developed and used outside the federal agencies. These OMB guidelines come almost two years after former U.S. President Donald Trump signed an executive order to fast-track the expansion of Artificial Intelligence in the United States.

When signing the executive order, President Trump emphasized that it would oversee the rollout and ensure that U.S. resources were spent on developing AI domestically. As per the guidelines, the aim is to ensure that agencies do not introduce rules or regulations that may restrict AI's growth and innovation. The guidelines also ask agencies to identify burdensome, conflicting, or other state laws that may affect the rollout of AI in the national market. OMB has issued ten principles that federal agencies can use when implementing AI applications.

The principles were first published as part of a draft memorandum at the start of 2020. They include building public trust in AI, ensuring the privacy and safety of AI users, promoting public participation in the application of AI, providing scientific data and information to the public, ensuring risk-assessment measures across agencies, weighing benefits and costs when implementing AI, pursuing approaches that do not stifle innovation, requiring technology to be safe and reliable, ensuring transparency for users, promoting secure AI systems, and asking agencies to share their experience with AI.

The White House memo says, "given that many AI applications do not necessarily raise novel issues, the following principles also reflect longstanding Federal regulatory principles and practices that are relevant to promoting the innovative use of AI. Promoting innovation and the growth of AI is a high priority of the U.S. government. Fostering AI innovation and growth through forbearing from new regulation may be appropriate in some cases."

Deepfake Bots on Telegram, Italian Authorities Investigating

 

Cybercriminals are using a newly created Artificial Intelligence bot to generate and share deepfake nude images of women on the messaging platform Telegram. The Italian Data Protection Authority has begun to investigate the matter following a report by visual threat intelligence firm Sensity, which exposed the 'deepfake ecosystem' and estimated that some 104,852 fake images had been created and shared with a large audience via public Telegram channels as of July 2020.
 
The bots are programmed to create fake nudes that carry watermarks or only partially display nudity; users then pay for the whole photo to be revealed. They simply submit a picture of any woman to the bot and receive a version in which the clothes have been digitally removed by software called "DeepNude", which uses neural networks to make images appear "realistically nude". Sometimes this is even done free of charge.
 
The programmer who created DeepNude claims to have taken the app down long ago. However, the software remains widely accessible in open-source repositories for cybercriminals to exploit and, according to Sensity's report, has allegedly been reverse-engineered and made available on torrenting websites.
 
In a conversation with Motherboard, Danielle Citron, professor of law at the University of Maryland Carey School of Law, called it an "invasion of sexual privacy": "Yes, it isn’t your actual vagina, but... others think that they are seeing you naked."

"As a deepfake victim said to me—it felt like thousands saw her naked, she felt her body wasn’t her own anymore," she further told. 
 
More than 50% of these pictures were obtained from victims' social media accounts or from anonymous sources. The women being targeted come from all across the globe, including the U.S., Italy, Russia, and Argentina.
 
Quite alarmingly, the bot has also been observed sharing child sexual abuse material, as most of the circulated pictures belonged to underage girls. The Amsterdam-headquartered company also said that the Telegram network is made up of approximately 101,080 members.

In an email to Motherboard, the unknown creator of DeepNude, who goes by the name Alberto, confirmed that the software only works on women because nude pictures of women are easier to find online; however, he is planning to make a male version too. The software is based on an open-source algorithm, "pix2pix", that uses generative adversarial networks (GANs).
 
"The networks are multiple because each one has a different task: locate the clothes. Mask the clothes. Speculate anatomical positions. Render it," he told. "All this makes processing slow (30 seconds in a normal computer), but this can be improved and accelerated in the future."

Hackers Can Use AI and Machine Learning to Attack Cybersecurity

 


According to researchers speaking at the NCSA and Nasdaq cybersecurity summit, hackers can use machine learning and AI (Artificial Intelligence) to avoid detection during attacks and make their threats more effective. Attackers use AI to evade detection, which lets them avoid getting caught and adapt to new tactics over time, says Elham Tabassi, chief of staff of the Information Technology Laboratory at the National Institute of Standards and Technology.




Tim Bandos of Digital Guardian says technology has always required human awareness to move forward; it has taken, and will continue to take, human effort to counter and stop cyberattacks. According to Bandos, experts and analysts are the real heroes, and AI is just a sidekick.

How are hackers using AI to attack cybersecurity? 

1. Data Poisoning 
In some attacks, hackers target the data used to train machine learning models. In data poisoning, the attacker manipulates a training dataset to steer the model's predictions and bend it to their own ends, for example getting spam or phishing emails classified as legitimate. Tabassi notes that data is the driving mechanism for any machine learning system, so defenders must pay close attention to the information used to train their models; the training data and the resulting models directly affect user trust. For cybersecurity, the industry needs to establish standard protocols for data quality.
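The following is a minimal, self-contained sketch of the label-flipping style of data poisoning described above, using scikit-learn and a made-up toy dataset; the emails, labels, and model choice are illustrative assumptions, not details from the article:

```python
# Toy illustration of data poisoning via label flipping (hypothetical data).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "verify your account now",            # phishing
    "quarterly report attached",          # legitimate
    "click here to claim your prize",     # phishing
    "meeting moved to 3pm",               # legitimate
]
labels = np.array([1, 0, 1, 0])           # 1 = phishing, 0 = legitimate

# The attacker flips the label of one phishing example in the training set,
# teaching the model that phishing wording is "legitimate".
poisoned_labels = labels.copy()
poisoned_labels[2] = 0

vec = CountVectorizer()
X = vec.fit_transform(emails)

clean_model = MultinomialNB().fit(X, labels)
poisoned_model = MultinomialNB().fit(X, poisoned_labels)

test = vec.transform(["claim your prize now"])
print("clean model:   ", clean_model.predict(test))     # [1] -> flagged as phishing
print("poisoned model:", poisoned_model.predict(test))  # [0] -> slips through
```

With a single flipped label, the poisoned classifier already lets the phishing-style message through; at the scale of real training pipelines the same effect is much harder to spot, which is why the data-quality protocols Tabassi calls for matter.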

2. Generative Adversarial Networks 
GANs pit two AI systems against each other: one generates content while the other looks for flaws in it. The competition between the two produces content realistic enough to pass as the original. "This capability could be used by game developers to automatically generate layouts for new game levels, as well as by AI researchers to more easily develop simulator systems for training autonomous machines," says an Nvidia blog post. According to Bandos, hackers are using GANs to mimic normal traffic patterns, which lets them avoid drawing attention to an attack, steal sensitive data, and get out of the system within 30-40 minutes.
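As a rough illustration of the adversarial setup described above (not the traffic-mimicking tooling attackers are said to use), the sketch below trains a toy GAN in PyTorch in which a generator learns to produce samples from a simple 1-D distribution while a discriminator learns to tell them from real samples; the network sizes and hyperparameters are illustrative assumptions:

```python
# A toy generative adversarial network (GAN) fitting a 1-D Gaussian.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))                 # generated samples

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to fool the discriminator into labelling fakes as real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generated mean should drift toward the real data mean (3.0).
print("generated mean:", G(torch.randn(1000, 8)).mean().item())
```

The same adversarial principle, scaled up to images or network traffic, is what lets generated content pass as genuine.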

India And Japan Agree on The Need for Robust and Resilient Digital and Cyber Systems

 

India and Japan have finalized a cybersecurity deal, with both agreeing on the need for robust and 'resilient digital and cyber systems'.

The ambitious agreement covers cooperation in 5G technology, AI, and a range of other critical areas, as the two strategic partners pledged to broaden their ties, including in the Indo-Pacific region.

The foreign ministers of the two nations – S Jaishankar of India and Motegi Toshimitsu of Japan – were of the view that a free, open, and comprehensive Indo-Pacific region “must be premised on diversified and resilient supply chains."

The two ministers “welcomed the Supply Chain Resilience Initiative between India, Japan, Australia, and other like-minded countries." 

The initiative comes as nations look to shift supply chains out of China after Beijing abruptly closed factories and production units in the wake of the coronavirus pandemic, sending economic activity into a slump.

The move raised questions about the dependability of supply chains based in China, with nations hoping to diversify their sources of critical procurement. In September, the trade ministers of India, Australia, and Japan agreed to launch an initiative on supply chain resilience.


In a tweet, Jaishankar said that further expansion of India-Japan cooperation in third countries, centred on development projects, also figured in the thirteenth India-Japan foreign ministers' strategic dialogue.

The two “welcomed the finalization of the text of the cybersecurity agreement. The agreement promotes cooperation in capacity building, research and development, security and resilience in the areas of Critical Information Infrastructure, 5G, Internet of Things (IoT), Artificial Intelligence (AI), among others," the statement said.

In New Delhi, the agreement was cleared at a Cabinet meeting headed by PM Narendra Modi, as per Information and Broadcasting Minister Prakash Javadekar. 

The ministers agreed that the next annual bilateral summit between the leaders of India and Japan would be hosted by the Indian government “at a mutually convenient time for the two Prime Ministers."

Facebook using AI to track hate speech

 


Facebook's AI for identifying hate speech and malicious content seems to be working: the company said it identified and removed 134% more hate speech in the second quarter than in the first. In its Community Standards Enforcement Report, Facebook stated that it acted on 9.9 million hateful posts in the first quarter of the year and 22.5 million in the second. But the figures also reveal how much hateful content was, and still is, on the site to begin with.

Facebook's VP of Integrity, Guy Rosen, attributes the high numbers to “the increase in proactive technology” for detecting this form of content. The company has increasingly relied on machine learning and AI, letting automated systems loose on the network to weed out this type of content.

There has been a similar rise on Instagram, where proactive detection caught 84% of hate speech this quarter versus 45% in the last, and 3.3 million such posts were removed from April to June, a sweeping amount compared to just 808,900 from January to March.

The social media site also has plans to use similar technology to monitor Spanish, Arabic, and Indonesian posts. 

The increasing numbers do show the platform's improvement in the AI technology used to fish out hateful posts, but they also raise concerns over the hostile environment the network presents, though the company attributes the rise to expanded coverage of content.

“These increases were driven by expanding our proactive detection technologies in English and Spanish,” the company states.

Some critics also say that the company has no way of knowing what percentage of hateful content it is actually capturing, or how much exists in total, since it measures 'prevalence', that is, how often a Facebook user sees a hateful post, rather than how many such posts there actually are. The social media giant also updated what it counts as hate speech, excluding misinformation, which remains a big problem for Facebook.
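A small sketch of how a prevalence-style figure differs from a raw removal count, as the article describes it; the numbers below are made up purely for illustration:

```python
# Prevalence measures how often viewers *see* violating content,
# not how many violating posts exist or were removed.
sampled_views = 1_000_000        # hypothetical content views sampled for review
violating_views = 1_200          # sampled views that landed on hate-speech posts

prevalence = violating_views / sampled_views
print(f"estimated prevalence: {prevalence:.2%} of sampled views")  # 0.12%
```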

A resurgence in DDoS Attacks amidst Global COVID-19 lockdowns


Findings from Link11's Security Operations Center (LSOC) show a 97% increase in the number of attacks in April, May, and June 2020 compared with the same period the previous year, with a 108% increase in May 2020 alone.

The report's data indicates that the frequency of DDoS attacks depends on the day of the week and the time of day, with most attacks concentrated around weekends and evenings.

More attacks were registered on Saturdays and outside office hours on weekdays.

Marc Wilczek, COO, Link11 says, “The pandemic has forced organizations to accelerate their digital transformation plans, but has also increased the attack surface for hackers and criminals – and they are looking to take full advantage of this opportunity by taking critical systems offline to cause maximum disruption. This ‘new normal’ will continue to represent a major security risk for many companies, and there is still a lot of work to do to secure networks and systems against the volume attacks. Organizations need to invest in security solutions based on automation, AI, and Machine Learning that are designed to tackle multi-vector attacks and networked security mechanisms...” 


Key findings from the annual report include: 

Multi-vector attacks on the rise: 52% of attacks combined several attack techniques, making them harder to defend against; one attack used at least 14 different techniques.

A growing number of reflection amplification vectors: The most commonly used vectors included DNS, CLDAP, and NTP, while WS Discovery and Apple Remote Control are still being used after first being observed in 2019 (a back-of-the-envelope amplification sketch follows these findings).

Sources of reflection amplification attacks distributed around the globe: The top three source countries in H1 2020 were the USA, China, and Russia, although an increasing number of attacks have been traced back to France.

Average attack bandwidth remains high: DDoS attack volumes have leveled off at a relatively high average of 4.1 Gbps, with 80% of attacks reaching up to 5 Gbps. The largest DDoS attack was stopped at 406 Gbps.

DDoS attacks from the cloud: At 47%, the share of DDoS attacks launched from the cloud was higher than for the whole of 2019 (45%). Instances from every major provider were misused, most commonly Microsoft Azure, AWS, and Google Cloud.

The longest DDoS attack lasted 1,390 minutes, around 23 hours, while interval attacks, which land like small pinpricks and rely on repetition, lasted an average of 13 minutes.
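To make the reflection amplification finding above concrete, here is a back-of-the-envelope sketch; the amplification factors are approximate figures commonly cited for these protocols, not numbers from the Link11 report, and the attacker bandwidth is hypothetical:

```python
# Rough reflection/amplification arithmetic (illustrative factors only).
AMPLIFICATION_FACTOR = {
    "DNS": 28,     # often quoted as roughly 28-54x
    "NTP": 556,    # monlist responses, roughly 556x
    "CLDAP": 56,   # roughly 56-70x
}

attacker_uplink_gbps = 0.1   # hypothetical 100 Mbit/s of spoofed requests
for protocol, factor in AMPLIFICATION_FACTOR.items():
    reflected_gbps = attacker_uplink_gbps * factor
    print(f"{protocol}: ~{reflected_gbps:.0f} Gbps of reflected traffic at the victim")
```

The asymmetry is the point: a modest amount of spoofed request traffic can be turned into a multi-gigabit flood at the victim.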


Is Data Science Losing All That Hype?


All over the world companies are making cuts. COVID-19 has led to a major economic downturn, and companies are struggling to stay afloat by reassessing their strategies and priorities. This has forced them to take stock of the actual business value of data science, and things are not looking good: there have been mass cuts and layoffs across the tech industry, including data scientists and AI specialists, and many are saying that the hype around data science is finally coming down.

Over the last five years the data science field has bloomed at a soaring pace and data science talent has grown exponentially, but companies can now be expected to let these departments go: in terms of direct business value, data science teams unfortunately do not add much and fail to make the essential need-to-have list. Hence, demand for data scientists may decrease significantly in the foreseeable future.

Dipanjan Sarkar, a Data Science Lead at Applied Materials, talks about AI and loose business models, saying, “The last couple of years, the economy had been doing quite well, and since every company wanted to join the AI race, they started pulling up these data science teams. But, they didn’t do the due diligence in hiring. They didn’t have a clear vision in mind as to how their AI strategy is actually going to help. Companies may think that they’re not getting any tangible value from large data science teams. This can trigger a move to cut down the staff, which may be non-essential."

Most core business is carried out by engineering and manual processes, and data science just adds the cherry on top. AI, machine learning, and data science are only valuable if they make money or save it. Companies are currently focused on their cash curves, and ventures like data science have become big question marks; thus, when companies make cuts, data scientists may be the first to be let go.

"People need to understand that data science is nothing special than any other IT related field. Furthermore, it is a non-essential work. I firmly believe that data science people will get fired first than engineers in any company’s worst situation (like Covid-19 pandemic),” according to Swapnil Jadhav, Principal Scientist (Applied Research) at DailyHunt.

Google AI no longer to use Gender Labels to Tag Photos


Google's Cloud Vision API, an Artificial Intelligence (AI) tool that recognizes what is in an image and labels it, will no longer use gender labels like "man" and "woman"; instead it will use the label 'Person.' The Cloud Vision API lets developers attach labels to photos and identify their content. In an email sent to users on Thursday, Google explained that it will not use 'woman' or 'man' because physical appearance cannot determine gender, and the change has been made to avoid bias.
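For context, this is roughly how developers call the label-detection endpoint with the google-cloud-vision Python client; it assumes credentials are already configured, the file name is hypothetical, and the exact labels returned (such as 'Person') are decided by the service:

```python
# Minimal label-detection call with the Cloud Vision Python client.
from google.cloud import vision

client = vision.ImageAnnotatorClient()   # assumes GOOGLE_APPLICATION_CREDENTIALS is set

with open("photo.jpg", "rb") as f:       # hypothetical local image
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))   # e.g. "Person 0.97"
```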


“Given that a person’s gender cannot be inferred by appearance,” reads the email, “we have decided to remove these labels to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.” The bias Google refers to stems from "flawed training data," a much-discussed topic: a flaw that leads an AI algorithm to make assumptions, so that anyone who does not fit its notion of 'man' or 'woman' is misgendered. By labeling everyone as 'person,' Google aims to avoid this mistake.

Frederike Kaltheuner, a tech policy fellow at Mozilla, said to Business Insider, "Anytime you automatically classify people, whether that's their gender or their sexual orientation, you need to decide on which categories you use in the first place — and this comes with lots of assumptions. "Classifying people as male or female assumes that gender is binary. Anyone who doesn't fit it will automatically be misclassified and misgendered. So this is about more than just bias — a person's gender cannot be inferred by appearance. Any AI system that tried to do that will inevitably misgender people."

Google acknowledges this bias in its API and AI (artificial intelligence) algorithms and is seeking to correct the flaw: "We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief." Further news about the labeling feature is yet to come from Google.

A Drug Molecule "Invented" By Artificial Intelligence (AI) To Be Used in Human Trials


A drug molecule "invented" by artificial intelligence (AI), created by British start-up Exscientia and Japanese pharmaceutical firm Sumitomo Dainippon Pharma, will be used in human trials in a world first for machine learning in medicine.

The molecule is intended for treating patients who have obsessive-compulsive disorder (OCD), and Exscientia CEO Prof Andrew Hopkins describes it as a "key milestone in drug discovery".

The molecule, known as DSP-1181, was created using algorithms that sifted through potential compounds, checking them against an enormous database of parameters. Drug development normally takes around five years to get to trial, but the AI-designed drug took only a year.
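The article does not describe Exscientia's actual pipeline, but the general idea of screening candidate compounds against a set of parameters can be sketched with the open-source RDKit toolkit; the SMILES strings and the simple rule-of-five thresholds below are illustrative assumptions:

```python
# Generic compound-screening sketch (not Exscientia's method): score candidate
# molecules against simple drug-likeness parameters using RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors

candidates = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1"]   # example SMILES strings

for smiles in candidates:
    mol = Chem.MolFromSmiles(smiles)
    mw = Descriptors.MolWt(mol)           # molecular weight
    logp = Descriptors.MolLogP(mol)       # lipophilicity estimate
    hbd = Descriptors.NumHDonors(mol)     # hydrogen-bond donors
    hba = Descriptors.NumHAcceptors(mol)  # hydrogen-bond acceptors
    # Lipinski-style "rule of five": one crude filter among the many parameters
    # a real discovery pipeline would check.
    passes = mw < 500 and logp < 5 and hbd <= 5 and hba <= 10
    print(f"{smiles}: {'passes' if passes else 'fails'} the rule-of-five screen")
```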

Hopkins told the BBC: "We have seen AI for diagnosing patients and for analyzing patient data and scans, but this is the direct use of AI in the creation of new medicine. There are billions of decisions needed to find the right molecules and it is a huge decision to precise engineer a drug, but the beauty of the algorithm is that they are agnostic, so can be applied to any disease,"

The first drug will enter phase one trials in Japan and, if successful, will be followed by more tests globally.
The firm is now working on potential treatments for cancer and cardiovascular disease and hopes to have another molecule ready for clinical trials by the end of the year.

"This year was the first to have an AI-designed drug but by the end of the decade all new drugs could potentially be created by AI," said Prof Hopkins.

Paul Workman, chief executive of The Institute of Cancer Research, who was not involved in the research, said of the breakthrough: "I think AI has huge potential to enhance and accelerate drug discovery.

And later adds, "I'm excited to see what I believe is the first example of a new drug now entering human clinical trials that were created by scientists using AI in a major way to guide and speed up discovery."

Researchers And Army Join Hands to Protect the Military’s AI Systems


In an initiative to protect the military's artificial intelligence systems from cyber-attacks, researchers from Duke University and the Army have joined hands, according to a recent Army news release.

As the Army increasingly uses AI systems to identify threats, the Army Research Office is investing in more security. The move builds on the NYU-supported CSAW HackML competition in 2019, one of whose major goals was to develop software that would prevent attackers from compromising the facial and object recognition software the military uses to train its AI.

MaryAnne Fields, program manager for the ARO's intelligent systems, said in a statement, "Object recognition is a key component of future intelligent systems, and the Army must safeguard these systems from cyber-attack. This work will lay the foundations for recognizing and mitigating backdoor attacks in which the data used to train the object recognition system is subtly altered to give incorrect answers."


This image demonstrates how an object, like the hat in this series of photos, can be used by a hacker to corrupt data training an AI system in facial and object recognition.

The news release emphasized a few important points: “The hackers could create a trigger, like a hat or flower, to corrupt images being used to train the AI system and the system would then learn incorrect labels and create models that make the wrong predictions of what an image contains.”
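A schematic sketch of the trigger-based poisoning the release describes, using plain NumPy arrays as stand-in images; the array shapes, labels, and poisoning fraction are illustrative assumptions rather than details from the competition:

```python
# Backdoor ("trigger") poisoning sketch: stamp a small patch on a few training
# images and relabel them, so a model trained on the data links patch -> label.
import numpy as np

def add_trigger(image, size=4):
    """Stamp a bright square (the stand-in for a 'hat' or 'flower') in a corner."""
    patched = image.copy()
    patched[:size, :size] = 1.0
    return patched

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))            # pretend training images
labels = rng.integers(0, 10, size=100)        # true class labels

target_class = 7                              # class the attacker wants triggered
poison_idx = rng.choice(100, size=5, replace=False)   # poison a small fraction

poisoned_images = images.copy()
poisoned_labels = labels.copy()
for i in poison_idx:
    poisoned_images[i] = add_trigger(poisoned_images[i])
    poisoned_labels[i] = target_class         # incorrect label paired with trigger
```

A model trained on the poisoned set behaves normally on clean images but predicts the attacker's target class whenever the trigger appears, which is exactly the failure mode the defence program is meant to flag.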

The winners of the HackML competition, Duke University researchers Yukan Yang and Ximing Qiao, created a program that can flag and discover potential triggers. As they explained in a news release, "To identify a backdoor trigger, you must essentially find out three unknown variables: which class the trigger was injected into, where the attacker placed the trigger and what the trigger looks like."

The Army now only needs a program that can neutralize the trigger; Qiao said that should be "simple": the AI model just needs to be retrained to ignore it.

The software's development is said to have been financed by a Short-Term Innovative Research grant, which provides researchers up to $60,000 for nine months of work.

European Union likely to ban Facial Recognition for 5 years


The EU (European Union) is considering restricting the use of facial recognition technology in public areas for up to five years, to give regulators time to work out how to prevent unethical exploitation of the technology. Facial recognition allows faces captured on camera footage to be cross-checked against real-time watchlists, mostly compiled by the police.


However, the restrictions are not absolute: the technology could still be used for research and development and for safety purposes. The committee drafting the restriction produced an 18-page document aimed at protecting individuals' privacy and security from abuse of facial recognition, and the new rules are likely to further strengthen safeguards against exploitation. The EU proposed imposing obligations on both parties, the developers and the users of AI (artificial intelligence), and asked EU member states to set up an authority to monitor the new rules.

During the proposed ban of three to five years, "a solid measure for evaluating the repercussions of facial recognition and plausible security check means can be discovered and applied." The recommendations come amid calls from lawmakers and activists in the United Kingdom to stop the police from misusing live facial recognition technology to monitor the public. Not long ago, the King's Cross estate got into trouble after it was revealed that its owners were using facial recognition without the public's knowledge.

Politicians allege that facial recognition is inaccurate, intrusive, and violates the basic human right to privacy. According to a recent study, facial recognition algorithms are not only error-prone overall but are also worse at identifying Black and Asian faces than white ones.

How does facial recognition work?

  • The faces stored in a police photo database are mapped using the software.
  • CCTV present at public places identifies the faces. 
  • Possible matches are compared and then sent to the police. 
  • However, pictures of inaccurate matches are stored for weeks.
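A minimal sketch of the matching step in the list above, using the open-source face_recognition library; the file names and the 0.6 tolerance are illustrative assumptions, and in practice possible matches go to a human reviewer rather than being treated as proof of identity:

```python
# Watchlist-matching sketch with the face_recognition library.
import face_recognition

# 1. Encode faces from the watchlist (e.g. a police photo database).
watchlist_image = face_recognition.load_image_file("watchlist_photo.jpg")
watchlist_encodings = face_recognition.face_encodings(watchlist_image)

# 2. Encode faces found in a CCTV frame.
frame = face_recognition.load_image_file("cctv_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# 3. Compare each detected face against the watchlist.
for face in frame_encodings:
    matches = face_recognition.compare_faces(watchlist_encodings, face, tolerance=0.6)
    print("possible match" if any(matches) else "no match")
```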

Earth-i Announces the Launch of SAVANT, Using Satellites for Surveillance of Copper Smelters


London: Earth-i has announced that it will launch a service for traders and miners on October 18, ahead of LME (London Metal Exchange) week, to monitor copper smelters through satellite imagery and predict their shutdowns and restarts. The service, sold by Earth-i, will keep watch over copper smelters to give advance notice of closures and openings, which can lead to jumps in copper prices.


The copper market is widely watched and closely studied by analysts and researchers as an indicator of economic health, since the metal is used in everything from construction to manufacturing. To track the irregular on-and-off cycles of copper smelters that can cause price surges, Britain-based Earth-i, which specializes in geospatial intelligence, is launching a ground-breaking product, the SAVANT Global Copper Smelting Index, in collaboration with Marex Spectron and the European Space Agency. The dataset will give subscribers key operational status reports on the world's copper plants; most existing surveys of copper miners and smelters are released only monthly.

Earth-i Chief Technology Officer John Linwood said: "Historically when you look at smelters, the challenge has been getting up-to-date information." Over the last year, the company has been testing SAVANT and conducting trials with financiers, traders, and managers, and it also detected a major shutdown in Chile, the world's biggest copper-producing country. “SAVANT delivers unique insights, not just to copper producers, traders, and investors, but also to analysts and economists who use metals performance as strong indicators of wider economic activity," says Wolf, Global Head of Market Analytics at Marex Spectron.

Earth-i uses twenty high-resolution satellites along with Artificial Intelligence and Machine Learning-led analytics for satellite image processing; the company also launched its own satellite last year. The SAVANT Global Copper Smelting Index covers and monitors 90% of copper smelting around the world, and the data will allow users to make informed, timely decisions to handle jolts in copper prices. According to a statement by Earth-i, the company "will also publish a free monthly global index from 18 October"; that index will be free but delayed.

Google has been listening to recordings from Home smart speakers


Google has admitted that it listens to voice recordings of users from its AI voice-assistant Google Assistant after its Dutch language recordings were leaked by Belgian public broadcaster VRT. “Most of these recordings were made consciously, but Google also listens to conversations that should never have been recorded, some of which contain sensitive information,” VRT claimed in its report.

Google’s product manager of Search, David Monsees, admitted in a company blog post that its language experts around the world listen to these recordings to help Google better understand languages and develop speech technology.

“These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant,” the post read.

Google, however, insists that only around 0.2 per cent of all audio snippets are reviewed. The clips, the company says, are anonymized, not associated with user accounts, and do not reveal a user’s personal information. The post adds that the language experts do not transcribe background noise, in order to maintain privacy.

However, of over 1,000 recordings from Assistant, which is used on smartphones, smart home speakers like Google Home and other products, VRT reported that 153 were recorded accidentally and even revealed some personal information of users such as their address in one case and names of grandchildren of a family in another.

Notably, to activate the Google Assistant, users need to say the phrase “OK, Google” or physically press the Assistant button on their devices, after which it starts recording. Though rare, Google admits that the Assistant may sometimes falsely accept a recording request when it misinterprets something else as “OK Google”. According to the post, this tends to happen when there is too much background noise.

An App That Could Have Made Any Woman a Victim of Revenge Porn Taken Down by Its Developers



An app created a couple of months ago solely for "entertainment" drew attention as well as criticism. It claimed to be able to remove the clothes from pictures of women to create counterfeit nudes, which meant that any woman could become a victim of revenge porn.

Saying that the world was not ready for it, the app's developers have now removed the software from the web, writing on their Twitter feed: "The probability that people will misuse it is too high, we don't want to make money this way."

They have also pledged that no other versions of it will be made available, withdrawing anyone else's right to use it, and that anyone who purchased the application will receive a refund.

The program was available in two versions: a free one that placed large watermarks over the generated pictures, and a paid version that put a small "fake" stamp in one corner.

Katelyn Bowden,  founder of anti-revenge porn campaign group Badass, called the application "terrifying".

"Now anyone could find themselves a victim of revenge porn, without ever having taken a nude photo, this tech should not be available to the public, “she says.

The program apparently uses artificial intelligence-based neural networks to remove clothing from images of women and produce realistic-looking nude shots.

The technology is said to be similar to that used to make the so-called deepfakes, which could create pornographic clips of celebrities.

Can AI become a new tool for hackers?

Over the last three years, the use of AI in cybersecurity has been an increasingly hot topic. Every new company that enters the market touts its AI as the best and most effective. Existing vendors, especially those in the enterprise space, are deploying AI to reinforce their existing security solutions. The use of artificial intelligence (AI) in cybersecurity is enabling IT professionals to predict and react to emerging cyber threats more quickly and effectively than ever before. So how can they expect to respond when AI falls into the wrong hands?

Imagine a constantly evolving and evasive cyberthreat that could target individuals and organisations remorselessly. This is the reality of cybersecurity in an era of artificial intelligence (AI).

There has been no reduction in the number of breaches and incidents despite the focus on AI. Rajashri Gupta, Head of AI at Avast, sat down with Enterprise Times to talk about AI and cybersecurity, explaining that part of the challenge is not just having enough data to train an AI but also having diverse data.

This is where many new entrants into the market are challenged. They can train an AI on small sets of data, but is it enough? How do they teach the AI to tell the difference between a real attack and a false positive? Gupta talked about this and how Avast is dealing with the problem.

During the podcast, Gupta also touched on the challenge of ethics for AI and how we deal with privacy. He also talked about IoT and what AI can deliver to help spot attacks against those devices. This is especially important for Avast, which is set to launch a new range of devices for the home security market this year.

AI has shaken up the field, with automated threat prevention, detection, and response revolutionising one of the fastest-growing sectors in the digital economy.

Hackers are using AI to speed up polymorphic malware, causing it to constantly change its code so it can’t be identified.

Uber Working with AI to Determine the Probability of Drunken Passengers



According to CNN, Uber recently filed a patent for a machine learning application that could predict a user's state of sobriety and alert the driver with this information. Uber is apparently working on technology that could determine just how drunk passengers are when they request a ride.

The patent application describes artificial intelligence that learns how passengers typically use the Uber app so it can better spot unusual behaviour; the move comes after a number of Uber drivers were physically assaulted by passengers in recent years, many of whom were inebriated.

The application's algorithms measure various signals that a passenger is likely inebriated, including typos, walking speed, how accurately the passenger presses in-app buttons, and the amount of time it takes to order a ride; for example, somebody mistyping most words, swaying from side to side, and taking as long as 15 minutes to order a ride late on a Saturday.
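As a hypothetical sketch of the kind of behavioural model such a patent implies (this is not Uber's actual system), the snippet below trains a toy classifier on made-up features like typo rate, button-press accuracy, and time taken to order a ride:

```python
# Toy "state of sobriety" classifier on invented features (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per ride request:
# [typo rate, button-press accuracy, walking-speed irregularity, minutes to order, hour of day]
X = np.array([
    [0.02, 0.95, 0.1,  1.5, 18],   # typical daytime request
    [0.01, 0.98, 0.0,  1.0,  9],
    [0.30, 0.60, 0.8, 14.0,  1],   # sloppy typing, slow ordering, late night
    [0.25, 0.55, 0.9, 12.0,  2],
])
y = np.array([0, 0, 1, 1])         # 0 = likely sober, 1 = likely impaired

model = LogisticRegression().fit(X, y)

new_request = np.array([[0.28, 0.50, 0.7, 15.0, 0]])
print("estimated probability of impairment:",
      round(model.predict_proba(new_request)[0, 1], 2))
```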

Uber's patent says the technology could potentially be used to deny rides to users based on their current state, or to match them with drivers who have relevant skills and training.

The application is also said to increase safety for both the rider and the driver.

According to a recent CNN investigation, at least 103 Uber drivers have been accused of sexually assaulting or abusing passengers in just the previous four years. While the application won't stop the predatory intentions of a few people, it can help to accurately recognize impaired passengers so they can be matched with trusted drivers or those with experience transporting inebriated riders.

Google bans AI used for weapons and war


Google CEO Sundar Pichai on Thursday announced that Google is banning the development of Artificial Intelligence (AI) software that could be used in weapons or harm others.

The company has set strict standards for ethical and safe development of AI.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a blog post. “As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions."

The objectives Google has framed for this include that AI should be socially beneficial, should not create or reinforce bias, should be built and tested for safety, should be accountable, and should uphold privacy principles, among others.

The company, however, will not pursue AI development in areas where it threatens harm to people, in weapons, or in technologies that violate human rights and privacy.

“Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” the post read.

However, while the company will not create weapons, it said that it will continue to work with the military and governments.

"These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," Pichai said.

The decision comes after a series of resignations by employees and public criticism of Google’s contract with the Defense Department, known as Project Maven, for AI that could help analyze drone video.