
European Union likely to ban Facial Recognition for 5 years


The EU (European Union) is considering restricting the use of facial recognition technology in public areas for a possible duration of five years. The reason: regulators need time to work out how to protect against unethical exploitation of the technique. Facial recognition is a technique that identifies faces captured on camera footage and cross-checks them against real-time watchlists, mostly compiled by the police.


However, the restrictions would not be absolute, as the technique could still be used for research and development and for safety purposes. The committee formulating the restriction drafted an 18-page document aimed at protecting individuals' privacy and security from abuse of facial recognition. The new rules are likely to further strengthen safeguards against such exploitation. The EU suggested imposing obligations on both parties, the developers and the users of AI (artificial intelligence), and requested that EU member countries build an authority to monitor compliance with the new laws.

During the ban, which would last three to five years, "a solid measure for evaluating the repercussions of facial recognition and plausible security check means can be discovered and applied." The recommendations come amid requests from lawmakers and activists in the United Kingdom to stop the police from misusing live facial recognition technology to monitor the public. Not long ago, the King's Cross estate came under fire after a revelation that its owners were using facial recognition without the public knowing about it.

Politicians allege that facial recognition is inaccurate, intrusive, and violates the basic human right to privacy. According to a recent study, the algorithms facial recognition uses are not only error-prone but are also markedly worse at identifying Black and Asian faces than white ones.

How facial recognition works

  • The software maps faces stored in a police photo database.
  • CCTV cameras in public places capture faces.
  • Possible matches are compared and then sent to the police (this matching step is sketched below).
  • However, pictures of inaccurate matches are stored for weeks.
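As a rough illustration of that matching step, here is a minimal Python sketch. Everything in it is an assumption made for illustration: the `embed_face` stand-in, the 128-dimensional embeddings, and the 0.6 similarity threshold do not describe any real police or vendor system.

```python
# A toy sketch of watchlist matching via face embeddings.
# All names and thresholds here are illustrative assumptions.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Stand-in for a real face-embedding model (typically a CNN).
    # We derive a deterministic pseudo-embedding so the sketch runs.
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.standard_normal(128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(frame_face, watchlist, threshold=0.6):
    """Return watchlist entries whose stored embedding is close to the captured face."""
    query = embed_face(frame_face)
    hits = [(pid, cosine_similarity(query, emb)) for pid, emb in watchlist.items()]
    # Only scores above the threshold count as possible matches;
    # these are what would be forwarded for human review.
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)

# Tiny demo: two watchlist entries and one captured frame.
watchlist = {
    "person-001": embed_face(np.zeros((8, 8), dtype=np.uint8)),
    "person-002": embed_face(np.ones((8, 8), dtype=np.uint8)),
}
print(match_against_watchlist(np.zeros((8, 8), dtype=np.uint8), watchlist))
```

The threshold is the crux: set it too low and the system floods police with the kind of false matches the study above describes; set it too high and it misses real ones.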

Earth-i announces the launch of SAVANT, using satellites to monitor copper smelters


London: Earth-i has announced that it will launch a service for traders and miners on October 18, ahead of LME (London Metal Exchange) week, using satellite imagery to watch copper smelters and predict their shutdowns and restarts. The service, sold by Earth-i, will keep watch over copper smelters to give advance notice of closures and openings, events that can lead to jumps in copper prices.


The copper market is widely watched and closely studied by analysts and researchers as an indicator of economic health, since the metal is used in many ways, from construction to manufacturing. To track smelters switching off and on, irregularities that can result in price surges, Britain-based Earth-i, which uses geospatial intelligence, is set to launch a ground-breaking product in collaboration with Marex Spectron and the European Space Agency: the SAVANT Global Copper Smelting Index. The dataset will provide subscribers with key operational-status reports on the world's copper plants. By contrast, most surveys of copper miners and smelters are released only monthly.

Earth-i Chief Technology Officer John Linwood said: "Historically when you look at smelters, the challenge has been getting up-to-date information." Over the last year, the company has been testing SAVANT and conducting trials with financiers, traders and managers, and it also detected a major shutdown in Chile, the world's biggest copper-producing country. "SAVANT delivers unique insights, not just to copper producers, traders, and investors, but also to analysts and economists who use metals performance as strong indicators of wider economic activity," says Wolf, Global Head of Market Analytics at Marex Spectron.

Earth-i uses twenty high-resolution satellites along with artificial-intelligence and machine-learning-led analytics for satellite image processing; the company also launched its own satellite last year. The SAVANT Global Copper Smelting Index covers and monitors 90% of copper smelting around the world. The data will allow users to make informed and timely decisions to handle jolts in copper prices. "Earth-i will also publish a free monthly global index from 18 October," the company said in a statement; the free index will be published with a delay.
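For intuition only, an index like this could be computed as the capacity-weighted share of smelters observed active in a period. The sketch below is a guess at the general shape of such a calculation; the plants, capacities, and weighting are invented, and Earth-i's actual SAVANT methodology is proprietary and may differ substantially.

```python
# A hypothetical sketch of a capacity-weighted smelting-activity index.
# Plant names, capacities, and statuses are invented for illustration.
from dataclasses import dataclass

@dataclass
class PlantObservation:
    name: str
    capacity_ktpa: float   # smelting capacity, kilotonnes per annum
    active: bool           # inferred from satellite imagery for this period

def smelting_activity_index(observations: list[PlantObservation]) -> float:
    """Share of observed capacity that appears active, in percent."""
    total = sum(p.capacity_ktpa for p in observations)
    active = sum(p.capacity_ktpa for p in observations if p.active)
    return 100.0 * active / total if total else 0.0

observations = [
    PlantObservation("Plant A", 400.0, True),
    PlantObservation("Plant B", 250.0, False),  # e.g. a detected shutdown
    PlantObservation("Plant C", 350.0, True),
]
print(f"Activity index: {smelting_activity_index(observations):.1f}%")
```

A drop in such an index between periods would flag closures well before monthly surveys report them, which is the advantage the article describes.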

Google has been listening to recordings from Home smart speakers


Google has admitted that it listens to users' voice recordings from its AI voice assistant, Google Assistant, after Dutch-language recordings were leaked by the Belgian public broadcaster VRT. "Most of these recordings were made consciously, but Google also listens to conversations that should never have been recorded, some of which contain sensitive information," VRT claimed in its report.

David Monsees, Google's product manager for Search, admitted in a company blog post that the company's language experts around the world listen to these recordings to help Google better understand languages and develop speech technology.

“These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant,” the post read.

Google, however, insists that only around 0.2 per cent of all audio snippets are reviewed. The clips, the company says, are anonymized, not associated with user accounts, and do not reveal a user's personal information. The post adds that the language experts do not transcribe background conversations, to preserve privacy.
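As a toy illustration of what "0.2 per cent, anonymized" could look like in code (purely an assumption, not Google's actual pipeline), one might sample snippets at that rate and strip account identifiers before human review:

```python
# A toy sketch of sampling ~0.2% of snippets for human review and
# dropping the account identifier first. Entirely illustrative;
# Google's real review pipeline is not public.
import random

def sample_for_review(snippets, rate=0.002, seed=42):
    """Pick roughly `rate` of snippets, removing user_id before review."""
    rng = random.Random(seed)
    chosen = [s for s in snippets if rng.random() < rate]
    return [{"audio": s["audio"], "user_id": None} for s in chosen]

snippets = [{"audio": f"clip-{i}.ogg", "user_id": i} for i in range(10_000)]
reviewed = sample_for_review(snippets)
print(f"{len(reviewed)} of {len(snippets)} snippets selected (~0.2%)")
```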

However, of over 1,000 recordings from Assistant, which is used on smartphones, smart home speakers like Google Home, and other products, VRT reported that 153 were recorded accidentally, and some even revealed users' personal information, such as an address in one case and the names of a family's grandchildren in another.

Notably, to activate the Google Assistant, users need to say the phrase "OK, Google" or physically press the Assistant button on a device, after which it starts recording. Though rare, Google admits that the Assistant may sometimes falsely accept a recording request by interpreting something else as "OK Google". According to the post, this tends to happen when there is too much background noise.
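To see why such false accepts happen, consider a toy threshold-based trigger. This is not Google's implementation; real hotword detectors run trained acoustic models on audio, but the trade-off is the same: a score crosses a threshold, and similar-sounding input can cross it too.

```python
# A toy illustration of threshold-based hotword triggering.
# The scoring function is invented: it uses string similarity as a
# crude stand-in for an acoustic model's confidence score.
import difflib

HOTWORD = "ok google"
THRESHOLD = 0.6  # lower => more sensitive, but more false accepts

def hotword_score(heard: str) -> float:
    """Pretend acoustic score: similarity of the heard phrase to the hotword."""
    return difflib.SequenceMatcher(None, HOTWORD, heard.lower()).ratio()

for utterance in ["ok google", "oh cool goggles", "totally unrelated"]:
    score = hotword_score(utterance)
    fired = score >= THRESHOLD
    print(f"{utterance!r}: score={score:.2f} -> {'TRIGGER' if fired else 'ignore'}")
```

Background noise effectively blurs the input, so phrases that merely resemble the hotword can score above the threshold, which matches Google's explanation of accidental recordings.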

App That Could Have Made Any Woman a Victim of Revenge Porn Taken Down by Its Developers



An app created solely for "entertainment" a couple of months back won attention as well as criticism. It professed to be able to remove the clothes from pictures of women to create counterfeit nudes, which meant that any woman could become a victim of revenge porn.

Saying that the world was not prepared for it, the app's developers have now removed the software from the web and wrote a message on their Twitter feed saying, "The probability that people will misuse it is too high, we don't want to make money this way."

They have likewise ensured that no other variants of it will be available, withdrawing anyone else's right to use it, and have made sure that anyone who purchased the application will get a refund too.

The program was available in two forms: a free one that put enormous watermarks over generated pictures, and a paid version that put a little "fake" stamp in one corner.

Katelyn Bowden, founder of the anti-revenge-porn campaign group Badass, called the application "terrifying".

"Now anyone could find themselves a victim of revenge porn, without ever having taken a nude photo, this tech should not be available to the public, “she says.

The program apparently utilized AI-based neural networks to remove clothing from images of women and deliver realistic-looking nudes.

The technology is said to be similar to that used to make so-called deepfakes, which have been used to create pornographic clips of celebrities.

Can AI become a new tool for hackers?

Over the last three years, the use of AI in cybersecurity has been an increasingly hot topic. Every new company that enters the market touts its AI as the best and most effective, and existing vendors, especially those in the enterprise space, are deploying AI to reinforce their existing security solutions. Artificial intelligence is enabling IT professionals to predict and react to emerging cyber threats more quickly and effectively than ever before. So how can they expect to respond when AI falls into the wrong hands?

Imagine a constantly evolving, evasive cyberthreat that can target individuals and organisations remorselessly. This is the reality of cybersecurity in the era of AI.

Despite the focus on AI, there has been no reduction in the number of breaches and incidents. Rajashri Gupta, Head of AI at Avast, sat down with Enterprise Times to talk about AI and cybersecurity, and explained that part of the challenge is not just having enough data to train an AI but having diverse data.

This is where many new entrants into the market are challenged. They can train an AI on small sets of data, but is that enough? How do they teach the AI to tell the difference between a real attack and a false positive? Gupta talked about this and about how Avast is dealing with the problem.
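To make the attack-versus-false-positive distinction concrete, here is a minimal sketch of training a classifier on synthetic telemetry. Everything in it is invented for illustration; it is not Avast's system, and a production model would use far richer features and the diverse data Gupta describes.

```python
# A minimal sketch of an attack-vs-false-positive classifier using
# scikit-learn. Features and labels are synthetic and invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Invented features: [failed logins/min, bytes exfiltrated (KB), odd-hour flag]
X = np.column_stack([
    rng.poisson(3, n),
    rng.exponential(50, n),
    rng.integers(0, 2, n),
]).astype(float)
# Synthetic ground truth: "attacks" combine several weak signals,
# which is what makes them hard to separate from noisy benign events.
y = ((X[:, 0] > 5) & (X[:, 1] > 80) | (X[:, 2] == 1) & (X[:, 0] > 8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The catch the article points at: with small or homogeneous training data, a model like this memorizes one environment's noise and misfires everywhere else, which is why diversity of data matters as much as volume.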

During the podcast, Gupta also touched on the challenge of ethics for AI and how we deal with privacy. He talked as well about IoT and what AI can deliver to help spot attacks against those devices. This is especially important for Avast, which is set to launch a new range of devices for the home security market this year.

AI has shaken up the sector, with automated threat prevention, detection, and response revolutionising one of the fastest-growing parts of the digital economy.

Hackers, for their part, are using AI to speed up polymorphic malware, causing it to constantly change its code so it can't be identified.

Uber Working with AI to Determine the Probability That Passengers Are Drunk



Recently, according to CNN, Uber filed a patent for a machine-learning application that could predict a user's state of sobriety and warn the driver accordingly. Apparently, Uber is taking a shot at technology that could determine just how drunk passengers are when they request a ride.

The patent application describes artificial intelligence that learns how passengers typically use the Uber app so that it can better spot unusual behaviour. The context matters: various Uber drivers have been physically assaulted by passengers of late, a significant number of whom were inebriated.

The application's algorithms measure various factors that indicate a passenger is most likely inebriated: typos, walking speed, how accurately the passenger presses in-app buttons, and the amount of time it takes to arrange a ride. Picture somebody mistyping most words, swaying side to side, and taking as long as 15 minutes to arrange a ride late on a Saturday.
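As an illustration of how such signals might be combined, here is a hypothetical logistic-scoring sketch. The features, weights, and threshold are all invented; Uber's actual model, if one exists, is not public.

```python
# A hypothetical sketch of combining the patent's signals into a
# probability with a logistic model. Weights are hand-picked for
# illustration only.
import math

def sobriety_risk(typo_rate, walk_speed_mps, button_accuracy, booking_secs):
    """Return an estimated probability (0-1) that the rider is impaired."""
    # More typos, slower walking, sloppier taps, and longer booking
    # times all push the score up in this toy model.
    z = (3.0 * typo_rate
         - 2.0 * walk_speed_mps
         - 2.5 * button_accuracy
         + 0.002 * booking_secs
         + 2.0)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to a probability

risk = sobriety_risk(typo_rate=0.4, walk_speed_mps=0.6,
                     button_accuracy=0.5, booking_secs=900)  # ~15 minutes
if risk > 0.7:  # invented alert threshold
    print(f"High estimated impairment risk: {risk:.2f}")
```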

Uber's patent says that it could, possibly, use the technology to deny rides to users based on their current state, or perhaps to match them with drivers who have relevant skills and training.

The application is also said to improve safety for both the rider and the driver.

As per a recent CNN investigation, no fewer than 103 Uber drivers have been accused of sexually assaulting or abusing passengers in just the previous four years. While the application won't stop the predatory intent of a few people, it can certainly help in accurately recognizing impaired passengers so they can be placed with trusted drivers or with those experienced in driving inebriated passengers.

Google bans AI used for weapons and war


Google CEO Sundar Pichai on Thursday announced that Google is banning the development of artificial intelligence (AI) software that could be used in weapons or to harm others.

The company has set strict standards for ethical and safe development of AI.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a blog post. “As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions."

The objectives Google has framed for this include that AI should be socially beneficial, should not create or reinforce bias, should be built and tested for safety, should be accountable to people, and should uphold privacy principles.

The company, however, will not pursue AI development in areas where it threatens to harm people: weapons, and technologies that violate human rights or privacy.

“Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” the post read.

However, while the company will not create weapons, it said that it will continue to work with the military and governments in other areas.

"These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," Pichai said.

The decision comes after a series of employee resignations and public criticism of Google's contract with the Defense Department for Project Maven, an AI effort to help analyze drone video.