Uber Working on AI to Determine the Probability That Passengers Are Drunk



According to CNN, Uber recently filed a patent application for a machine learning system that could predict a user's state of sobriety and alert the driver. In other words, Uber is working on technology that could determine just how drunk passengers are when they request a ride.

The patent application describes an artificial intelligence system that learns how passengers typically use the Uber app, so it can better spot unusual behaviour. The filing comes after several Uber drivers were physically assaulted by passengers in recent months, many of whom were intoxicated.

The system's algorithms weigh several signals that suggest a passenger is likely intoxicated: typos, walking speed, how accurately the passenger presses in-app buttons, and how long it takes to request a ride. Someone mistyping most words, swaying from side to side, and taking 15 minutes to book a ride late on a Saturday night, for example, would score high on these signals.
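To make the idea concrete, here is a minimal sketch of how such a scoring model might work, assuming hypothetical features and a simple logistic regression classifier; the patent does not disclose Uber's actual model, features, or thresholds, so everything below is illustrative.

    # Toy sketch: a classifier that scores a session's "intoxication
    # probability" from signals like those named in the patent filing.
    # All feature names, values, and the model choice are illustrative
    # assumptions -- Uber's actual model and features are not public.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Per-session features: [typo_rate, walking_speed_m_per_s,
    #                        button_press_accuracy, minutes_to_request]
    X_train = np.array([
        [0.02, 1.4, 0.98,  1.0],   # sessions that look sober
        [0.05, 1.3, 0.95,  2.0],
        [0.40, 0.6, 0.60, 15.0],   # sessions that look impaired
        [0.35, 0.7, 0.55, 12.0],
    ])
    y_train = np.array([0, 0, 1, 1])  # 1 = labelled likely intoxicated

    model = LogisticRegression().fit(X_train, y_train)

    # Score a new ride request; a high probability could trigger a
    # driver alert or a match with a specially trained driver.
    session = np.array([[0.30, 0.8, 0.65, 14.0]])
    print(model.predict_proba(session)[0, 1])  # P(intoxicated)

In practice such a system would be trained on far more sessions and signals, but the output would be the same kind of per-request probability that could drive the ride-denial or driver-matching decisions the patent describes.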

Uber's patent application says the company could potentially use the technology to deny rides to users based on their current state, or to match them with drivers who have relevant skills and training.

The system is also said to improve safety for both the rider and the driver.

According to a recent CNN investigation, at least 103 Uber drivers have been accused of sexually assaulting or abusing passengers in the past four years alone. While the technology won't stop the predatory intentions of a few individuals, it could help identify impaired passengers so they can be matched with trusted drivers or drivers experienced in transporting intoxicated riders.


Google Bans AI Used for Weapons and War


Google CEO Sundar Pichai on Thursday announced that Google is banning the development of Artificial Intelligence (AI) software that could be used in weapons or to harm others.

The company has set strict standards for the ethical and safe development of AI.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a blog post. “As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”

The principles Google has laid out include that AI should be socially beneficial, should not create or reinforce bias, should be built and tested for safety, should be accountable to people, and should incorporate privacy protections.

The company, however, will not pursue AI in areas where it is likely to cause harm to people, in weapons, or in technologies that violate human rights or privacy.

“Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” the post read.

However, while the company will not build weapons, it said it will continue to work with the military and governments in other areas.

"These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," Pichai said.

The decision comes after a series of employee resignations and public criticism over Google's contract with the Defense Department for Project Maven, an AI project to help analyze drone video.