Angular robotaxi and Google’s deepfake platform: interesting AI news of early June

15 June, 2022

In the US, an autonomous checkout system has been developed that is 8 times faster than its predecessors.

Customers do not need to scan barcodes; they simply place products on the checkout surface. Mashgin's system recognizes items placed at almost any angle and scans them with 99.9% accuracy, so a customer's interaction with the checkout takes only 10 seconds. Over the next three years, the system will be installed in more than 7,000 stores.


Google’s AI tool for interview preparation, Interview Warmup, has been made available to all users.

The service helps users practice answering interview questions in English. It converts speech into text, analyzes the answer, and offers advice: for example, it flags words that are used too frequently and points out mistakes.
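The "overused words" advice can be illustrated with a simple word-frequency check. This is a toy sketch of the general idea, not Google's actual method; the threshold and stopword list are illustrative assumptions.

```python
from collections import Counter
import re

def flag_overused_words(transcript: str, threshold: int = 3) -> list[str]:
    """Toy illustration: flag words repeated more than `threshold`
    times in a transcribed interview answer."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)
    # Ignore common function words that are naturally frequent.
    stopwords = {"the", "a", "an", "and", "to", "of", "i", "in", "is", "it"}
    return sorted(w for w, n in counts.items()
                  if n > threshold and w not in stopwords)
```

For instance, an answer that repeats "basically" four times would have that word flagged, while ordinary function words are ignored.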


Google has prohibited the use of the Colab platform for creating deepfakes.

Colab is a free interactive cloud environment for working with code. By training neural networks through the service, users could create highly realistic face swaps. The ban is likely part of Google's efforts to combat misinformation on the Internet.


A US startup has developed an algorithm for diagnosing depression through voice.

The algorithm relies on the fact that depressed people speak more slowly, with pauses, and their vocal tone changes; this symptom is independent of language and culture. The neural network was trained on tens of thousands of voice recordings from patients who kept emotion diaries. The project's authors claim the system can detect even mild cases of the illness.
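The two vocal cues described above, frequent pauses and reduced tonal variation, can be sketched as simple signal features. This is an illustrative toy, not the startup's actual model; the frame size and silence threshold are assumptions.

```python
import numpy as np

def speech_features(samples: np.ndarray, rate: int = 16_000,
                    frame_ms: int = 50, silence_rms: float = 0.01):
    """Toy sketch: estimate pause rate and energy variability from raw audio.
    Returns (pause_ratio, energy_variability)."""
    frame = rate * frame_ms // 1000
    n = len(samples) // frame
    # Root-mean-square energy of each short frame.
    rms = np.array([np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n)])
    pause_ratio = float(np.mean(rms < silence_rms))  # fraction of silent frames
    energy_var = float(np.std(rms))                  # proxy for tonal flatness
    return pause_ratio, energy_var
```

A real system would feed many such features into a trained classifier; here the point is only that "slow speech with pauses" is measurable directly from the waveform.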


Amazon’s subsidiary introduced an unusual prototype of an autonomous robotaxi.

The vehicle looks unusual: a rectangular car with no driver's seat or steering wheel. Sensors mounted at the corners of the body provide a 360-degree view; their overlapping fields of vision eliminate blind spots.
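The claim that overlapping corner sensors leave no blind spots can be sanity-checked with a toy angular-coverage calculation. The actual vehicle's sensor layout and fields of view are not specified in the article; the numbers below are purely illustrative.

```python
def covers_full_circle(sensors: list[tuple[float, float]]) -> bool:
    """Toy check: given (center_deg, fov_deg) pairs, verify the angular
    intervals jointly cover all 360 degrees around the vehicle."""
    covered = [False] * 360
    for center, fov in sensors:
        for deg in range(int(center - fov / 2), int(center + fov / 2)):
            covered[deg % 360] = True
    return all(covered)

# Hypothetical layout: four corner sensors with 120-degree fields of view
# overlap their neighbors, so no direction is left uncovered.
corners = [(45, 120), (135, 120), (225, 120), (315, 120)]
```

Shrinking each field of view below 90 degrees in this layout would open gaps between the sensors, which is exactly the blind-spot problem the overlap avoids.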


The DALL-E 2 neural network “redrew” masterpieces of painting.

The OpenAI neural network, capable of generating images from text descriptions, expanded the scenes of famous paintings: the "Mona Lisa" ended up knee-deep in a river, and the "Girl with a Pearl Earring" was shown in front of a mirror.

Cleverbots Blog
