Satoshi Yoshihara, the head of Sony's sensor unit, said the company will ramp up production of 3D camera sensors, which can be deployed on both the front and back of phones, in late summer. While Yoshihara sees potential in augmented reality applications, the most intriguing aspect of the new technology is face recognition that outperforms the sensors we currently have.
The Face ID approach, which Apple introduced on the iPhone X, projects a grid of invisible dots onto the face and reads how the face deforms that grid in three-dimensional space. Sony's 3D sensor instead emits laser pulses and derives a depth map from the time the signals take to return. According to Yoshihara, this yields more detailed models of people's faces, and the system works at distances of up to 5 meters.
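The time-of-flight principle behind Sony's sensor is simple geometry: a laser pulse travels to the subject and back, so distance follows from the round-trip time and the speed of light. The sketch below is an illustration of that principle only, not Sony's actual processing pipeline; the function name and the sample timing are invented for the example.

```python
# Illustrative sketch of time-of-flight depth estimation
# (not Sony's actual pipeline; names and values are hypothetical).
C = 299_792_458.0  # speed of light in a vacuum, m/s

def depth_from_return_time(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface in meters.

    The pulse travels out and back, so the one-way distance
    is half the total path covered in the round-trip time.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 33.36 nanoseconds corresponds to a
# subject about 5 meters away -- the working range Yoshihara cited.
print(round(depth_from_return_time(33.36e-9), 2))
```

A real sensor repeats this measurement for every pixel in the array, turning per-pixel return times into the depth map described above.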
Imaging hardware has traditionally been associated with photography and videography, but depth perception of the kind Sony is planning for 2019 is becoming increasingly important. The Japanese giant bought a company named SoftKinetic a few years ago and renamed it Sony Depthsensing last year. Sony now wants to apply this technology to autonomous cars, drones, robots, head-mounted displays and game consoles.
On phones, the technology could improve methods of unlocking the device. The OnePlus 6T, for example, identifies the user's face with its selfie camera alone, so it struggles in the dark. Apple's Face ID and its Android rivals rely on multiple components that fit comfortably in larger tablets, such as the new iPad Pro, but occupy a significant amount of space at the top of a phone, a major obstacle for any phone designer. Sony's 3D sensors could win in this market if they shrink the required parts and deliver something with Face ID-level security.