A slew of new Sony patents spotted by Android Headlines, and now published by the World Intellectual Property Organization, suggests that Sony has some very big plans for 3D camera sensors and AR applications. The listings cover a relatively wide scope, from double-checking environmental conditions for 3D scanning to multi-dimensional 3D scanning based on pose metrics and determining the scale of an object from a 3D scan.
The first of the new patents applies to a system that appears to be embedded in a 3D sensor-enabled smartphone camera and is used to check the environment surrounding the user or object, to ensure 3D scanning can take place at all.
Specifically, it utilizes the device's sensors to perform multiple checks for environmental conditions, such as lighting and interference caused by background objects, against a checklist of criteria held either locally or in the cloud. It won't necessarily need to do that in real time either, since the system can work from a series of snapshots.
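The patent's actual criteria aren't public, but the checklist idea can be sketched in a few lines. Everything here, the field names, the thresholds, and the rules themselves, is an illustrative assumption, not taken from the filing:

```python
# Hypothetical sketch of the environment check: each snapshot's measured
# conditions are compared against a checklist of criteria, and scanning
# only proceeds if every rule passes. All values are assumptions.

def environment_ok(snapshot, checklist):
    """Return True if every criterion in the checklist passes for this snapshot."""
    return all(rule(snapshot) for rule in checklist.values())

checklist = {
    # Enough ambient light for the 3D sensor to work reliably
    "lighting": lambda s: s["lux"] >= 100,
    # Background objects too close to the subject can interfere with the scan
    "interference": lambda s: s["background_depth_m"] > 1.5,
}

snapshot = {"lux": 240, "background_depth_m": 2.1}
print(environment_ok(snapshot, checklist))  # True: both rules pass
```

Because the check runs over stored snapshots rather than a live feed, a real implementation could evaluate the same checklist locally or hand the snapshots off to a cloud service, exactly as the filing describes.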
Sony's second filing builds on the previous premise by implementing a set of instructions in the memory of the camera-enabled device to determine how processing will occur based on the orientation of the device. That will assist the system defined in the third patent as it moves through the process of actually generating the 3D image data associated with the person or object in question.
The system in Sony's third invention also won't rely solely on the 3D sensor, as a similar invention might. Instead, it centers the generation of associated metrics on two or more 2D snapshots captured along with the appropriate distance and shape data. That should make capture a much less time-intensive process, since it won't need to process raw 3D images from multiple angles into a complete 3D model.
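The filing doesn't say how distance data is recovered from the 2D snapshots, but the standard two-view (stereo) relationship illustrates why a pair of ordinary photos can carry depth information at all. The focal length, baseline, and pixel values below are illustrative assumptions:

```python
# Standard stereo depth estimate, shown only to illustrate how distance
# data can fall out of two 2D snapshots taken a known distance apart.
# This is the textbook formula, not necessarily Sony's patented method.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point seen in two snapshots taken baseline_m apart."""
    if disparity_px <= 0:
        raise ValueError("point must shift between the two snapshots")
    return focal_px * baseline_m / disparity_px

# A point imaged at x=620 px in one snapshot and x=600 px in the other
# (20 px disparity), with a 1000 px focal length and a 5 cm baseline:
print(depth_from_disparity(1000, 0.05, 620 - 600))  # 2.5 (metres)
```

The appeal the article describes follows directly: per-point depths like this are far cheaper to compute than fusing full 3D captures from multiple angles into one model.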
Last but not least, Sony has patented a solution for determining a more exact scale and dimensions of a scanned object utilizing a 'plurality' of images and the ratios between corresponding points in those images.
The patents definitely seem linked but how might they be used?
Speculatively, and as noted above, each of the new patents seems to be intrinsically joinable in any number of combinations to serve a plethora of different use cases. For instance, the second of Sony's patents could very well be used in conjunction with the environmental checking system and the capture system itself. In that configuration, a solution could be implemented both to ensure that the user is in a good environment for 3D modeling and to guide them through the process of capturing a 3D model.
Using an example that's currently more prevalent in some Android smartphones, the method for determining a scanned object's actual size could be added to the mix for AR purposes. Linked with facial recognition as well as distance measurements, a scanned object could be superimposed on a user's face or body. For example, a scanned hat could be placed on the user's head for a quick augmented snapshot to be shared. Conversely, the same method could take a measurement of a user's proportions and superimpose more traditional AR elements much more accurately.
Sony pushes to keep its word
Sony has previously referred to 3D camera sensors as this generation's equivalent — in terms of innovation — to the addition of a camera on smartphones. So the enormous amount of thought and effort it is pouring into associated patents, although they may never be used in their current form, shouldn't come as much of a surprise.
In fact, the company's belief in that sentiment is so strong that it recently increased its production of related LIDAR-based time-of-flight sensors despite a downturn in overall demand. Whether or not the new patents are ultimately used, and exactly how, they reinforce the company's predictions with feasible solutions. Those could very well make their way into any number of smartphones if ToF sensors and 3D cameras begin to truly catch on in that industry.