Apple enhances its 3D mapping technology with “multi-sensor” long-range depth mapping.

Apple acquired Israeli company PrimeSense in 2013 and put its 3D mapping technology to work in Face ID, beginning with iPhone X. A patent granted in 2015, originally filed by PrimeSense, illustrated a projection system for a much larger canvas than Face ID. PrimeSense technology was also the original basis of the Xbox Kinect motion controller. Over the years, Apple has refined this 3D depth mapping technology, with one of the patents covered in our 2021 report titled “Apple Invents Enhanced Depth Mapping using Visual Inertial Odometry for iPhone and/or a Tabletop Device.”

The United States Patent and Trademark Office today released a patent application from Apple titled “Multi-sensor depth mapping,” which once again advances its 3D mapping technology. Apple states that “The present invention relates generally to systems and methods for depth mapping, and in particular to improve the accuracy of depth maps.” More specifically, Apple notes that the invention aims to provide depth mapping beyond the short distances of Face ID, which uses a disparity-based depth mapping system.

According to Apple, ToF-based depth mapping systems are more accurate at greater distances and are less vulnerable to mechanical and thermal effects than disparity-based depth measurements. Short-range ToF measurements, however, can be greatly affected by small deviations between the time of photon transmission and the timing of the signals generated in response to the photons’ arrival. Also, while disparity-based depth mapping systems may use standard image sensors with small pitch and high transverse resolution, ToF systems typically require special radiation sources and range sensors, with inherently lower resolution.
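As background to why disparity-based accuracy falls off with range: in standard stereo triangulation (general optics background, not spelled out in this patent), depth is z = f·B/d, so a fixed sub-pixel disparity error produces a depth error that grows roughly quadratically with distance. A minimal sketch, with purely illustrative focal-length and baseline values:

```python
# Standard stereo triangulation: z = f * B / d (f in pixels, B in meters).
# A fixed disparity error dd maps to a depth error dz ~ z**2 * dd / (f * B),
# so accuracy degrades quadratically with range. All values below are
# illustrative assumptions, not figures from Apple's patent filing.
def depth_error(z, f_px=600.0, baseline_m=0.05, disparity_err_px=0.25):
    """Approximate depth error (m) at range z (m) for a given disparity error."""
    return z**2 * disparity_err_px / (f_px * baseline_m)

for z in (0.5, 1.0, 2.0, 4.0):
    print(f"range {z:.1f} m -> depth error ~{depth_error(z) * 100:.1f} cm")
```

Doubling the range quadruples the error, which is why a separate long-range correction mechanism becomes attractive.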

Embodiments of the present invention provide depth mapping systems that combine the high transverse resolution of disparity-based depth sensors with the high depth accuracy of ToF-based depth sensors.

These systems use the accurate depth measurements made by a ToF sensor to generate a disparity correction function, which is then applied to improve the accuracy of the disparity-based depth measurements made by a stereoscopic or patterned-light depth sensor. This disparity correction is particularly significant at longer measuring distances, as well as for compensating for calibration loss due to factors such as mechanical shock and environmental conditions. In some embodiments, the disparity-corrected depth measurements made by the disparity-based depth sensor are also used to calculate a range correction function that can improve the accuracy of the longer-range depth measurements provided by the ToF sensor.
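The correction step described above could work along these lines — a hypothetical sketch only (the function names and the low-order polynomial model are my assumptions, not details from the patent): sparse but accurate ToF depths are compared against disparity-derived depths at the same locations, a correction function is fitted to the pairs, and that function is then applied across the dense disparity-based depth map.

```python
import numpy as np

# Hypothetical sketch of the disparity-correction idea: fit a low-order
# correction from sparse ToF samples, then apply it to a dense disparity
# map. Model choice and values are illustrative, not from the patent.
def fit_disparity_correction(disparity_depths, tof_depths, degree=2):
    """Return a function mapping raw disparity-based depth -> ToF-calibrated depth."""
    coeffs = np.polyfit(disparity_depths, tof_depths, degree)
    return np.poly1d(coeffs)

# Simulate calibration drift: disparity depths read 5% short plus an offset.
true_z = np.linspace(0.5, 5.0, 20)    # sparse, accurate ToF measurements (m)
raw_z = 0.95 * true_z - 0.02          # drifted disparity-based depths (m)

correct = fit_disparity_correction(raw_z, true_z)
dense_samples = np.array([1.0, 2.5, 4.0])   # drifted values from the dense map
print(correct(dense_samples))               # pulled back toward true depths
```

The appeal of this division of labor is that the ToF sensor only needs to sample a sparse grid of points; the dense, high-resolution map still comes from the image sensor.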

In the disclosed embodiments, an illumination assembly directs modulated optical radiation toward a target scene. For ToF sensing purposes, the radiation is time-modulated, for example as short pulses for direct ToF sensing or as carrier-wave modulation for indirect ToF sensing. The radiation can also be spatially modulated to project a structured light pattern for disparity-based sensing. Based on the time modulation, a range sensor detects the respective times of flight of the photons reflected from an array of locations across the target scene. For disparity-based detection, a camera captures a two-dimensional image of the target scene.
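For the indirect (carrier-wave) ToF mode mentioned above, the standard phase-based range relation — general indirect-ToF background, not a detail disclosed in the patent — recovers distance from the phase shift of the modulated carrier:

```python
import math

# Indirect ToF background (not patent-specific): distance is recovered from
# the phase shift of an amplitude-modulated carrier. The measurement is
# unambiguous only within half the modulation wavelength, c / (2 * f_mod).
C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_rad, f_mod_hz):
    """Distance (m) from measured phase shift; the round trip covers 2*d,
    so d = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# Example: a pi/2 phase shift at 100 MHz modulation.
d = itof_distance(math.pi / 2, 100e6)
print(f"{d:.3f} m")  # ~0.375 m
```

Direct ToF skips the phase recovery and instead timestamps short pulses, which is where the timing-deviation sensitivity Apple mentions for short-range measurements comes in.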

The Apple patent FIG. 1 below is a schematic graphical illustration of a depth mapping system; FIG. 2 is a schematic side view of the depth mapping system of FIG. 1.


For more details, see Apple Patent Application US 20220364849 A1.

Apple’s patent doesn’t identify which product line its advanced 3D depth mapping is intended for. While the invention could apply to a future iPhone, the device illustrated in FIG. 1 is not an iPhone. Could Apple be hinting at a future high-end Apple TV box that supports Apple Fitness+ and interactive gaming, or something entirely new?

Inventors

Shay Yosub: Technical Lead, Depth Hardware

Assaf Avraham: System Manager (from PrimeSense)

Joe Nawasra Ph.D: Technical Lead, Camera Hardware Design

Jonathan Pokrass: Head of Algorithms

Moshe Laifenfeld: Manager Depth Detection Algorithm

Niv Gilboa: Electro-optical hardware manager

Tal Kaitz: Algorithms foreman

Ronen Akermann: Systems Engineering Foreman

Naveh Levanon: Image Processing Engineer

