US20230260143A1 - Using energy model to enhance depth estimation with brightness image - Google Patents

Using energy model to enhance depth estimation with brightness image

Info

Publication number
US20230260143A1
Authority
US
United States
Prior art keywords
depth
pixel
image
brightness
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/813,300
Inventor
Amina Achaibou
Filiberto Pla Bañón
Javier CALPE MARAVILLA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Analog Devices International ULC
Original Assignee
Analog Devices International ULC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Analog Devices International ULC filed Critical Analog Devices International ULC
Priority to US17/813,300
Assigned to Analog Devices International Unlimited Company. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ACHAIBOU, AMINA; BANON, FILIBERTO PLA; MARAVILLA, JAVIER CALPE
Priority to PCT/EP2023/053961 (published as WO2023156561A1)
Publication of US20230260143A1
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/32Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/36Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/8943D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808Evaluating distance, position or velocity data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491Details of non-pulse systems
    • G01S7/4912Receivers
    • G01S7/4915Time delay measurement, e.g. operational details for pixel components; Phase measurement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/001Image restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/78Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/703SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N25/705Pixels for depth measurement, e.g. RGBZ
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation

Definitions

  • the present disclosure relates generally to depth estimation and, more specifically, to using an energy model to enhance depth estimation with brightness images.
  • One technique to measure depth is to directly or indirectly calculate the time it takes for a signal to travel from a signal source on a sensor to a reflective surface and back to the sensor.
  • the time travelled is proportional to the distance from the sensor to the reflective surface. This travel time is commonly referred to as time of flight (ToF).
  • Various types of signals can be used with ToF sensors, the most common being sound and light.
  • FIG. 1 illustrates a depth estimation system according to some embodiments of the present disclosure
  • FIG. 2 A illustrates a continuous wave of a projected signal 210 according to some embodiments of the present disclosure
  • FIG. 2 B illustrates a continuous wave of a captured signal 220 according to some embodiments of the present disclosure
  • FIG. 3 illustrates a cycle of continuous waves of modulated light according to some embodiments of the present disclosure
  • FIG. 4 A is a block diagram illustrating a controller according to some embodiments of the present disclosure.
  • FIG. 4 B is a block diagram illustrating a depth enhancement module according to some embodiments of the present disclosure.
  • FIG. 5 A illustrates an example depth image according to some embodiments of the present disclosure
  • FIG. 5 B illustrates an example brightness image corresponding to the depth image in FIG. 5 A according to some embodiments of the present disclosure
  • FIG. 5 C illustrates an example depth enhanced image generated from the depth image in FIG. 5 A and the brightness image in FIG. 5 B according to some embodiments of the present disclosure
  • FIG. 6 illustrates an example system incorporating a depth estimation system according to some embodiments of the present disclosure
  • FIG. 7 illustrates a mobile device incorporating a depth estimation system according to some embodiments of the present disclosure
  • FIG. 8 illustrates an entertainment system incorporating a depth estimation system according to some embodiments of the present disclosure
  • FIG. 9 illustrates an example robot incorporating a depth estimation system according to some embodiments of the present disclosure.
  • FIG. 10 is a flowchart showing a method of using an energy model to enhance depth estimation with a brightness image, according to some embodiments of the present disclosure.
  • ToF camera systems are range imaging systems.
  • a ToF camera system typically includes a light source that projects light and an imaging sensor that receives reflected light.
  • the ToF camera system can estimate the distance between the imaging sensor and an object by measuring the round trip of the light.
  • a continuous-wave ToF camera system can project multiple periods of a continuous light wave and determine the distance based on the phase difference between the projected light and the received reflected light.
  • a depth image can be generated based on the phase difference.
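For reference, the relationship between the measured phase difference and depth in continuous-wave ToF follows the standard textbook relation below (stated here for illustration; the symbols f, φ, τ, and c match their use later in this description):

```latex
% Standard CW-ToF phase-to-depth relation (illustrative)
\varphi = 2\pi f \tau, \qquad \tau = \frac{2d}{c}
\quad\Longrightarrow\quad
d = \frac{c\,\varphi}{4\pi f}
```

Here f is the modulation frequency, φ the measured phase shift, τ the round-trip travel time, d the depth, and c the speed of light.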
  • ToF depth maps are often captured with low resolution, various types of noise, or missing values.
  • ToF camera systems often fail to accurately estimate the depth at boundaries, such as edges of objects, reflectivity boundaries (e.g., a boundary between two areas that have different reflectivity properties), and so on.
  • the inaccurate depth estimation may limit the usage of ToF camera systems in various applications. Therefore, improved technology for depth estimation is needed.
  • Embodiments of the present disclosure relate to a depth estimation system that can use an energy model to enhance ToF depth estimation using brightness images.
  • a brightness image may be an active brightness image, such as an infrared (IR) image, or an RGB (red, green, and blue) image.
  • the depth estimation system may simultaneously acquire the brightness image and the ToF depth estimation.
  • the brightness image and the ToF depth estimation may be based on the same light source (e.g., IR) or different light sources (e.g., visible light for the brightness image versus IR for the ToF depth estimation).
  • Brightness images may have better detection of boundaries and can be used to enhance depth images generated by ToF camera systems.
  • An example of the depth estimation system includes an illuminator assembly, a camera assembly, and a controller.
  • Light is projected onto an object.
  • the illuminator assembly may project light to illuminate a local area, such as an area that includes an object.
  • the light may be modulated light, such as modulated IR.
  • the illuminator assembly may project pulsed light.
  • the illuminator assembly may project one or more continuous waves, such as continuous waves of different frequencies.
  • the object can reflect at least a portion of the projected light.
  • the camera assembly captures at least a portion of the reflected light and can convert captured photons to charges and accumulate the charges.
  • the depth estimation system may generate a depth image and a brightness image from the charges accumulated in the camera assembly.
  • the depth image is based on a phase shift between the captured light and the projected light, and the brightness image based on brightness of the captured light.
  • the depth image may include a plurality of depth pixels, each of which may correspond to a pixel in the brightness image.
  • the depth estimation system may generate an energy model that represents a fusion of the depth image and the brightness image.
  • the depth estimation system further performs an optimization process to minimize the fusion energy.
  • the fusion energy may be an aggregation of a spatial error energy and a conditional error energy.
  • the spatial error energy can be used to eliminate local errors in depth (e.g., local variations produced by common sensor noise), and the conditional error energy can be used to enhance depth discontinuities and suppress non-local noise.
  • the depth estimation system may perform the optimization process on a pixel level. For instance, the depth estimation system may determine a fusion energy for a pixel and minimize the fusion energy to determine a new depth value of the pixel.
  • the depth estimation system may run separate optimization processes for all or a subset of pixels in the depth image. The depth estimation system can generate an enhanced depth image with the new depth values.
  • the depth estimation system can take advantage of brightness images showing cleaner boundaries to enhance depth estimation of boundaries and can also smooth flat planes. Enhanced depth images generated by the depth estimation system can show the boundaries better than regular ToF depth images. With the more accurate depth estimation, the enhanced depth images can be used in various applications.
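As a rough illustration of the overall idea of brightness-guided depth enhancement, the toy routine below smooths a depth map while damping the smoothing across strong brightness edges. It is a minimal sketch using an assumed exponential edge weight and a simple data-fidelity term; it is not the patented energy model or optimization procedure.

```python
import numpy as np

def enhance_depth(depth, brightness, iters=50, lam=0.1, lr=0.5):
    """Toy brightness-guided, edge-aware smoothing of a depth map (illustrative only)."""
    d = depth.astype(np.float64).copy()
    a = brightness.astype(np.float64)
    # Edge-stopping weight from brightness gradients: small across strong edges (assumed form).
    gy, gx = np.gradient(a)
    w = np.exp(-np.hypot(gx, gy))
    for _ in range(iters):
        # Average of the 4 neighbours (borders handled via roll, for brevity).
        up, down = np.roll(d, -1, axis=0), np.roll(d, 1, axis=0)
        left, right = np.roll(d, -1, axis=1), np.roll(d, 1, axis=1)
        smooth = (up + down + left + right) / 4.0
        # Move toward the local average where there is no edge; stay near the measurement.
        d += lr * w * (smooth - d) - lr * lam * (d - depth)
    return d
```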
  • FIG. 1 illustrates a depth estimation system 100 according to some embodiments of the present disclosure.
  • the depth estimation system 100 may use ToF techniques to generate depth images.
  • the depth estimation system 100 includes an illuminator assembly 110 , a camera assembly 120 , and a controller 130 .
  • the illuminator assembly 110 includes an emitter 160 and a diffuser 165 .
  • the camera assembly 120 includes a lens 190 and an image sensor 195 .
  • different and/or additional components may be included in the depth estimation system 100 .
  • functionality attributed to one component of the depth estimation system 100 may be accomplished by a different component included in depth estimation system 100 or a different system than those illustrated.
  • the illuminator assembly 110 may include no diffuser or more than one diffuser.
  • the camera assembly 120 may include no lens or more than one lens.
  • the illuminator assembly 110 projects light 170 to a local area that includes an object 140 .
  • the emitter 160 is a light source that emits light (“emitted light”).
  • the emitter 160 may include a laser, such as an IR or near-IR (NIR) laser, an edge emitting laser, vertical-cavity surface-emitting laser (VCSEL), and so on.
  • the emitter 160 may include one or more light-emitting diodes (LEDs).
  • the emitter 160 can emit light in the visible band (i.e., ~380 nm to 750 nm), in the NIR band (i.e., ~750 nm to 1 mm), in the ultraviolet band (i.e., 10 nm to 380 nm), in the shortwave IR (SWIR) band (e.g., ~900 nm to 2200 nm), some other portion of the electromagnetic spectrum, or some combination thereof.
  • the illuminator assembly 110 may include multiple emitters 160 , each of which may emit a different wavelength.
  • the illuminator assembly 110 may include a first emitter that emits IR and a second emitter that emits visible light.
  • the diffuser 165 spreads out or scatters the emitted light before the light 170 is projected into the local area.
  • the diffuser 165 may also control brightness of the emitted light.
  • the diffuser 165 may be translucent or semi-transparent.
  • the illuminator assembly 110 may include more, fewer, or different components.
  • the illuminator assembly 110 may include one or more additional diffusers to direct light from the emitter 160 to one or more additional objects in the local area.
  • the illuminator assembly 110 may project the light 170 as modulated light, e.g., according to a periodic modulation waveform.
  • a periodic modulation waveform may be a sinusoidally modulated waveform.
  • the frequency of the periodic modulation waveform is the frequency of the modulated light.
  • the illuminator assembly 110 may project one or more continuous waves. For an individual continuous wave, the illuminator assembly 110 may project multiple periods. Different continuous waves may have different wavelengths and frequencies. For instance, the illuminator assembly 110 can project continuous waves having modulation frequencies in a range from 50 MHz to 200 MHz. In an embodiment, the illuminator assembly 110 includes multiple (i.e., at least two) light projectors. The light projectors may project continuous waves having different frequencies. The light projectors may alternate and project the continuous waves at different times. For example, a first light projector projects a first continuous wave having a first frequency during a first period of time.
  • a second light projector projects a second continuous wave having a second frequency during a second period of time.
  • a third light projector projects a third continuous wave having a third frequency during a third period of time.
  • the three continuous waves may constitute a cycle. This cycle can repeat.
  • the illuminator assembly 110 may include one light projector that projects all the three continuous waves. In other embodiments, the illuminator assembly 110 may project a different number of continuous waves, such as two or more than three.
  • One cycle may constitute one frame. The total time for a cycle may be 10-20 ms.
  • the illuminator assembly 110 can project light through multiple cycles for obtaining multiple frames. There may be a time gap between cycles. More information regarding modulated light having multiple frequencies is provided below in conjunction with FIG. 2 .
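For the modulation frequencies quoted above, the single-frequency unambiguous range c/(2f) can be worked out directly (simple arithmetic, shown for illustration); using multiple frequencies together with phase unwrapping extends the usable range beyond the shortest of these:

```latex
% Unambiguous range d_max = c / (2 f)
f = 50\ \mathrm{MHz}:\quad d_{\max} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \cdot 50\times 10^{6}\ \mathrm{Hz}} = 3\ \mathrm{m}
\qquad
f = 200\ \mathrm{MHz}:\quad d_{\max} = 0.75\ \mathrm{m}
```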
  • the object 140 in FIG. 1 has a shape of a cube. In other embodiments, the object 140 may have other shapes or structures. Even though not shown in FIG. 1 , the local area may include other objects that can be illuminated by the light.
  • the object 140 reflects the light as reflected light 180, which can be captured by the camera assembly 120.
  • the camera assembly 120 captures image data of at least a portion of the local area illuminated with the light 170 . For instance, the camera assembly 120 captures the reflected light 180 and generates image data based on the reflected light 180 .
  • the reflected light 180 may be IR.
  • the camera assembly 120 may also capture visible light reflected by the object 140 . The visible light may be projected by the illuminator assembly 110 , ambient light, or a combination of both.
  • although the camera assembly 120 is separated from the illuminator assembly 110 in FIG. 1 , in some embodiments, the camera assembly 120 is co-located with the illuminator assembly 110 (e.g., may be part of the same device).
  • the lens 190 receives the reflected light 180 and directs the reflected light 180 to the image sensor 195 .
  • the image sensor 195 includes a plurality of pixels 197 . Even though the pixels 197 shown in FIG. 1 are arranged in a column, pixels 197 of the image sensor 195 may also be arranged in multiple columns.
  • a pixel 197 includes a photodiode that is sensitive to light and converts collected photons to charges, e.g., photoelectrons. Each of the photodiodes has one or more storage regions that store the charges.
  • the image sensor 195 may be both a ToF sensor and a brightness sensor.
  • a pixel 197 may be a depth-sensing pixel, a brightness-sensing pixel, or both.
  • a depth-sensing pixel is configured to present a depth output signal that is dependent on the distance from the image sensor 195 to the locus of the object 140 imaged onto the depth-sensing pixel. Such distance is a ‘depth’ of the locus of the object 140 .
  • Each depth-sensing pixel may independently determine a distance to the object 140 viewed by that pixel.
  • the depth output signals of the depth-sensing pixels in the image sensor 195 can be used to generate a depth image of the local area.
  • a brightness-sensing pixel is configured to present a brightness output signal that is dependent on brightness of light reflected from the locus of the object 140 imaged onto the brightness-sensing pixel.
  • the brightness output signals of the brightness-sensing pixels in the image sensor 195 can be used to generate a brightness image of the local area.
  • the brightness image may be an active brightness image.
  • An example of the brightness image is an IR image.
  • each pixel 197 of the image sensor 195 may generate both a depth output signal and a brightness output signal from the reflected light that the pixel 197 captures.
  • the image sensor 195 includes two sets of pixels 197 : one set is for sensing depth and the other set is for sensing brightness.
  • the output signals of the image sensor 195 may be analog signals, such as electrical charges.
  • the image sensor 195 can be synchronized with the projection of the illuminator assembly 110 .
  • the image sensor 195 may have one or more exposure intervals, during which the image sensor 195 takes exposures of the portion of the local area and charges are accumulated in the image sensor 195 . Outside the exposure interval, the image sensor 195 does not take exposures.
  • an exposure interval of the image sensor 195 may be synchronized with a continuous wave or cycle projected by the illuminator assembly 110 . For instance, the exposure interval starts before or when the continuous wave or cycle starts and ends when or after the continuous wave or cycle ends.
  • the image sensor 195 may have multiple exposure intervals for a single continuous wave.
  • the image sensor 195 may take multiple exposures during a continuous wave, and the multiple exposures may correspond to different phase offsets.
  • the time gap between exposure intervals may be 1-2 milliseconds (ms).
  • the exposure intervals may have a constant duration, e.g., approximately 100 microseconds (μs). In alternative embodiments, the exposure intervals may have different durations.
  • the image sensor 195 may use global shutter scanning.
  • the image sensor 195 includes a global shutter that may open and scan during each exposure interval and close when the exposure interval ends.
  • the image sensor 195 may include a tunable filter.
  • the tunable filter blocks light from arriving at the detector and may be mounted anywhere in the optical path of the reflected light 180 .
  • the tunable filter is attached on top of the image sensor 195 or at the front of the camera assembly 120 .
  • the tunable filter can be switched between on (active) and off (inactive).
  • the tunable filter can be inactive during an exposure interval and active when the exposure interval ends. When the tunable filter is inactive, light can pass the tunable filter and reach the image sensor 195 . When the tunable filter is active, light is blocked from the image sensor 195 .
  • in some embodiments, when the tunable filter is active, it may let light of a certain wavelength (or a certain band of wavelengths) pass but block light of other wavelengths. For instance, the tunable filter may let light of the wavelengths projected by the illuminator assembly 110 (e.g., the light 170 ) pass, but block light of other wavelengths, which can, for example, reduce noise in the image data captured by the image sensor 195 . In an example where the light 170 is IR, the tunable filter may block visible light. In other embodiments, when the tunable filter is active, it can block light of all wavelengths to avoid charge accumulation in the image sensor 195 . In embodiments where the tunable filter blocks all light, dark noise calibration of the image sensor 195 can be conducted.
  • the camera assembly 120 may read out stored photoelectrons from the image sensor 195 to obtain image data, e.g., from storage regions of each pixel 197 of the image sensor 195 . During the readout, the camera assembly 120 can convert the photoelectrons into digital signals (i.e., analog-to-digital conversion). In embodiments where the illuminator assembly 110 includes multiple light projectors, photoelectrons corresponding to pulses of modulated light projected by different light projectors may be stored in separate storage regions of each photodiode. The camera assembly 120 may read out the separate storage regions to obtain the image data. In some embodiments, the camera assembly 120 may read out all the image data stored in the image sensor 195 .
  • the camera assembly 120 may read out some of the image data stored in the image sensor 195 .
  • the camera assembly 120 may execute multiple readout intervals for the continuous wave.
  • Each readout interval may correspond to a different phase offset.
  • the time gap between readout intervals may be 1-2 milliseconds (ms).
  • the readout intervals may have a constant duration, e.g., approximately 100 microseconds (μs). In alternative embodiments, the readout intervals may have different durations.
  • the controller 130 controls the illuminator assembly 110 and the camera assembly 120 .
  • the controller 130 provides illumination instructions to the illuminator assembly 110 , and the illuminator assembly 110 projects the light 170 in accordance with the illumination instructions.
  • the controller 130 can also provide imaging instructions to the camera assembly 120 , and the camera assembly 120 takes exposures and reads out image data in accordance with the imaging instructions.
  • the controller 130 also determines depth information using image data from the camera assembly 120 . For instance, the controller 130 can generate depth images from the image data.
  • a depth image includes a plurality of depth pixels. Each depth pixel has a value corresponding to an estimated depth, e.g., an estimated distance from a locus of the object 140 to the image sensor 195 .
  • a single depth image may also be referred to as a depth frame or a depth map.
  • the controller 130 may determine depth information based on the phase shift between the light 170 projected by the illuminator assembly 110 and the reflected light 180 .
  • the controller 130 may perform phase unwrapping to determine depth information. In some embodiments (e.g., embodiments where the illuminator assembly 110 projects multiple cycles of modulated light), the controller 130 may generate multiple depth frames.
  • the controller 130 can also generate a brightness image that corresponds to a depth image.
  • the image data for the brightness image and the image data for the depth image may be generated by the camera assembly 120 from the same light, such as the reflected light 180 .
  • the brightness image and the depth image are generated simultaneously.
  • the camera assembly 120 may simultaneously read out the image data for the brightness image and the image data for the depth image.
  • the image data for the brightness image and the image data for the depth image are the same image data.
  • the brightness image may include a plurality of brightness pixels. Each brightness pixel has a value corresponding to a light intensity, e.g., an IR intensity.
  • a brightness pixel in the brightness image may correspond to a depth pixel in the depth image.
  • the brightness pixel and the depth pixel may be generated from light reflected from the same locus of the object 140 .
  • the brightness pixel and the depth pixel may be generated based on signals from the same pixel 197 of the image sensor 195 , and the pixel 197 captures the light reflected from the locus of the object 140 .
  • the controller 130 can further enhance depth estimation in a depth image by fusing the depth image with a brightness image based on an energy model.
  • the brightness image may show one or more cleaner boundaries of the object 140 than the depth image. For instance, one or more depth pixels that represent at least a portion of a boundary of the object 140 may be invalid.
  • the boundary may be an edge of the object 140 .
  • the boundary is a boundary between two areas of the object 140 that have different reflectivity properties, such as a boundary between a fluorescent strip, which has relatively high reflectivity, and a low reflectivity surface.
  • the controller 130 may take advantage of the more accurate information of the boundaries of the object 140 in the brightness image to generate an enhanced depth image, which includes better depth estimation than the original depth image. Certain aspects of the controller 130 are described below in conjunction with FIGS. 4 A and 4 B .
  • FIG. 2 A illustrates a continuous wave of a projected signal 210 according to some embodiments of the present disclosure.
  • the projected signal 210 is a modulated signal, e.g., a modulated light projected by the illuminator assembly 110 in FIG. 1 .
  • the projected signal 210 has a sinusoidally modulated waveform.
  • the sinusoidally modulated waveform may be represented by the following equation: S(t) = a_s·sin(2πf·t) + B_s, where t denotes time, S denotes the optical power of the projected signal, f is the frequency of the modulated signal (i.e., the modulation frequency), π is the mathematical constant pi, a_s denotes the amplitude of the modulated signal, and B_s denotes an offset of the modulated signal that may include an attenuated original offset and/or an offset due to the presence of ambient light (e.g., sunlight or light from artificial illuminants).
  • FIG. 2 B illustrates a continuous wave of a captured signal 220 according to some embodiments of the present disclosure.
  • the captured signal 220 is a captured portion of modulated light reflected by an object illuminated by the projected signal 210 .
  • the captured signal 220 may be captured by the image sensor 195 and may be at least a portion of the reflected light 180 in FIG. 1 .
  • the captured signal 220 can be represented by the following equation: r(t) = α·(a_s·sin(2πf·t − φ) + B_s), where r denotes the optical power of the captured signal 220 , α denotes an attenuation factor of the captured signal 220 , and φ denotes the phase shift between the waveform of the captured signal 220 and the waveform of the projected signal 210 . The phase shift satisfies φ = 2πf·τ, where τ = 2d/c is the time delay between the captured signal 220 and the projected signal 210 , d denotes the distance from the image sensor 195 to the object 140 (i.e., the depth of the object 140 ), and c is the speed of light.
  • FIG. 3 illustrates a cycle 310 of continuous waves 315 A-C, 325 A-C, and 335 A-C of modulated light according to some embodiments of the present disclosure.
  • the continuous waves 315 A-C, 325 A-C, and 335 A-C are sinusoidal waves in FIG. 3 .
  • the continuous waves 315 A-C, 325 A-C, and 335 A-C may have different waveforms.
  • the continuous waves 315 A-C have a frequency 317
  • the continuous waves 325 A-C have a frequency 327
  • the continuous waves 335 A-C have a frequency 337 .
  • the three frequencies 317 , 327 , and 337 are different from each other.
  • the frequency 317 is smaller than the frequency 327
  • the frequency 327 is smaller than the frequency 337 .
  • the three frequencies 317 , 327 , and 337 may be in a range from 50 to 200 MHz or higher frequencies.
  • the cycle 310 may be a cycle of projecting the modulated light by the illuminator assembly 110 in FIG. 1 .
  • the cycle 310 may be a cycle of exposure by the camera assembly 120 .
  • the camera assembly 120 may take exposures during the periods of time of the continuous waves 315 A-C, 325 A-C, and 335 A-C and not take exposures beyond these periods of time, even though the illuminator assembly 110 may project modulated light beyond these periods of time.
  • the cycle 310 may be a cycle of readout by the camera assembly 120 .
  • the camera assembly 120 may read out charges accumulated in the image sensor 195 during the periods of time of the continuous waves 315 A-C, 325 A-C, and 335 A-C and not read out charges beyond these periods of time, even though the illuminator assembly 110 may project modulated light beyond these periods of time or the image sensor 195 may take exposures beyond these periods of time.
  • although the cycle 310 in FIG. 3 includes three continuous waves for each of the three frequencies, a cycle in other embodiments may include a different number of frequencies or a different number of continuous waves for each frequency.
  • the continuous waves 315 A-C have different phase offsets.
  • the continuous wave 315 A has a phase offset of 0°
  • the continuous wave 315 B has a phase offset of 120°
  • the continuous wave 315 C has a phase offset of 240°.
  • the continuous waves 325 A-C may have different phase offsets from each other: the continuous wave 325 A may have a phase offset of 0°, the continuous wave 325 B may have a phase offset of 120°, and the continuous wave 325 C may have a phase offset of 240°. Similarly, the continuous waves 335 A-C may start at different phase offsets from each other: the continuous wave 335 A may have a phase offset of 0°, the continuous wave 335 B may have a phase offset of 120°, and the continuous wave 335 C may have a phase offset of 240°.
  • the continuous waves 315 A, 325 A, and 335 A may each have a phase between 0° and 120°
  • the continuous waves 315 B, 325 B, and 335 B may each have a phase between 120° and 240°
  • the continuous waves 315 C, 325 C, and 335 C may each have a phase between 240° and 360°.
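With three correlation samples taken at phase offsets of 0°, 120°, and 240°, a standard three-bucket demodulation recovers the phase shift and a wrapped depth. The sketch below is the common textbook computation under an assumed sample model; it is not asserted to be the exact processing used in this disclosure.

```python
import math

def phase_from_samples(a0, a120, a240):
    """Phase (radians) from samples a_k = A*cos(phi - theta_k) + B at 0, 120, 240 degrees."""
    i = (2.0 * a0 - a120 - a240) / 3.0        # proportional to A*cos(phi)
    q = (a120 - a240) / math.sqrt(3.0)        # proportional to A*sin(phi)
    return math.atan2(q, i) % (2.0 * math.pi)

def wrapped_depth(phi, f_mod, c=3.0e8):
    """Wrapped depth d = c * phi / (4 * pi * f)."""
    return c * phi / (4.0 * math.pi * f_mod)

# Example: a 90-degree phase shift at 100 MHz corresponds to about 0.375 m.
print(wrapped_depth(math.pi / 2.0, 100e6))
```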
  • each continuous wave may have a time duration of around 100 μs.
  • a time gap between two adjacent continuous waves may be in a range from 1 to 2 ms.
  • the cycle 310 may not have multiple continuous waves for each frequency. Rather, the cycle 310 has a single continuous wave for an individual frequency.
  • the cycle 310 may include a first continuous wave for the frequency 317 , a second continuous wave for the frequency 327 , and a third continuous wave for the frequency 337 .
  • the first, second, and third continuous waves may all start at 0°.
  • the cycle 310 may produce image data for the controller 130 to generate a frame.
  • the cycle 310 can be repeated for the controller 130 to generate more frames.
  • the controller 130 may perform phase unwrapping to determine depth information.
  • FIG. 4 A is a block diagram illustrating the controller 130 according to some embodiments of the present disclosure.
  • the controller 130 includes a database 410 , an illuminator module 420 , a camera module 430 , a depth module 440 , a brightness module 450 , and a depth enhancement module 460 .
  • These modules are software modules implemented on one or more processors, dedicated hardware units, or some combination thereof.
  • Some embodiments of the controller 130 have different components than those described in conjunction with FIG. 4 .
  • functions of the components described in conjunction with FIG. 4 may be distributed among other components in a different manner than described in conjunction with FIG. 4 .
  • some or all of the functionality of the controller 130 may be performed by a device that incorporates a depth estimation system, such as the system 600 in FIG. 6 , the mobile device 700 in FIG. 7 , the entertainment system 800 in FIG. 8 , the robot 900 in FIG. 9 , or other devices.
  • the database 410 stores data generated and/or used by the controller 130 .
  • the database 410 is a memory, such as a ROM, DRAM, SRAM, or some combination thereof.
  • the database 410 may be part of a larger digital memory of a depth estimation system, such as the depth estimation system 100 , or a device that incorporates the depth estimation system.
  • the database 410 stores image data from the camera assembly 120 , depth images generated by the depth module 440 , brightness images generated by the brightness module 450 , enhanced depth images generated by the depth enhancement module 460 , parameters for energy models generated by the depth enhancement module 460 , parameters for optimizing energy models, and so on.
  • the database 410 may store calibration data and/or other data from other components, such as depth instructions.
  • Depth instructions include illuminator instructions generated by the illuminator module 420 and camera instructions generated by the camera module 430 .
  • the illuminator module 420 controls the illuminator assembly 110 via illuminator instructions.
  • the illuminator instructions include one or more illumination parameters that control how light is projected by the illuminator assembly 110 .
  • An illumination parameter may describe, e.g., waveform, wavelength, amplitude, frequency, phase offset, starting time of each continuous wave, ending time of each continuous wave, duration of each continuous wave, some other parameter that controls how the light is projected by the illuminator assembly 110 , or some combination thereof.
  • the illuminator module 420 may retrieve the illuminator instructions from the database 410 . Alternatively, the illuminator module 420 generates the illuminator instructions. For example, the illuminator module 420 determines the one or more illumination parameters. In embodiments where the illuminator assembly 110 includes multiple modulated light projectors, the illuminator module 420 may determine separate illumination parameters for different light projectors.
  • the camera module 430 controls the camera assembly 120 via camera instructions.
  • the camera module 430 may retrieve camera instructions from the database 410 .
  • the camera module 430 generates camera instructions based in part on the illuminator instructions generated by the illuminator module 420 .
  • the camera module 430 determines exposure parameters (such as starting time, ending time, or duration of an exposure interval, etc.) of the camera assembly 120 , e.g., based on one or more illumination parameters (such as duration of a continuous wave, etc.) specified in the illuminator instructions. For example, the camera module 430 determines that the duration of an exposure equals the duration of a continuous wave.
  • the camera module 430 determines that duration of an exposure is longer than the duration of a continuous wave to avoid failure to collect a whole continuous wave due to delay in incoming light.
  • the duration of an exposure can be 20% longer than the duration of a continuous wave.
  • the camera module 430 also determines a number of exposure intervals for each continuous wave of modulated light projected by the illumination assembly 110 .
  • the camera instruction may include readout instructions for controlling readouts of the camera assembly 120 .
  • the camera module 430 may determine readout parameters (such as starting time, ending time, or duration of a readout interval, etc.) of the camera assembly 120 . For example, the camera module 430 determines a starting time for each of one or more readout intervals, e.g., based on one or more illumination parameters (such as phase, waveform, starting time, or other parameters of a continuous wave). The camera module 430 may also determine a duration for each readout interval, the number of readout intervals for a continuous wave, time gap between adjacent readout intervals, the number of readout cycles, other readout parameters, or some combination thereof.
  • the depth module 440 is configured to generate depth images indicative of distance to the object 140 being imaged, e.g., based on digital signals indicative of charge accumulated on the image sensor 195 .
  • the depth module 440 may analyze the digital signals to determine a phase shift exhibited by the light (e.g., the phase shift φ described above in conjunction with FIG. 2 ) to determine a ToF (e.g., the time delay τ described above in conjunction with FIG. 2 ) of the light and further to determine a depth value (e.g., the distance d described above in conjunction with FIG. 2 ) of the object 140 .
  • the depth module 440 can generate a depth image through phase unwrapping. Taking the cycle 310 in FIG. 3 for example, the depth module 440 may determine wrapped distances, each of which corresponds to a respective phase. The depth module 440 can further estimate unwrapped depths for each of the wrapped distances. The depth module 440 further determines Voronoi vectors corresponding to the unwrapped depths and generate a lattice of Voronoi cells. Each unwrapped depth corresponds to a Voronoi cell of the lattice. In alternate embodiments, the depth module 440 is configured to determine depth information using a ratio of charge between the storage regions associated with each photodiode of the camera assembly 120 .
  • the brightness module 450 generates brightness images, such as active brightness images. In some embodiments, for a depth image generated by the depth module 440 , the brightness module 450 generates a corresponding brightness image.
  • the brightness module 450 may generate a brightness image in accordance with a request for the brightness image from the depth enhancement module 460 .
  • the depth image may be generated based on a phase shift between first captured light and projected light, while the brightness module 450 may generate the corresponding brightness image based on the intensity or amplitude of second captured light.
  • in some embodiments, the first captured light and the second captured light are the same light. In other embodiments, the second captured light is different from the first captured light. For instance, the first captured light may be IR, while the second captured light may be visible light.
  • the brightness module 450 may generate the corresponding brightness image based on charges accumulated in all or some of the pixels 197 in the set.
  • the corresponding brightness image includes a plurality of brightness pixels. Each brightness pixel may correspond to a depth pixel in the depth image. For instance, the values of the depth pixel and corresponding brightness pixel may be both determined based on charges accumulated in a same pixel 197 in the image sensor 195 .
  • the charges accumulated in the pixel 197 may be converted from photons of modulated light reflected by a locus of the object 140 .
  • the value of the depth pixel may be determined by the depth module 440 based on a phase shift in the waveform of the modulated light.
  • the value of the corresponding brightness pixel may be determined based on the accumulated charge in that pixel 197 or a different pixel 197 .
  • the depth enhancement module 460 enhances depth estimation made by the depth module 440 based on brightness images generated by the brightness module 450 .
  • the depth enhancement module 460 may retrieve a depth image generated by the depth module 440 and a corresponding brightness image generated by the brightness module 450 .
  • the depth enhancement module 460 may instruct the brightness module 450 to generate the corresponding brightness image.
  • the depth enhancement module 460 may enhance depth estimation of the depth module 440 by using an energy model to fuse the depth image and the corresponding brightness image.
  • the depth enhancement module 460 may generate an energy function based on the depth image and the corresponding brightness image and then generate an enhanced depth image through an optimization of the image energy function.
  • the depth enhancement process by the depth enhancement module 460 converts the depth image into the enhanced depth image.
  • the brightness image may be used as a guidance in the depth enhancement process.
  • FIG. 4 B is a block diagram illustrating the depth enhancement module 460 according to some embodiments of the present disclosure.
  • the depth enhancement module 460 in FIG. 4 B includes a disparity generator 470 , a boundary weight module 475 , a fusion energy module 480 , an optimization module 485 , and a result module 490 .
  • the depth enhancement module 460 may include fewer, more, or different components.
  • functions of the components of the depth enhancement module 460 described in conjunction with FIG. 4 may be distributed among other components in a different manner than described in conjunction with FIG. 4 .
  • the disparity generator 470 optionally generates a disparity image (also referred to as “disparity map”) from the depth image.
  • the disparity generator 470 converts depth pixels in the depth images to disparity pixels of the disparity image.
  • a depth value D (e.g., a value of a depth pixel in the depth image) is converted to a disparity value (or disparity) Dt.
  • the disparity generator 470 may determine disparity values for all or some of the depth pixels in the depth image and generate the disparity image based on the disparity values.
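The conversion formula itself is not reproduced in this text; an inverse mapping of the following assumed form is consistent with the later statement that the maximum disparity corresponds to the minimum depth (k is an illustrative scaling constant, not taken from the disclosure):

```latex
% Assumed depth-to-disparity mapping
Dt(x,y) = \frac{k}{D(x,y)}, \qquad Dt_{\max} = \frac{k}{D_{\min}}
```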
  • the boundary weight module 475 determines one or more boundary weights based on the disparity or depth image and the brightness image, e.g., based on a combination of boundaries shown in the disparity map and boundaries shown in the brightness image. Even though some of the description of the boundary weight module 475 refers to using the disparity image to determine boundary weights, in some embodiments, the boundary weight module 475 may determine boundary weights based on the depth image instead.
  • the boundary weight module 475 determines boundary weights by applying gradients to the disparity or depth image and the brightness image.
  • the gradient of an image (e.g., the depth image, disparity image, or brightness image) indicates a directional change in the values of the pixels of that image.
  • a boundary weight can avoid boundary blending and may be used to preserve or enhance a boundary in the image.
  • the boundary weight module 475 may determine a fusion gradient for a pixel of the enhanced depth image.
  • the pixel may correspond to a depth pixel in the depth image, a disparity pixel in the disparity image, and/or a brightness pixel in the brightness image.
  • the pixel may have a depth value (i.e., the depth value of the corresponding depth pixel in the depth image), a disparity value (i.e., the disparity value of the corresponding disparity pixel in the disparity image), a brightness value (i.e., the brightness value of the corresponding brightness pixel in the brightness image), and an enhanced depth value (i.e., the enhanced depth value of the pixel in the enhanced depth image).
  • the pixel may be represented by an index i or a pair of (x,y) coordinates, which indicate a position of the pixel in the enhanced depth image, the depth image, the brightness image, or the disparity image.
  • the boundary weight module 475 may determine the magnitude of a depth gradient (“depth gradient magnitude,” E D ) of the pixel (x,y) in the disparity or depth image.
  • the depth gradient may indicate a change (e.g., a directional change) in the depth values in the depth image.
  • the depth gradient may be a two-dimensional vector that has a magnitude (i.e., E D ) and a direction along the direction of depth value increases.
  • the boundary weight module 475 applies a gradient operator to the disparity or depth image. The gradient operator may return the depth gradient magnitude E D of the pixel (x,y).
  • the boundary weight module 475 may also determine the magnitude of a brightness gradient (“brightness gradient magnitude,” EA) of the pixel (x,y) in the brightness image, e.g., by applying the same or a different gradient operator to the brightness image.
  • the boundary weight module 475 further combines the depth gradient magnitude E D and the brightness gradient magnitude EA to generate a fusion gradient (E I ) of the pixel (x,y).
  • the boundary weight module 475 determines that the fusion gradient E I is the product of the depth gradient magnitude E D and the brightness gradient magnitude E A , i.e., E I (x,y) = E D (x,y)·E A (x,y).
  • the boundary weight module 475 may use one or more gradient operators, examples of which include the Sobel gradient operator, Prewitt gradient operator, central difference gradient operator, intermediate difference gradient operator, Roberts gradient operator, or other suitable types of gradient operators.
  • the boundary weight module 475 determines a weighted difference in values (depth values or brightness values) of the pixel (x,y) and adjacent pixels as the gradient magnitude (E D or E A ).
  • An adjacent pixel is a pixel that adjoins the pixel. The coordinates of an adjacent pixel may be (x+1,y), (x,y+1), (x+1,y+1), etc.
  • the boundary weight module 475 may determine a respective difference between the value of the pixel and each adjacent pixel and aggregate the differences for all the adjacent pixels based on weights.
  • the weight for an adjacent pixel may be determined based on the fusion gradient from the pixel to the adjacent pixel.
  • the distance may be determined based on the coordinates of the pixel and the adjacent pixel.
  • the depth gradient magnitude E D may be a weighted sum or weighted average of the differences in the depth values.
  • the brightness gradient magnitude E A may be a weighted sum or weighted average of the differences in the brightness values.
  • the boundary weight module 475 determines the gradient magnitude (E D or E A ) as a difference in values of the pixel and a single adjacent pixel: the depth gradient magnitude E D is the difference between the depth value of the pixel and the depth value of the adjacent pixel, and the brightness gradient magnitude E A is the difference between the brightness value of the pixel and the brightness value of the adjacent pixel.
  • the gradient operator may be denoted as ∇ = (g x , g y ), where g x and g y are the gradient components along the horizontal and vertical directions, and the magnitude of the gradient may be defined as √(g x ² + g y ²).
  • the boundary weight module 475 determines a boundary weight W E for the pixel (x,y) based on the fusion gradient magnitude E I .
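A small sketch of the gradient and boundary-weight computation using SciPy's Sobel operator follows. The product E I = E D · E A mirrors the description above; the exponential form of the weight is only an assumed example of a function that becomes small at strong boundaries, since the exact expression is not reproduced in this text.

```python
import numpy as np
from scipy import ndimage

def fusion_gradient_and_weight(disparity, brightness, sigma=1.0):
    """Fusion gradient E_I = E_D * E_A and an assumed boundary weight W_E."""
    def grad_mag(img):
        img = img.astype(np.float64)
        gx = ndimage.sobel(img, axis=1)        # horizontal component g_x
        gy = ndimage.sobel(img, axis=0)        # vertical component g_y
        return np.hypot(gx, gy)                # sqrt(g_x^2 + g_y^2)

    e_d = grad_mag(disparity)                  # depth/disparity gradient magnitude
    e_a = grad_mag(brightness)                 # brightness gradient magnitude
    e_i = e_d * e_a                            # fusion gradient (product, per the text)
    w_e = np.exp(-e_i / sigma)                 # assumed edge-stopping weight
    return e_i, w_e
```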
  • the boundary weight module 475 may determine respective boundary weights for all or some of the pixels.
  • the fusion energy module 480 determines a fusion energy based on one or more boundary weights determined by the boundary weight module 475 . In some embodiments, the fusion energy module 480 determines a fusion energy for the whole enhanced depth image. In other embodiments, the fusion energy module 480 determines a fusion energy for a pixel of the enhanced depth image, e.g., the pixel (x,y).
  • a fusion energy Q may be a combination of a spatial error energy Q S and a conditional entropy error energy Q H .
  • the spatial error energy Q S may be used to reduce spatial image noise, such as local noises.
  • the local noises may be Gaussian or other sensor noises.
  • the spatial error energy Q S can enforce similarity between neighbouring pixel depths, when applicable.
  • the similarity assumption allows smoothing of the depth map within the continuous object surface areas with gradual depth changes. This smoothing effect may be disabled in depth transitions at object boundaries due to the adjacent pixel boundary weights.
  • the spatial error energy Q S can be minimized by reducing a difference between depth values of adjacent pixels.
  • the conditional entropy error energy Q H can enhance depth discontinuities and suppress other non-local noise by maximizing the correlation between the estimated depth image and the brightness image.
  • the conditional entropy error energy Q H may indicate a measure of uncertainty in the depth value of a pixel given the brightness value of the pixel.
  • the fusion energy module 480 determines the spatial error energy term Q S based on a horizontal error ε H and a vertical error ε V .
  • ε H and ε V can be defined in terms of differences between the value of the pixel and the values of its horizontal and vertical neighbours in a candidate depth image D̂.
  • D̂ represents all possible solutions for the resulting depth image, i.e., the enhanced depth image.
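The filed expressions for ε H, ε V, and Q S are not reproduced here; one plausible per-pixel form consistent with the description (neighbour differences gated by boundary weights so that smoothing is suppressed at depth transitions) would be, purely as an assumed sketch:

```latex
% Assumed sketch of the spatial error term (not quoted from the disclosure)
\epsilon_H(x,y) = \hat{D}(x,y) - \hat{D}(x+1,y), \qquad
\epsilon_V(x,y) = \hat{D}(x,y) - \hat{D}(x,y+1)
\\
Q_S(x,y) = W_E(x+1,y)\,\epsilon_H(x,y)^{2} + W_E(x,y+1)\,\epsilon_V(x,y)^{2}
```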
  • the fusion energy module 480 may determine the conditional entropy error energy Q H based on mutual information or conditional entropy.
  • the conditional error energy Q H may be defined in terms of the joint and conditional probabilities of the depth and brightness values of the pixel (x,y):
  • p(D(x,y), A(x,y)) represents the joint probability value for the depth value D(x,y) and the brightness value A(x,y) of the pixel (x,y), and p(D(x,y)|A(x,y)) represents the conditional probability of the depth value given the brightness value.
  • the fusion energy module 480 may determine a mutual information energy Q M based on mutual information (MI) between the disparity image (or the depth image) and the brightness image.
  • MI can measure the amount of information that one variable (e.g., one of the two images) contains about another variable (e.g., the other image).
  • MI can be used to reduce uncertainty in one variable given the information in the other variable.
  • MI can be used in multimodal imaging. It can be robust to outliers, efficient to calculate, and can provide smooth cost functions on which to optimize. Assuming D and A are discrete random variables, MI can be defined as MI(D, A) = H(D) − H(D/A), where H(D) is the entropy of the depth values and H(D/A) is the conditional entropy of the depth values given the brightness values.
  • given a depth image and a brightness image, the conditional entropy $H(D \mid A)$ can be calculated, where the conditional entropy is a measure of how much uncertainty remains about the depth D given the brightness A, that is, $H(D \mid A) = -\sum_{a,d} p_{AD}(a,d) \log p_{AD}(d \mid a)$, where:
  • $p_{AD}(a,d)$ is the joint probability of brightness and depth values, and
  • $p_{AD}(d \mid a)$ is the conditional probability of the depth value given the brightness value.
  • D may be discretised and may have a finite set of values.
  • An alternative is to use disparities instead of depths, since disparities can be bounded and managed in a simpler way.
  • disparities and brightness values may be discretised into given numbers of bins, such as $N_{Dt}$ and $N_{IR}$ respectively.
  • the joint probability distribution p(a,d) can be estimated by normalizing their joint histogram. Disparities may be bounded to $[0, Dt_{max}]$, where the maximum disparity $Dt_{max}$ corresponds to the minimum depth. A sketch of this estimation is shown below.
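A minimal sketch of this estimation, assuming NumPy arrays as inputs: disparities and brightness values are discretised into bins, the joint histogram is normalized into a joint probability table, and the conditional entropy H(D|A) and the mutual information MI(D, A) are computed from it. The bin counts n_dt and n_ir stand in for N_Dt and N_IR, and the binning ranges are illustrative choices.

```python
import numpy as np

def conditional_entropy_and_mi(disparity, brightness, n_dt=64, n_ir=64, dt_max=None):
    """Estimate H(D|A) and MI(D, A) from a normalized joint histogram."""
    if dt_max is None:
        dt_max = float(disparity.max())        # maximum disparity corresponds to the minimum depth
    d = np.clip(np.asarray(disparity, dtype=np.float64), 0.0, dt_max).ravel()
    a = np.asarray(brightness, dtype=np.float64).ravel()
    # Joint histogram of (brightness, disparity), normalized into a joint pmf p_AD(a, d).
    hist, _, _ = np.histogram2d(a, d, bins=(n_ir, n_dt))
    p_ad = hist / hist.sum()
    p_a = p_ad.sum(axis=1, keepdims=True)      # marginal p_A(a)
    p_d = p_ad.sum(axis=0, keepdims=True)      # marginal p_D(d)
    nz = p_ad > 0                              # only non-zero cells contribute
    eps = 1e-30
    # H(D|A) = -sum_{a,d} p_AD(a,d) * log p_AD(d|a), with p_AD(d|a) = p_AD(a,d) / p_A(a)
    h_d_given_a = -np.sum(p_ad[nz] * np.log((p_ad / (p_a + eps))[nz]))
    # MI(D, A) = sum_{a,d} p_AD(a,d) * log( p_AD(a,d) / (p_A(a) * p_D(d)) )
    mi = np.sum(p_ad[nz] * np.log((p_ad / (p_a * p_d + eps))[nz]))
    return h_d_given_a, mi
```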
  • the fusion energy module 480 further aggregates the spatial error energy Q S and the conditional error energy Q H to determine the fusion energy Q of the pixel.
  • the fusion energy Q may be represented as a combination (e.g., a weighted sum) of the spatial error energy Q_S and the conditional entropy error energy Q_H; a sketch of such a per-pixel energy follows the next item.
  • the boundary weight W_E, which is related to the fusion gradient magnitude E_I, can be used to preserve boundaries.
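Building on the terms described above, here is a minimal sketch of a per-pixel fusion energy of the form Q = Q_S + lam * Q_H. The squared-difference spatial term over a 4-neighbourhood, the way the boundary weight scales it, and the weighting factor lam are assumptions for illustration; the conditional term is supplied by the caller (for example, a value derived from the joint probabilities estimated in the histogram sketch above).

```python
def fusion_energy(d_candidate, x, y, depth, boundary_weight, q_h_term, lam=1.0):
    """Per-pixel fusion energy Q = Q_S + lam * Q_H for a candidate depth at (x, y).

    Q_S penalizes squared differences with the 4-neighbour depths, scaled by the
    boundary weight so that smoothing is suppressed at object boundaries.
    q_h_term is a callable returning the conditional-entropy-style penalty for
    the candidate depth (its exact form is not reproduced here).
    """
    h, w = depth.shape
    q_s = 0.0
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            # spatial error: encourage similarity to adjacent depths, except at boundaries
            q_s += boundary_weight[y, x] * (d_candidate - depth[ny, nx]) ** 2
    return q_s + lam * q_h_term(d_candidate)
```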
  • the optimization module 485 executes an optimization process, in which the optimization module 485 optimizes fusion energy to determine enhanced depth values.
  • the optimization module 485 optimizes the fusion energy by minimizing the image fusion energy.
  • the optimization process may be performed on a pixel level, i.e., the optimization module 485 minimizes the fusion energy for a pixel to determine the enhanced depth value of the pixel.
  • the optimization processes for different pixels may be independent from each other.
  • D*(x,y), the enhanced depth value of the pixel, is the depth value that minimizes the fusion energy for the pixel, i.e., $D^*(x,y) = \arg\min_{\hat{D}(x,y)} Q$.
  • the optimization process may be an iterative process.
  • the depth estimation system may run separate optimization processes for all or a subset of pixels in the depth image.
  • the optimization module 485 uses a gradient descent approach to minimize the fusion energy of the pixel.
  • the gradient descent approach includes an adaptive learning rate, which allows a fast approximation during the first iterations and slows down progressively as the optimization proceeds; an illustrative sketch of this per-pixel optimization is shown below.
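A minimal sketch of this per-pixel optimization, assuming the fusion energy for the pixel is available as a callable (for example, the fusion_energy sketch above wrapped for a fixed pixel). The central-difference gradient approximation, the initial rate, and the multiplicative decay schedule are illustrative assumptions.

```python
def refine_pixel_depth(d0, pixel_energy, lr0=0.1, decay=0.95, n_iters=50, eps=1e-3):
    """Minimize a per-pixel fusion energy Q(d) by gradient descent with a decaying rate."""
    d = float(d0)
    lr = lr0
    for _ in range(n_iters):
        # central-difference approximation of dQ/dd
        grad = (pixel_energy(d + eps) - pixel_energy(d - eps)) / (2.0 * eps)
        d -= lr * grad
        lr *= decay        # fast approximation at first, progressively slower steps
    return d

# Hypothetical usage with a toy quadratic energy pulling the depth toward 2.0:
enhanced_value = refine_pixel_depth(1.2, lambda d: (d - 2.0) ** 2)
```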
  • the result module 490 generates the enhanced depth image based on the enhanced depth values determined by the optimization module 485 . For instance, the result module 490 may replace the depth value of a pixel with the enhanced depth value.
  • the enhanced depth image represents better depth estimation, especially for one or several boundaries of the object 140 .
  • FIG. 5 A illustrates an example depth image 510 according to some embodiments of the present disclosure.
  • the depth image 510 may be generated by the depth module 440 described above in conjunction with FIG. 4 .
  • FIG. 5 B illustrates an example brightness image 520 corresponding to the depth image 510 in FIG. 5 A according to some embodiments of the present disclosure.
  • the brightness image 520 may be generated by the brightness module 450 described above in conjunction with FIG. 4 .
  • FIG. 5 C illustrates an example depth enhanced image 530 generated from the depth image 510 in FIG. 5 A and the brightness image 520 in FIG. 5 B according to some embodiments of the present disclosure.
  • the three images 510, 520, and 530 capture an object, an example of which is the object 140 in FIG. 1.
  • the object has a flat plane 540 and an edge 550 . A part of the edge is enclosed in the dashed oval shapes in FIGS. 5 A- 5 C .
  • the brightness image 520 shows a cleaner edge than the depth image 510 .
  • the brightness image 520 can be used to enhance depth estimation of the object, particularly the edge 550 of the object.
  • the brightness image 520 is fused with the depth image 510 through an energy model to generate the depth enhanced image 530 .
  • the depth enhanced image 530 shows better depth estimation than the depth image 510. As shown in FIGS. 5A and 5C, the edge in the depth enhanced image 530 is cleaner than in the depth image 510.
  • the depth enhanced image 530 may be generated by using an energy model, for which the depth image 510 may be used as an input image and the brightness image 520 may be used as a guidance image.
  • the depth enhanced image 530 may be generated by the depth enhancement process described above in conjunction with the depth enhancement module 460 . For instance, the pixels representing the edge 550 are identified and new depth values for the pixels are determined based on the values of the pixels in the brightness image 520 . The new depth values of the pixels are used to generate the depth enhanced image 530 .
  • FIGS. 5 A and 5 C also show that the flat plane 540 is smoother in the depth enhanced image 530 .
  • pixels representing the flat plane 540 are also identified. For each of these pixels, a box is defined. The box may be centered at the pixel and include other pixels that surround the pixel. A new depth value of the pixel is determined based on depth values of the other pixels in the box. For instance, the new depth value of the pixel may be an average of the depth values of the other pixels. The new depth values of the pixels are used to generate the depth enhanced image 530. An illustrative sketch of this box averaging is shown below.
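A minimal sketch of this box-averaging step for pixels identified as belonging to a flat plane; the box radius (and thus the box size) is an illustrative choice, and the center pixel is excluded from the average as described above.

```python
import numpy as np

def smooth_flat_plane(depth, plane_mask, box_radius=2):
    """Replace each flat-plane pixel's depth with the average depth of the other
    pixels in a box centered on it. plane_mask marks the flat-plane pixels."""
    out = np.asarray(depth, dtype=np.float64).copy()
    h, w = out.shape
    for y, x in zip(*np.nonzero(plane_mask)):
        y0, y1 = max(0, y - box_radius), min(h, y + box_radius + 1)
        x0, x1 = max(0, x - box_radius), min(w, x + box_radius + 1)
        box = np.asarray(depth[y0:y1, x0:x1], dtype=np.float64)
        # average of the other pixels in the box (exclude the center pixel itself)
        out[y, x] = (box.sum() - float(depth[y, x])) / (box.size - 1)
    return out
```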
  • the depth enhanced image 530 represents more accurate depth estimation of the object, such as more accurate depth estimation of the flat plane 540 , the edge 550 , or both.
  • FIG. 6 illustrates an example system 600 incorporating a depth estimation system according to some embodiments of the present disclosure.
  • An embodiment of the depth estimation system is the depth estimation system 100 in FIG. 1 .
  • the system 600 includes an imaging device 610 , a processor 620 , a memory 630 , an input device 640 , an output device 650 , and a battery/power circuitry 660 .
  • the system 600 may include fewer, more, or different components.
  • the system 600 may include multiple processors, memories, display devices, input devices, or output devices.
  • the imaging device 610 captures depth images and brightness images.
  • the imaging device 610 may include an illuminator assembly, such as the illuminator assembly 110 , for projecting light into an environment surrounding the system 600 .
  • the imaging device 610 can project modulated light, such as pulsed modulated light or continuous waves of modulated light.
  • the imaging device 610 also includes a camera assembly, such as the camera assembly 120 , that captures light reflected by one or more objects in the environment and generates image data of the one or more objects.
  • the processor 620 can process electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.
  • the processor 620 may perform some or all functions of some or all components of the controller 130, such as depth estimation, enhancing depth estimation with brightness signals, and so on.
  • the processor 620 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), CPUs, GPUs, cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices.
  • the processor 620 may also use depth information (e.g., enhanced depth images) to generate content (e.g., images, audio, etc.) for display to a user of the system by one or more display devices, such as the output device 650 .
  • the content may be used as VR, AR, or MR content.
  • the processor 620 may also generate instructions for other components of the system 600 or another system based on enhanced depth images. For instance, the processor 620 may determine a navigation instruction for a movable device, such as a robot, a vehicle, or other types of movable devices.
  • the navigation instruction may include navigation parameters (e.g., navigation routes, speed, orientation, and so on).
  • the memory 630 may include one or more memory devices such as volatile memory (e.g., DRAM), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive.
  • the memory 630 may include memory that shares a die with the processor 620 .
  • the memory 630 may store processor-executable instructions for controlling operation of the depth estimation system 100 , and/or data captured by the depth estimation system 100 .
  • the memory 630 includes one or more non-transitory computer-readable media storing instructions executable to perform depth estimation enhancement processes, e.g., the method 1000 described below in conjunction with FIG. 10.
  • the instructions stored in the one or more non-transitory computer-readable media may be executed by the processor 620 .
  • the input device 640 may include an audio input device.
  • the audio input device may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output), and so on.
  • the input device 640 may also include one or more other types of input devices, such as accelerometer, gyroscope, compass, image capture device, keyboard, cursor control device (such as a mouse), stylus, touchpad, bar code reader, Quick Response (QR) code reader, sensor, radio-frequency identification (RFID) reader, and so on.
  • the output device 650 may include one or more display devices, such as one or more visual indicators.
  • Example visual indicators include heads-up display, computer monitor, projector, touchscreen display, liquid crystal display (LCD), light-emitting diode display, or flat panel display, and so on.
  • the output device 650 may also include an audio output device.
  • the audio output device may include any device that generates an audible indicator, such as speakers, headsets, or earbuds, and so on.
  • the output device 650 may also include one or more other output devices, such as audio codec, video codec, printer, wired or wireless transmitter for providing information to other devices, and so on.
  • the battery/power circuitry 660 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the system 600 to an energy source separate from the system 600 (e.g., AC line power).
  • FIG. 7 illustrates a mobile device 700 incorporating a depth estimation system according to some embodiments of the present disclosure.
  • An example of the depth estimation system is the depth estimation system 100 in FIG. 1 .
  • the mobile device 700 may be a mobile phone.
  • the mobile device 700 includes an imaging assembly 702 .
  • the imaging assembly 702 may include the illuminator assembly 110 and the camera assembly 120 in FIG. 1.
  • the imaging assembly 702 may illuminate the environment surrounding the mobile device 700 with modulated light (e.g., modulated IR) and capture images of one or more objects in the environment.
  • the mobile device 700 may include one or more processors and one or more memories that can perform some or all of the functions of the controller 130 .
  • the mobile device 700 may determine depth information of one or more objects in an environment surrounding the mobile device 700 .
  • the depth information can be used, by the mobile device 700 , another device, or a user of the mobile device, for various purposes, such as VR, AR, or MR applications, navigation applications, and so on.
  • the mobile device 700 may generate and present images (two-dimensional or three-dimensional images) based on the depth information of the environment, and the images may represent virtual objects that do not exist in the real-world environment.
  • the images may augment the real-world objects in the environment so that a user of the mobile device 700 may have an interactive experience of the real-world environment where the real-world objects that reside in the real world are enhanced by computer-generated virtual objects.
  • FIG. 8 illustrates an entertainment system 800 incorporating a depth estimation system according to some embodiments of the present disclosure.
  • a user 808 may interact with the entertainment system via a controller 810 , for example to play a video game.
  • the entertainment system 800 includes a console 802 and display 804 .
  • the console 802 may be a video gaming console configured to generate images of a video game on the display 804 .
  • the entertainment system 800 may include more, fewer, or different components.
  • the console 802 includes an imaging assembly 806 .
  • the imaging assembly 806 may include the illuminator assembly 110 and the camera assembly 120 in FIG. 1 .
  • the imaging assembly 806 may illuminate the environment surrounding the entertainment system 800 with modulated light (e.g., modulated IR) and capture modulated light reflected by one or more objects in the environment to generate images of the objects, such as the user 808 , controller 810 , or other objects.
  • the console 802 may include one or more processors and one or more memories that can perform some or all of the functions of the controller 130 .
  • the console 802 may determine depth information of one or more objects in the environment.
  • the depth information may be used to present images to the user on the display 804 or for control of some other aspect of the entertainment system 800 .
  • the user 808 may control the entertainment system 800 with hand gestures, and the gestures may be determined at least in part through the depth information.
  • the console 802 may generate or update display content (e.g., images, audio, etc.) based on the depth information and may also instruct the display 804 to present the display content to the user 808 .
  • FIG. 9 illustrates an example robot 902 incorporating a depth estimation system according to some embodiments of the present disclosure.
  • the robot 902 includes an imaging assembly 904 that may include the illuminator assembly 110 and the camera assembly 120 in FIG. 1 .
  • the imaging assembly 904 may illuminate the environment surrounding the robot 902 with modulated light (e.g., modulated IR) and capture images of one or more objects in the environment.
  • the robot 902 may include a computing device that can perform some or all of the functions of the controller 130 .
  • the computing device may include one or more processors and one or more memories.
  • the computing device may determine depth information of one or more objects in the environment.
  • the robot 902 may be mobile and the computing device may use the depth information to assist in navigation and/or motor control of the robot 902 .
  • the computing device may determine a navigation instruction based on the depth information.
  • the navigation instruction may include a navigation route of the robot 902 .
  • the robot 902 may navigate in the environment in accordance with the navigation instruction.
  • the depth estimation system described herein may be used in other applications, such as autonomous vehicles, security cameras, and so on.
  • FIG. 10 is a flowchart showing a method 1000 of using an energy model to enhance depth estimation with a brightness image, according to some embodiments of the present disclosure.
  • the method 1000 may be performed by the controller 130 .
  • although the method 1000 is described with reference to the flowchart illustrated in FIG. 10, many other methods of using an energy model to enhance depth estimation with a brightness image may alternatively be used.
  • the order of execution of the steps in FIG. 10 may be changed.
  • some of the steps may be changed, eliminated, or combined.
  • the controller 130 converts, in 1010 , a depth image into a disparity image.
  • the depth image includes a plurality of depth pixels.
  • the disparity image includes a plurality of disparity pixels.
  • a disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels. In some embodiments, the disparity value is proportional to a reciprocal of the depth value, as in the sketch below.
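As an illustration of this conversion, the sketch below maps a depth image to a disparity image with the disparity value proportional to the reciprocal of the depth value. The proportionality constant k and the optional clipping to [0, Dt_max] (where the maximum disparity corresponds to the minimum depth) are assumptions for illustration.

```python
import numpy as np

def depth_to_disparity(depth, k=1.0, dt_max=None, eps=1e-6):
    """Convert a depth image into a disparity image, disparity = k / depth."""
    disparity = k / np.maximum(np.asarray(depth, dtype=np.float64), eps)  # avoid division by zero
    if dt_max is not None:
        disparity = np.clip(disparity, 0.0, dt_max)  # bound disparities to [0, Dt_max]
    return disparity
```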
  • the controller 130 determines, in 1020 , a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image.
  • the gradient magnitude of the target depth pixel in the depth image may be a difference (e.g., a weighted difference) between the target depth pixel and a depth pixel that is adjacent to the target depth pixel in the depth image (“adjacent depth pixel”).
  • the difference may be a difference between the depth value of the target depth pixel and the depth value of the adjacent depth pixel.
  • the gradient magnitude of the brightness pixel in a brightness image may be a difference (e.g., a weighted difference) between the brightness pixel and another brightness pixel that is adjacent to the brightness pixel in the brightness image (“adjacent brightness pixel”).
  • the difference may be a difference between the brightness value of the brightness pixel and the brightness value of the adjacent brightness pixel.
  • the brightness image and the depth image capture a same object.
  • the brightness image comprises a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels.
  • the target depth pixel represents a same locus of an object as the brightness pixel.
  • the depth image and the brightness image are generated based on image data from a same image sensor.
  • the controller 130 may instruct an illuminator assembly to project modulated light into a local area including the object.
  • the controller 130 may also instruct a camera assembly to capture reflected light from at least a portion of the object.
  • the controller 130 may generate the depth image based on a phase shift between the reflected light and the modulated light projected into the local area.
  • the controller 130 may generate the brightness image based on brightness of the reflected light.
  • the reflected light may be first reflected light, and the controller 130 can instruct the camera assembly to capture second reflected light from at least the portion of the object and generate the brightness image based on brightness of the second reflected light.
  • the second reflected light has a different wavelength from the first reflected light.
  • the controller 130 determines, in 1030 , an energy for the target depth pixel based on the boundary weight.
  • the controller 130 determines, in 1040 , a new depth value of the target depth pixel by optimizing the energy.
  • the controller 130 determines a spatial error energy for the target depth pixel based on the boundary weight.
  • the controller 130 may optimize the energy by optimizing the spatial error energy, e.g., by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image.
  • the controller 130 may also determine a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image.
  • the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
  • the controller 130 updates, in 1050, the depth image by assigning the new depth value to the target depth pixel.
  • the controller 130 may generate an enhanced depth image based on the depth value.
  • the enhanced depth image represents better depth estimation than the depth image, as the depth value of the target depth pixel in the enhanced depth image has a better accuracy than the depth value of the target depth pixel in the depth image.
  • the features discussed herein can be applicable to automotive systems, safety-critical industrial applications, medical systems, scientific instrumentation, wireless and wired communications, radio, radar, industrial process control, audio and video equipment, current sensing, instrumentation (which can be highly precise), and other digital-processing-based systems.
  • components of a system such as filters, converters, mixers, amplifiers, digital logic circuitries, and/or other components can readily be replaced, substituted, or otherwise modified in order to accommodate particular circuitry needs.
  • complementary electronic devices, hardware, software, etc. offer an equally viable option for implementing the teachings of the present disclosure related to enhanced depth estimation in various systems.
  • Parts of various systems for implementing the depth estimation enhancement proposed herein can include electronic circuitry to perform the functions described herein.
  • one or more parts of the system can be provided by a processor specially configured for carrying out the functions described herein.
  • the processor may include one or more application-specific components, or may include programmable logic gates which are configured to carry out the functions described herein.
  • the circuitry can operate in analog domain, digital domain, or in a mixed-signal domain.
  • the processor may be configured to carry out the functions described herein by executing one or more instructions stored on a non-transitory computer-readable storage medium.
  • any number of electrical circuits of the present figures may be implemented on a board of an associated electronic device.
  • the board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically.
  • Any suitable processors (inclusive of DSPs, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc.
  • Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself.
  • the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions.
  • the software or firmware providing the emulation may be provided on non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities.
  • the electrical circuits of the present figures may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application-specific hardware of electronic devices.
  • In another example embodiment, the electrical circuits of the present figures may be implemented as a system on chip (SOC) package. An SOC represents an integrated circuit that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio-frequency (RF) functions, all of which may be provided on a single chip substrate.
  • Other embodiments may include a multi-chip-module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package.
  • references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments.
  • “or” as used in a list of items indicates an inclusive list such that, for example, a list of [at least one of A, B, or C] means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).
  • the term “connected” means a direct electrical connection between the things that are connected, without any intermediary devices/components
  • the term “coupled” means either a direct electrical connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices/components
  • the term “circuit” means one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function.
  • the terms “substantially,” “approximately,” “about,” etc. may be used to generally refer to being within +/−20% of a target value, e.g., within +/−10% of a target value, based on the context of a particular value as described herein or as known in the art.
  • the term “connection” means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof.
  • references to “A and/or B”, when used in conjunction with open-ended language such as “comprising” may refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
  • “at least one of A and B” may refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
  • the term “between” is to be inclusive unless indicated otherwise.
  • “between A and B” includes A and B unless indicated otherwise.
  • Example 1 provides a method, including: converting a depth image including a plurality of depth pixels into a disparity image including a plurality of disparity pixels, where a disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels; determining a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image; determining an energy for the target depth pixel based on the boundary weight; determining a new depth value of the target depth pixel by optimizing the energy; and updating the depth image by assigning the new depth value to the target depth pixel.
  • Example 2 provides the method of example 1, where: the brightness image and the depth image capture a same object, the brightness image includes a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels.
  • Example 3 provides the method of example 1, where the target depth pixel represents a same locus of an object as the brightness pixel.
  • Example 4 provides the method of example 1, where determining the energy for the target depth pixel based on the boundary weight includes: determining a spatial error energy for the target depth pixel based on the boundary weight, where optimizing the energy includes optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image.
  • Example 5 provides the method of example 4, where determining the energy for the target depth pixel based on the boundary weight further includes: determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, where the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
  • Example 6 provides the method of example 1, where the disparity value is proportional to a reciprocal of the depth value.
  • Example 7 provides the method of example 1, where the depth image and the brightness image are generated based on image data from a same image sensor.
  • Example 8 provides the method of example 1, further including: instructing an illuminator assembly to project modulated light into a local area including an object; instructing a camera assembly to capture reflected light from at least a portion of the object; and generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area.
  • Example 9 provides the method of example 8, further including: generating the brightness image based on brightness of the reflected light.
  • Example 10 provides the method of example 8, where the reflected light is first reflected light, and the method further includes: instructing the camera assembly to capture second reflected light from at least the portion of the object; and generating the brightness image based on brightness of the second reflected light, where the second reflected light has a different wavelength from the first reflected light.
  • Example 11 provides a system, including: an illuminator assembly configured to project modulated light into a local area including an object; a camera assembly configured to capture reflected light from at least a portion of the object; and a controller configured to: generate a depth image from the reflected light, the depth image including a plurality of depth pixels and capturing at least a portion of the object, generate a brightness image including a plurality of brightness pixels and capturing at least the portion of the object, each brightness pixel corresponding to a different depth pixel, for each respective depth pixel of the plurality of depth pixels, determine a respective energy based on a gradient magnitude of the respective depth pixel in the depth image and a gradient magnitude of the corresponding brightness pixel in the brightness image, and generate an enhanced depth image by fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels.
  • Example 12 provides the system of example 11, where fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels includes: for each respective depth pixel of the plurality of depth pixels, optimizing the respective energy.
  • Example 13 provides the system of example 12, where the controller is configured to determine the respective energy based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image by: determining a spatial error energy for the respective depth pixel based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image, where optimizing the respective energy includes optimizing the spatial error energy by reducing a difference between a depth value of the respective depth pixel and a depth value of another depth pixel that is adjacent to the respective depth pixel in the depth image.
  • Example 14 provides the system of example 11, where the controller is configured to determine the respective energy based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image further by: determining a conditional error energy for the respective depth pixel based on a depth value of the respective depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, where the conditional error energy indicates a measure of uncertainty in the depth value of the respective depth pixel given the brightness value of the brightness pixel.
  • Example 15 provides the system of example 11, where the controller is configured to generate the depth image and the brightness image based on the reflected light by: generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area; and generating the brightness image based on brightness of the reflected light.
  • Example 16 provides one or more non-transitory computer-readable media storing instructions executable to perform operations, the operations including: converting a depth image including a plurality of depth pixels into a disparity image including a plurality of disparity pixels, where a disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels; determining a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image; determining an energy for the target depth pixel based on the boundary weight; determining a new depth value of the target depth pixel by optimizing the energy; and updating the depth image by assigning the new depth value to the target depth pixel.
  • Example 17 provides the one or more non-transitory computer-readable media of example 16, where: the brightness image and the depth image capture a same object, the brightness image includes a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels.
  • Example 18 provides the one or more non-transitory computer-readable media of example 16, where determining the energy for the target depth pixel based on the boundary weight includes: determining a spatial error energy for the target depth pixel based on the boundary weight, where optimizing the energy includes optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image.
  • Example 19 provides the one or more non-transitory computer-readable media of example 18, where determining the energy for the target depth pixel based on the boundary weight further includes: determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, where the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
  • Example 20 provides the one or more non-transitory computer-readable media of example 16, where the depth image and the brightness image are generated based on image data from a same image sensor.

Abstract

A depth estimation system can use an image energy model to enhance depth estimation using a brightness image. Light is projected onto an object. The object reflects at least a portion of the projected light. The reflected light is at least partially captured by an image sensor. The depth estimation system may generate a depth image based on a phase shift between the captured light and the projected light and generate a brightness image based on brightness of the captured light. The depth estimation system may determine a fusion energy based on the depth image and the brightness image and minimize the fusion energy to determine a new depth value of a pixel. The depth estimation system can assign the new depth value to the pixel and generate an enhanced depth image. The enhanced depth image provides better depth estimation than the original depth image.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application claims the benefit of U.S. Provisional Patent Application No. 63/310,859, filed Feb. 16, 2022, which is incorporated by reference in its entirety.
  • TECHNICAL FIELD OF THE DISCLOSURE
  • The present disclosure relates generally to depth estimation and, more specifically, to using an energy model to enhance depth estimation with brightness images.
  • BACKGROUND
  • One technique to measure depth is to directly or indirectly calculate the time it takes for a signal to travel from a signal source on a sensor to a reflective surface and back to the sensor. The travel time is proportional to the distance from the sensor to the reflective surface. This travel time is commonly referred to as time of flight (ToF). Various types of signals can be used with ToF sensors, the most common being sound and light. Some sensors use light as their carrier given the advantages of light with respect to speed, range, power, and low weight.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:
  • FIG. 1 illustrates a depth estimation system according to some embodiments of the present disclosure;
  • FIG. 2A illustrates a continuous wave of a projected signal 210 according to some embodiments of the present disclosure;
  • FIG. 2B illustrates a continuous wave of a captured signal 220 according to some embodiments of the present disclosure;
  • FIG. 3 illustrates a cycle of continuous waves of modulated light according to some embodiments of the present disclosure;
  • FIG. 4A is a block diagram illustrating a controller according to some embodiments of the present disclosure;
  • FIG. 4B is a block diagram illustrating a depth enhancement module according to some embodiments of the present disclosure;
  • FIG. 5A illustrates an example depth image according to some embodiments of the present disclosure;
  • FIG. 5B illustrates an example brightness image corresponding to the depth image in FIG. 5A according to some embodiments of the present disclosure;
  • FIG. 5C illustrates an example depth enhanced image generated from the depth image in FIG. 5A and the brightness image in FIG. 5B according to some embodiments of the present disclosure;
  • FIG. 6 illustrates an example system incorporating a depth estimation system according to some embodiments of the present disclosure;
  • FIG. 7 illustrates a mobile device incorporating a depth estimation system according to some embodiments of the present disclosure;
  • FIG. 8 illustrates an entertainment system incorporating a depth estimation system according to some embodiments of the present disclosure;
  • FIG. 9 illustrates an example robot incorporating a depth estimation system according to some embodiments of the present disclosure; and
  • FIG. 10 is a flowchart showing a method of using an energy model to enhance depth estimation with a brightness image, according to some embodiments of the present disclosure.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE DISCLOSURE
  • Overview
  • Depth estimation is a fundamental task in three-dimensional (3D) computer vision. High quality and dense depth images resulting from ToF camera systems play a fundamental role in many applications, such as robotics, human-computer interaction, indoor navigation, self-driving cars, object tracking, and gesture recognition. ToF camera systems are range imaging systems. A ToF camera system typically includes a light source that projects light and an imaging sensor that receives reflected light. The ToF camera system can estimate the distance between the imaging sensor and an object by measuring the round trip of the light. A continuous-wave ToF camera system can project multiple periods of a continuous light wave and determine the distance based on the phase difference between the projected light and the received reflected light. A depth image can be generated based on the phase difference.
  • However, ToF depth maps are often captured with low resolution, different types of noise, or missing values. ToF camera systems often fail to accurately estimate depth at boundaries, such as edges of objects, reflectivity boundaries (e.g., a boundary between two areas that have different reflectivity properties), and so on. The inaccurate depth estimation may limit usage of ToF camera systems in the applications mentioned above. Therefore, improved technology for depth estimation is needed.
  • Embodiments of the present disclosure relate to a depth estimation system that can use an energy model to enhance ToF depth estimation using brightness images. A brightness image may be an active brightness image, such as an infrared (IR) image, or an RGB (red, green, and blue) image. The depth estimation system may simultaneously acquire the brightness image and the ToF depth estimation. The brightness image and the ToF depth estimation may be based on the same light source (e.g., IR) or different light sources (e.g., visible light for the brightness image versus IR for the ToF depth estimation). Brightness images may have better detection of boundaries and can be used to enhance depth images generated by ToF camera systems.
  • An example of the depth estimation system includes an illuminator assembly, a camera assembly, and a controller. The illuminator assembly may project light to illuminate a local area, such as an area that includes an object. The light may be modulated light, such as modulated IR. The illuminator assembly may project pulsed light. Alternatively, the illuminator assembly may project one or more continuous waves, such as continuous waves of different frequencies. The object can reflect at least a portion of the projected light. The camera assembly captures at least a portion of the reflected light and can convert captured photons to charges and accumulate the charges. The depth estimation system may generate a depth image and a brightness image from the charges accumulated in the camera assembly. In some embodiments, the depth image is based on a phase shift between the captured light and the projected light, and the brightness image is based on brightness of the captured light. The depth image may include a plurality of depth pixels, each of which may correspond to a pixel in the brightness image.
  • The depth estimation system may generate an energy model that represents a fusion of the depth image and the brightness image. The depth estimation system further performs an optimization process to minimize the fusion energy. The fusion energy may be an aggregation of a spatial error energy and a conditional error energy. The spatial error energy can be used to eliminate local errors in depth (e.g., local variations produced by common sensor noise), and the conditional error energy can be used to enhance depth discontinuities and suppress non-local noise. The depth estimation system may perform the optimization process on a pixel level. For instance, the depth estimation system may determine a fusion energy for a pixel and minimize the fusion energy to determine a new depth value of the pixel. The depth estimation system may run separate optimization processes for all or a subset of pixels in the depth image. The depth estimation system can generate an enhanced depth image with the new depth values.
  • The depth estimation system can take advantage of brightness images showing cleaner boundaries to enhance depth estimation of boundaries and can also smooth flat planes. Enhanced depth images generated by the depth estimation system can show the boundaries better than regular ToF depth images. With the more accurate depth estimation, the enhanced depth images can be used in various applications.
  • Example Depth Estimation System
  • FIG. 1 illustrates a depth estimation system 100 according to some embodiments of the present disclosure. The depth estimation system 100 may use ToF techniques to generate depth images. The depth estimation system 100 includes an illuminator assembly 110, a camera assembly 120, and a controller 130. The illuminator assembly 110 includes an emitter 160 and a diffuser 165. The camera assembly 120 includes a lens 190 and an image sensor 195. In alternative configurations, different and/or additional components may be included in the depth estimation system 100. Further, functionality attributed to one component of the depth estimation system 100 may be accomplished by a different component included in depth estimation system 100 or a different system than those illustrated. For example, the illuminator assembly 110 may include no diffuser or more than one diffuser. As another example, the camera assembly 120 may include no lens or more than one lens.
  • The illuminator assembly 110 projects light 170 to a local area that includes an object 140. The emitter 160 is a light source that emits light (“emitted light”). In some embodiments, the emitter 160 may include a laser, such as an IR or near-IR (NIR) laser, an edge emitting laser, vertical-cavity surface-emitting laser (VCSEL), and so on. In other embodiments, the emitter 160 may include one or more light-emitting diodes (LEDs). The emitter 160 can emit light in the visible band (i.e., ˜380 nm to 750 nm), in the NIR band (i.e., ˜750 nm to 1 mm), in the ultraviolet band (i.e., 10 nm to 380 nm), in the shortwave IR (SWIR) band (e.g., ˜900 nm to 2200 nm), some other portion of the electromagnetic spectrum, or some combination thereof. In some embodiments, the illuminator assembly 110 may include multiple emitters 160, each of which may emit a different wavelength. For instance, the illuminator assembly 110 may include a first emitter that emits IR and a second emitter that emits visible light. The diffuser 165 spreads out or scatters the emitted light before the light 170 is projected into the local area. The diffuser 165 may also control brightness of the emitted light. In some embodiments, the diffuser 165 may be translucent or semi-transparent. In other embodiments, the illuminator assembly 110 may include more, fewer, or different components. For instance, the illuminator assembly 110 may include one or more additional diffusers to direct light from the emitter 160 to one or more additional objects in the local area.
  • The illuminator assembly 110 may project the light 170 as modulated light, e.g., according to a periodic modulation waveform. An example of the periodic modulation waveform may be a sinusoidally modulated waveform. The frequency of the periodic modulation waveform is the frequency of the modulated light.
  • The illuminator assembly 110 may project one or more continuous waves. For an individual continuous wave, the illuminator assembly 110 may project multiple periods. Different continuous waves may have different wavelengths and frequencies. For instance, the illuminator assembly 110 can project continuous waves having modulation frequencies in a range from 50 MHz to 200 MHz. In an embodiment, the illuminator assembly 110 includes multiple (i.e., at least two) light projectors. The light projectors may project continuous waves having different frequencies. The light projectors may alternate and project the continuous waves at different times. For example, a first light projector projects a first continuous wave having a first frequency during a first period of time. After the first period of time, a second light projector projects a second continuous wave having a second frequency during a second period of time. After the second period of time, a third light projector projects a third continuous wave having a third frequency during a third period of time. The three continuous waves may constitute a cycle. This cycle can repeat.
  • In another embodiment, the illuminator assembly 110 may include one light projector that projects all three continuous waves. In other embodiments, the illuminator assembly 110 may project a different number of continuous waves, such as two or more than three. One cycle may constitute one frame. The total time for a cycle may be 10-20 ms. The illuminator assembly 110 can project light through multiple cycles for obtaining multiple frames. There may be a time gap between cycles. More information regarding modulated light having multiple frequencies is provided below in conjunction with FIG. 2.
  • At least a portion of the object 140 is illuminated by the light 170. For purpose of simplicity and illustration, the object 140 in FIG. 1 has the shape of a cube. In other embodiments, the object 140 may have other shapes or structures. Even though not shown in FIG. 1, the local area may include other objects that can be illuminated by the light. The object reflects the light as reflected light 180, which can be captured by the camera assembly 120.
  • The camera assembly 120 captures image data of at least a portion of the local area illuminated with the light 170. For instance, the camera assembly 120 captures the reflected light 180 and generates image data based on the reflected light 180. The reflected light 180 may be IR. In some embodiments, the camera assembly 120 may also capture visible light reflected by the object 140. The visible light may be projected by the illuminator assembly 110, ambient light, or a combination of both.
  • Even though the camera assembly 120 is separated from the illuminator assembly 110 in FIG. 1, in some embodiments, the camera assembly 120 is co-located with the illuminator assembly 110 (e.g., may be part of the same device). The lens 190 receives the reflected light 180 and directs the reflected light 180 to the image sensor 195. The image sensor 195 includes a plurality of pixels 197. Even though the pixels 197 shown in FIG. 1 are arranged in a column, the pixels 197 of the image sensor 195 may also be arranged in multiple columns.
  • In some embodiments, a pixel 197 includes a photodiode that is sensitive to light and converts collected photons to charges, e.g., photoelectrons. Each of the photodiodes has one or more storage regions that store the charges. The image sensor 195 may be both a ToF sensor and a brightness sensor. A pixel 197 may be a depth-sensing pixel, a brightness-sensing pixel, or both. A depth-sensing pixel is configured to present a depth output signal that is dependent on the distance from the image sensor 195 to the locus of the object 140 imaged onto the depth-sensing pixel. Such distance is a ‘depth’ of the locus of the object 140. Each depth-sensing pixel may independently determine a distance to the object 140 viewed by that pixel. The depth output signals of the depth-sensing pixels in the image sensor 195 can be used to generate a depth image of the local area. A brightness-sensing pixel is configured to present a brightness output signal that is dependent on brightness of light reflected from the locus of the object 140 imaged onto the brightness-sensing pixel. The brightness output signals of the brightness-sensing pixels in the image sensor 195 can be used to generate a brightness image of the local area. The brightness image may be an active brightness image. An example of the brightness image is an IR image. In some embodiments, each pixel 197 of the image sensor 195 may generate both a depth output signal and a brightness output signal from the reflected light that the pixel 197 captures. In other embodiments, the image sensor 195 includes two sets of pixels 197: one set is for sensing depth and the other set is for sensing brightness. The output signals of the image sensor 195 may be analog signals, such as electrical charges.
  • In embodiments where the illuminator assembly 110 projects multiple continuous waves or multiple cycles of modulated light, the image sensor 195 can be synchronized with the projection of the illuminator assembly 110. For example, the image sensor 195 may have one or more exposure intervals, during which the image sensor 195 takes exposures of the portion of the local area and charges are accumulated in the image sensor 195. Outside the exposure interval, the image sensor 195 does not take exposures. In some embodiments, an exposure interval of the image sensor 195 may be synchronized with a continuous wave or cycle projected by the illuminator assembly 110. For instance, the exposure interval starts before or when the continuous wave or cycle starts and ends when or after the continuous wave or cycle ends. In other embodiments, the image sensor 195 may have multiple exposure intervals for a single continuous wave. For instance, the image sensor 195 may take multiple exposures during a continuous wave, and the multiple exposures may correspond to different phase offsets. In an example, there are three exposure intervals for one continuous wave at three different phase offsets, such as 0° (0), 120° (2π/3), and 240° (4π/3). There may be a time gap between the exposure intervals. The time gap may be 1-2 milliseconds (ms). The exposure intervals may have a constant duration, e.g., approximately 100 microseconds (μs). In alternative embodiments, the exposure intervals may have different durations.
  • In some embodiments, the image sensor 195 may use global shutter scanning. The image sensor 195 includes a global shutter that may open and scan during each exposure interval and closes when the exposure interval ends. Additionally or alternatively, the image sensor 195 may include a tunable filter. The tunable filter blocks light from arriving at the detector and may be mounted anywhere in the optical path of the reflected light 180. For example, the tunable filter is attached on top of the image sensor 195 or at the front of the camera assembly 120. The tunable filter can be switched between on (active) and off (inactive). The tunable filter can be inactive during an exposure interval and active when the exposure interval ends. When the tunable filter is inactive, light can pass the tunable filter and reach the image sensor 195. When the tunable filter is active, light is blocked from the image sensor 195.
  • In some embodiments, when the tunable filter is active, it may let light of a certain wavelength (or a certain band of wavelengths) pass but block light of other wavelengths. For instance, the tunable filter may let light of the wavelengths projected by the illuminator assembly 110 (e.g., the light 170) pass, but block light of other wavelengths, which can, for example, reduce noise in the image data captured by the image sensor 195. In an example where the light 170 is IR, the tunable filter may block visible light. In other embodiments, when the tunable filter is active, it can block light of all wavelengths to avoid charge accumulation in the image sensor 195. In embodiments where the tunable filter blocks all light, dark noise calibration of the image sensor 195 can be conducted.
  • The camera assembly 120 may read out stored photoelectrons from the image sensor 195 to obtain image data, e.g., from storage regions of each pixel 197 of the image sensor 195. During the readout, the camera assembly 120 can convert the photoelectrons into digital signals (i.e., analog-to-digital conversion). In embodiments where the illuminator assembly 110 includes multiple light projectors, photoelectrons corresponding to pulses of modulated light projected by different light projectors may be stored in separate storage regions of each photodiode. The camera assembly 120 may read out the separate storage regions to obtain the image data. In some embodiments, the camera assembly 120 may read out all the image data stored in the image sensor 195. In other embodiments, the camera assembly 120 may read out some of the image data stored in the image sensor 195. For example, in embodiments where an exposure interval of the image sensor 195 is synchronized with a continuous wave projected by the illuminator assembly 110, the camera assembly 120 may execute multiple readout intervals for the continuous wave. Each readout interval may correspond to a different phase offset. In an example, there are three readout intervals for one continuous wave at three different phase offsets, such as 0°, 120°, and 240°. There may be a time gap between the readout intervals. The time gap may be 1-2 milliseconds (ms). The readout intervals may have a constant duration, e.g., approximately 100 microseconds (μs). In alternative embodiments, the readout intervals may have different durations.
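  • As an illustration of how readouts at three phase offsets (0°, 120°, and 240°) can be combined, the following sketch applies the classic three-step phase-shifting relations to per-pixel samples. It assumes each readout of a pixel follows a + b·cos(φ + offset), which is a common simplification rather than the exact measurement model of the image sensor 195; the function and variable names are illustrative:

```python
import numpy as np

def three_phase_decode(i0, i120, i240):
    """Recover per-pixel phase shift, amplitude, and offset from three readouts.

    Assumes each readout samples a + b*cos(phi + offset) at offsets of
    0, 120, and 240 degrees (an illustrative model, not necessarily the
    exact correlation scheme of the sensor).
    """
    i0, i120, i240 = (np.asarray(x, dtype=np.float64) for x in (i0, i120, i240))
    phase = np.arctan2(np.sqrt(3.0) * (i240 - i120), 2.0 * i0 - i120 - i240)
    amplitude = np.sqrt((2.0 * i0 - i120 - i240) ** 2
                        + 3.0 * (i240 - i120) ** 2) / 3.0
    offset = (i0 + i120 + i240) / 3.0  # ambient/DC component
    return phase, amplitude, offset
```

  • Under such a model, the recovered phase can feed a depth calculation, and the recovered amplitude can serve as an active brightness value for the same pixel.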
  • The controller 130 controls the illuminator assembly 110 and the camera assembly 120. For instance, the controller 130 provides illumination instructions to the illuminator assembly 110, and the illuminator assembly 110 projects the light 170 in accordance with the illumination instructions. The controller 130 can also provide imaging instructions to the camera assembly 120, and the camera assembly 120 takes exposures and reads out image data in accordance with the imaging instructions.
  • The controller 130 also determines depth information using image data from the camera assembly 120. For instance, the controller 130 can generate depth images from the image data. A depth image includes a plurality of depth pixels. Each depth pixel has a value corresponding to an estimated depth, e.g., an estimated distance from a locus of the object 140 to the image sensor 195. A single depth image may also be referred to as a depth frame or a depth map. In embodiments where the illuminator assembly 110 projects a continuous wave of modulated light, the controller 130 may determine depth information based on the phase shift between the light 170 projected by the illuminator assembly 110 and the reflected light 180. In embodiments where the camera assembly 120 reads out image data corresponding to different phase offsets of modulated light, the controller 130 may perform phase unwrapping to determine depth information. In some embodiments (e.g., embodiments where the illuminator assembly 110 projects multiple cycles of modulated light), the controller 130 may generate multiple depth frames.
  • The controller 130 can also generate a brightness image that corresponds to a depth image. The image data for the brightness image and the image data for the depth image may be generated by the camera assembly 120 from the same light, such as the reflected light 180. In some embodiments, the brightness image and the depth image are generated simultaneously. For instance, the camera assembly 120 may simultaneously read out the image data for the brightness image and the image data for the depth image. In an embodiment, the image data for the brightness image and the image data for the depth image are the same image data. The brightness image may include a plurality of brightness pixels. Each brightness pixel has a value corresponding to a light intensity, e.g., an IR intensity. A brightness pixel in the brightness image may correspond to a depth pixel in the depth image. The brightness pixel and the depth pixel may be generated from light reflected from the same locus of the object 140. For instance, the brightness pixel and the depth pixel may be generated based on signals from the same pixel 197 of the image sensor 195, and the pixel 197 captures the light reflected from the locus of the object 140.
  • The controller 130 can further enhance depth estimation in a depth image by fusing the depth image with a brightness image based on an energy model. The brightness image may show one or more cleaner boundaries of the object 140 than the depth image. For instance, one or more depth pixels that represent at least a portion of a boundary of the object 140 may be invalid. The boundary may be an edge of the object 140. Alternatively, the boundary is a boundary between two areas of the object 140 that have different reflectivity properties, such as a boundary between a fluorescent strip, which has relatively high reflectivity, and a low reflectivity surface. The controller 130 may take advantage of the more accurate information of the boundaries of the object 140 in the brightness image to generate an enhanced depth image, which includes better depth estimation than the original depth image. Certain aspects of the controller 130 are described below in conjunction with FIGS. 4A and 4B.
  • Example Modulated Signals
  • FIG. 2A illustrates a continuous wave of a projected signal 210 according to some embodiments of the present disclosure. The projected signal 210 is a modulated signal, e.g., a modulated light projected by the illuminator assembly 110 in FIG. 1 . The projected signal 210 has a sinusoidally modulated waveform. In the embodiments of FIG. 2A, the sinusoidally modulated waveform may be represented by the following equation:

  • S(t) = As·sin(2πft) + Bs
  • where t denotes time, S denotes the optical power of the projected signal, f is the frequency of the modulated signal (i.e., the modulation frequency), π is the mathematical constant, As denotes the amplitude of the modulated signal, and Bs denotes an offset of the modulated signal that may include an attenuated original offset and/or an offset due to the presence of ambient light (e.g., sunlight or light from artificial illuminants).
  • FIG. 2B illustrates a continuous wave of a captured signal 220 according to some embodiments of the present disclosure. The captured signal 220 is a captured portion of modulated light reflected by an object illuminated by the projected signal 210. The captured signal 220 may be captured by the image sensor 195 and may be at least a portion of the reflected light 180 in FIG. 1 . In the embodiments of FIG. 2A, the captured signal 220 can be represented by the following equation:
  • r(t) = α(As·sin(2πft + φ) + Bs), where φ = 2πfδ and δ = 2d/c
  • where r denotes the optical power of the captured signal 220, α denotes an attenuation factor of the captured signal 220, φ denotes a phase shift between the waveform of the captured signal 220 and the waveform of the projected signal 210, δ is the time delay between the captured signal 220 and the projected signal 210, d denotes the distance from the image sensor 195 to the object 140 (i.e., the depth of the object 140), and c is the speed of light.
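  • The relationships above lend themselves to a small numeric helper. The sketch below assumes a single modulation frequency and an already-unwrapped phase shift; the function name and parameters are illustrative, not taken from this disclosure:

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_phase(phase_shift, modulation_frequency_hz):
    """Convert an (unwrapped) phase shift into a depth, following
    delta = phi / (2*pi*f) and d = c*delta / 2 from the model above."""
    phase_shift = np.asarray(phase_shift, dtype=np.float64)
    time_delay = phase_shift / (2.0 * np.pi * modulation_frequency_hz)
    return SPEED_OF_LIGHT * time_delay / 2.0

# Example: a phase shift of pi/2 at a 100 MHz modulation frequency
# corresponds to a depth of roughly 0.375 m; the unambiguous range at
# that frequency is c / (2*f), about 1.5 m, which is why phase
# unwrapping across multiple frequencies is used.
print(depth_from_phase(np.pi / 2, 100e6))
```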
  • FIG. 3 illustrates a cycle 310 of continuous waves 315A-C, 325A-C, and 335A-C of modulated light according to some embodiments of the present disclosure. The continuous waves 315A-C, 325A-C, and 335A-C are sinusoidal waves in FIG. 3 . In other embodiments, the continuous waves 315A-C, 325A-C, and 335A-C may have different waveforms. The continuous waves 315A-C have a frequency 317, the continuous waves 325A-C have a frequency 327, and the continuous waves 335A-C have a frequency 337. The three frequencies 317, 327, and 337 are different from each other. In the embodiments of FIG. 3 , the frequency 317 is smaller than the frequency 327, and the frequency 327 is smaller than the frequency 337. The three frequencies 317, 327, and 337 may be in a range from 50 to 200 MHz or higher.
  • In an embodiment, the cycle 310 may be a cycle of projecting the modulated light by the illuminator assembly 110 in FIG. 1 . In another embodiment, the cycle 310 may be a cycle of exposure by the camera assembly 120. For instance, the camera assembly 120 may take exposures during the periods of time of the continuous waves 315A-C, 325A-C, and 335A-C and not take exposures beyond these periods of time, even though the illuminator assembly 110 may project modulated light beyond these periods of time. In yet another embodiment, the cycle 310 may be a cycle of readout by the camera assembly 120. For instance, the camera assembly 120 may read out charges accumulated in the image sensor 195 during the periods of time of the continuous waves 315A-C, 325A-C, and 335A-C and not read out charges beyond these periods of time, even though the illuminator assembly 110 may project modulated light beyond these periods of time or the image sensor 195 may take exposures beyond these periods of time. Even though the cycle 310 in FIG. 3 includes three continuous waves for each of the three frequencies, a cycle in other embodiments may include a different number of frequencies or a different number of continuous waves for each frequency.
  • In FIG. 3 , the continuous waves 315A-C have different phase offsets. For instance, the continuous wave 315A has a phase offset of 0°, the continuous wave 315B has a phase offset of 120°, and the continuous wave 315C has a phase offset of 240°. Similarly, the continuous waves 325A-C may have different phase offsets from each other: the continuous wave 325A may have a phase offset of 0°, the continuous wave 325B may have a phase offset of 120°, and the continuous wave 325C may have a phase offset of 240°; and the continuous waves 335A-C may start at different phase offsets from each other: the continuous wave 335A may have a phase offset of 0°, the continuous wave 335B may have a phase offset of 120°, and the continuous wave 335C may have a phase offset of 240°. The continuous waves 315A, 325A, and 335A may each have a phase between 0° and 120°, the continuous waves 315B, 325B, and 335B may each have a phase between 120° and 240°, and the continuous waves 315C, 325C, and 335C may each have a phase between 240° and 360°. In some embodiments, each continuous wave may have a time duration of around 100 μs. A time gap between two adjacent continuous waves may be in a range from 1 to 2 ms.
  • In other embodiments, the cycle 310 may not have multiple continuous waves for each frequency. Rather, the cycle 310 has a single continuous wave for an individual frequency. For instance, the cycle 310 may include a first continuous wave for the frequency 317, a second continuous wave for the frequency 327, and a third continuous wave for the frequency 337. The first, second, and third continuous waves may all start at 0°. There may be a time gap between two adjacent continuous waves of the first, second, and third continuous waves. The cycle 310 may produce image data for the controller 130 to generate a frame. The cycle 310 can be repeated for the controller 130 to generate more frames. The controller 130 may perform phase unwrapping to determine depth information.
  • Example Controller
  • FIG. 4A is a block diagram illustrating the controller 130 according to some embodiments of the present disclosure. The controller 130 includes a database 410, an illuminator module 420, a camera module 430, a depth module 440, a brightness module 450, and a depth enhancement module 460. These modules are software modules implemented on one or more processors, dedicated hardware units, or some combination thereof. Some embodiments of the controller 130 have different components than those described in conjunction with FIG. 4 . Similarly, functions of the components described in conjunction with FIG. 4 may be distributed among other components in a different manner than described in conjunction with FIG. 4 . For example, some or all of the functionality described as performed by the controller 130 may be performed by a device that incorporates a depth estimation system, such as the system 600 in FIG. 6 , the mobile device 700 in FIG. 7 , the entertainment system 800 in FIG. 8 , the robot 900 in FIG. 9 , or other devices.
  • The database 410 stores data generated and/or used by the controller 130. The database 410 is a memory, such as a ROM, DRAM, SRAM, or some combination thereof. The database 410 may be part of a larger digital memory of a depth estimation system, such as the depth estimation system 100, or a device that incorporates the depth estimation system. In some embodiments, the database 410 stores image data from the camera assembly 120, depth images generated by the depth module 440, brightness images generated by the brightness module 450, enhanced depth images generated by the depth enhancement module 460, parameters for energy models generated by the depth enhancement module 460, parameters for optimizing energy models, and so on. In some embodiments, the database 410 may store calibration data and/or other data from other components, such as depth instructions. Depth instructions include illuminator instructions generated by the illuminator module 420 and camera instructions generated by the camera module 430.
  • The illuminator module 420 controls the illuminator assembly 110 via illuminator instructions. The illuminator instructions include one or more illumination parameters that control how light is projected by the illuminator assembly 110. An illumination parameter may describe, e.g., waveform, wavelength, amplitude, frequency, phase offset, starting time of each continuous wave, ending time of each continuous wave, duration of each continuous wave, some other parameter that controls how the light is projected by the illuminator assembly 110, or some combination thereof. The illuminator module 420 may retrieve the illuminator instructions from the database 410. Alternatively, the illuminator module 420 generates the illuminator instructions. For example, the illuminator module 420 determines the one or more illumination parameters. In embodiments where the illuminator assembly 110 includes multiple modulated light projectors, the illuminator module 420 may determine separate illumination parameters for different light projectors.
  • The camera module 430 controls the camera assembly 120 via camera instructions. The camera module 430 may retrieve camera instructions from the database 410. Alternatively, the camera module 430 generates camera instructions based in part on the illuminator instructions generated by the illuminator module 420. The camera module 430 determines exposure parameters (such as starting time, ending time, or duration of an exposure interval, etc.) of the camera assembly 120, e.g., based on one or more illumination parameters (such as duration of a continuous wave, etc.) specified in the illuminator instructions. For example, the camera module 430 determines that the duration of an exposure equals the duration of a continuous wave. Sometimes the camera module 430 determines that the duration of an exposure is longer than the duration of a continuous wave to avoid failure to collect a whole continuous wave due to delay in incoming light. The duration of an exposure can be 20% longer than the duration of a continuous wave. In some embodiments, the camera module 430 also determines a number of exposure intervals for each continuous wave of modulated light projected by the illumination assembly 110.
  • The camera instructions may include readout instructions for controlling readouts of the camera assembly 120. The camera module 430 may determine readout parameters (such as starting time, ending time, or duration of a readout interval, etc.) of the camera assembly 120. For example, the camera module 430 determines a starting time for each of one or more readout intervals, e.g., based on one or more illumination parameters (such as phase, waveform, starting time, or other parameters of a continuous wave). The camera module 430 may also determine a duration for each readout interval, the number of readout intervals for a continuous wave, the time gap between adjacent readout intervals, the number of readout cycles, other readout parameters, or some combination thereof.
  • The depth module 440 is configured to generate depth images indicative of distance to the object 140 being imaged, e.g., based on digital signals indicative of charge accumulated on the image sensor 195. The depth module 440 may analyze the digital signals to determine a phase shift exhibited by the light (e.g., the phase shift φ described above in conjunction with FIG. 2 ) to determine a ToF (e.g., the ToF δ described above in conjunction with FIG. 2 ) of the light and further to determine a depth value (e.g., the distance d described above in conjunction with FIG. 2 ) of the object 140.
  • In embodiments where the illumination assembly 110 projects multiple continuous waves that have different phase offsets, the depth module 440 can generate a depth image through phase unwrapping. Taking the cycle 310 in FIG. 3 as an example, the depth module 440 may determine wrapped distances, each of which corresponds to a respective phase. The depth module 440 can further estimate unwrapped depths for each of the wrapped distances. The depth module 440 further determines Voronoi vectors corresponding to the unwrapped depths and generates a lattice of Voronoi cells. Each unwrapped depth corresponds to a Voronoi cell of the lattice. In alternate embodiments, the depth module 440 is configured to determine depth information using a ratio of charge between the storage regions associated with each photodiode of the camera assembly 120.
  • The brightness module 450 generates brightness images, such as active brightness images. In some embodiments, for a depth image generated by the depth module 440, the brightness module 450 generates a corresponding brightness image. The brightness module 450 may generate a brightness image in accordance with a request for the brightness image from the depth enhancement module 460. The depth image may be generated based on a phase shift between first captured light and projected light, while the brightness module 450 may generate the corresponding brightness image based on the intensity or amplitude of second captured light. In some embodiments, the first captured light and the second captured light are the same light. In other embodiments, the second captured light is different from the first captured light. For instance, the first captured light may be IR, while the second captured light may be visible light.
  • In embodiments where the depth module 440 generates the depth image based on charges accumulated in a set of pixels 197 of the image sensor 195, the brightness module 450 may generate the corresponding brightness image based on charges accumulated in all or some of the pixels 197 in the set. The corresponding brightness image includes a plurality of brightness pixels. Each brightness pixel may correspond to a depth pixel in the depth image. For instance, the values of the depth pixel and corresponding brightness pixel may be both determined based on charges accumulated in a same pixel 197 in the image sensor 195. The charges accumulated in the pixel 197 may be converted from photons of modulated light reflected by a locus of the object 140. The value of the depth pixel may be determined by the depth module 440 based on a phase shift in the waveform of the modulated light. The value of the corresponding brightness pixel may be determined based on the accumulated charge in that pixel 197 or a different pixel 197.
  • The depth enhancement module 460 enhances depth estimation made by the depth module 440 based on brightness images generated by the brightness module 450. The depth enhancement module 460 may retrieve a depth image generated by the depth module 440 and a corresponding brightness image generated by the brightness module 450. In some embodiments, the depth enhancement module 460 may instruct the brightness module 450 to generate the corresponding brightness image. The depth enhancement module 460 may enhance depth estimation of the depth module 440 by using an energy model to fuse the depth image and the corresponding brightness image. The depth enhancement module 460 may generate an energy function based on the depth image and the corresponding brightness image and then generate an enhanced depth image through an optimization of the image energy function. The depth enhancement process by the depth enhancement module 460 converts the depth image into the enhanced depth image. The brightness image may be used as a guidance in the depth enhancement process.
  • FIG. 4B is a block diagram illustrating the depth enhancement module 460 according to some embodiments of the present disclosure. The depth enhancement module 460 in FIG. 4B includes a disparity generator 470, a boundary weight module 475, a fusion energy module 480, an optimization module 485, and a result module 490. In other embodiments, the depth enhancement module 460 may include fewer, more, or different components. Also, functions of the components of the depth enhancement module 460 described in conjunction with FIG. 4 may be distributed among other components in a different manner than described in conjunction with FIG. 4 .
  • The disparity generator 470 optionally generates a disparity image (also referred to as “disparity map”) from the depth image. In some embodiments, the disparity generator 470 converts depth pixels in the depth images to disparity pixels of the disparity image. For instance, a depth value D (e.g., a value of a depth pixel in the depth image) is converted to a disparity value (or disparity) Dt. The relationship between D and Dt may be expressed as:

  • Dt=b/D
  • where b is a constant to scale disparities to a given range of values. The disparity generator 470 may determine disparity values for all or some of the depth pixels in the depth image and generate the disparity image based on the disparity values.
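  • A minimal sketch of this depth-to-disparity conversion is given below; the value of the scaling constant b and the handling of invalid (zero) depth values are illustrative assumptions:

```python
import numpy as np

def depth_to_disparity(depth, b=1.0, eps=1e-6):
    """Convert a depth map D into a disparity map Dt = b / D.

    Pixels with (near-)zero depth are treated as invalid and mapped to 0,
    an illustrative convention rather than one taken from this disclosure.
    """
    depth = np.asarray(depth, dtype=np.float64)
    disparity = np.zeros_like(depth)
    valid = depth > eps
    disparity[valid] = b / depth[valid]
    return disparity
```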
  • The boundary weight module 475 determines one or more boundary weights based on the disparity or depth image and the brightness image, e.g., based on a combination of boundaries shown in the disparity map and boundaries shown in the brightness image. Even though some of the description of the boundary weight module 475 refers to using the disparity image to determine boundary weights, in some embodiments, the boundary weight module 475 may determine boundary weights based on the depth image instead.
  • In some embodiments, the boundary weight module 475 determines boundary weights by applying gradients to the disparity or depth image and the brightness image. The gradient of an image (e.g., the depth image, disparity image, or brightness image) identifies boundaries in the image. A boundary weight can avoid boundary blending and may be used to preserve or enhance a boundary in the image. The boundary weight module 475 may determine a fusion gradient for a pixel of the enhanced depth image. The pixel may correspond to a depth pixel in the depth image, a disparity pixel in the disparity image, and/or a brightness pixel in the brightness image. The pixel may have a depth value (i.e., the depth value of the corresponding depth pixel in the depth image), a disparity value (i.e., the disparity value of the corresponding disparity pixel in the disparity image), a brightness value (i.e., the brightness value of the corresponding brightness pixel in the brightness image), and an enhanced depth value (i.e., the enhanced depth value of the pixel in the enhanced depth image). The pixel may be represented by an index i or a pair of (x,y) coordinates, which indicate a position of the pixel in the enhanced depth image, the depth image, the brightness image, or the disparity image.
  • To determine the magnitude of the fusion gradient (“fusion gradient magnitude,” EI), the boundary weight module 475 may determine the magnitude of a depth gradient (“depth gradient magnitude,” ED) of the pixel (x,y) in the disparity or depth image. The depth gradient may indicate a change (e.g., a directional change) in the depth values in the depth image. The depth gradient may be a two-dimensional vector that has a magnitude (i.e., ED) and a direction along which the depth values increase. In an embodiment, the boundary weight module 475 applies a gradient operator to the disparity or depth image. The gradient operator may return the depth gradient magnitude ED of the pixel (x,y). The boundary weight module 475 may also determine the magnitude of a brightness gradient (“brightness gradient magnitude,” EA) of the pixel (x,y) in the brightness image, e.g., by applying the same or a different gradient operator to the brightness image. The boundary weight module 475 further combines the depth gradient magnitude ED and the brightness gradient magnitude EA to generate the fusion gradient magnitude EI of the pixel (x,y). In an embodiment, the boundary weight module 475 determines the fusion gradient magnitude EI as the product of the depth gradient magnitude ED and the brightness gradient magnitude EA:

  • EI = ED·EA
  • In some embodiments, the boundary weight module 475 may use one or more gradient operators, examples of which include the Sobel gradient operator, the Prewitt gradient operator, the central difference gradient operator, the intermediate difference gradient operator, the Roberts gradient operator, or other suitable types of gradient operators. In some embodiments, the boundary weight module 475 determines a weighted difference in values (depth values or brightness values) of the pixel (x,y) and adjacent pixels as the gradient magnitude (ED or EA). An adjacent pixel is a pixel that adjoins the pixel. The coordinates of an adjacent pixel may be (x+1,y), (x,y+1), (x+1,y+1), etc. For instance, the boundary weight module 475 may determine a respective difference between the value of the pixel and each adjacent pixel and aggregate the differences for all the adjacent pixels based on weights. The weight for an adjacent pixel may be determined based on the distance from the pixel to the adjacent pixel. The distance may be determined based on the coordinates of the pixel and the adjacent pixel. The depth gradient magnitude ED may be a weighted sum or weighted average of the differences in the depth values. Similarly, the brightness gradient magnitude EA may be a weighted sum or weighted average of the differences in the brightness values. In other embodiments, the boundary weight module 475 determines the gradient magnitude (ED or EA) as a difference in values of the pixel and a single adjacent pixel: the depth gradient magnitude ED is the difference between the depth value of the pixel and the depth value of the adjacent pixel, and the brightness gradient magnitude EA is the difference between the brightness value of the pixel and the brightness value of the adjacent pixel.
  • In another embodiment, the gradient operator may be denoted as:
  • ∇f = (gx, gy) = (∂f/∂x, ∂f/∂y)
  • where ∂f/∂x is the derivative with respect to x and indicates the gradient in the x direction, and ∂f/∂y is the derivative with respect to y and indicates the gradient in the y direction. The magnitude of the gradient may be defined as √(gx² + gy²), |gx| + |gy|, or other values determined based on gx and gy.
  • Further, the boundary weight module 475 determines a boundary weight WE for the pixel (x,y) based on the fusion gradient magnitude EI:

  • WE(x,y) = e^(−α·EI(x,y))
  • where α is a scaling constant. The boundary weight module 475 may determine respective boundary weights for all or some of the pixels.
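  • One way to realize the boundary weight computation described above is sketched below, assuming NumPy and SciPy are available. It uses the Sobel operator (one of the operators mentioned) on the disparity (or depth) image and on the brightness image, multiplies the two gradient magnitudes to obtain the fusion gradient magnitude EI, and applies the exponential weighting; normalizing each magnitude to [0, 1] and the default value of α are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def boundary_weights(disparity, brightness, alpha=5.0):
    """Compute WE(x,y) = exp(-alpha * EI(x,y)) for every pixel."""
    def gradient_magnitude(img):
        img = np.asarray(img, dtype=np.float64)
        gx = ndimage.sobel(img, axis=1)  # gradient along x
        gy = ndimage.sobel(img, axis=0)  # gradient along y
        mag = np.hypot(gx, gy)
        return mag / (mag.max() + 1e-12)  # normalize to [0, 1] (assumption)

    e_d = gradient_magnitude(disparity)   # depth/disparity gradient magnitude ED
    e_a = gradient_magnitude(brightness)  # brightness gradient magnitude EA
    e_i = e_d * e_a                       # fusion gradient magnitude EI
    return np.exp(-alpha * e_i)           # boundary weight WE
```

  • Because EI is large only where both images show a strong gradient, WE drops toward zero at true object boundaries and stays near one elsewhere, which is what disables the smoothing term at depth transitions.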
  • The fusion energy module 480 determines a fusion energy based on one or more boundary weights determined by the boundary weight module 475. In some embodiments, the fusion energy module 480 determines a fusion energy for the whole enhanced depth image. In other embodiments, the fusion energy module 480 determines a fusion energy for a pixel of the enhanced depth image, e.g., the pixel (x,y). A fusion energy Q may be a combination of a spatial error energy QS and a conditional entropy error energy QH. The spatial error energy QS may be used to reduce spatial image noise, such as local noises. The local noises may be Gaussian or other sensor noises. The spatial error energy QS can enforce similarity between neighbouring pixel depths, when applicable. The similarity assumption makes it possible to smooth the depth map within continuous object surface areas with gradual depth changes. This smoothing effect may be disabled in depth transitions at object boundaries due to the adjacent pixel boundary weights. In some embodiments, the spatial error energy QS can be minimized by reducing a difference between depth values of adjacent pixels. The conditional entropy error energy QH can enhance depth discontinuities and suppress other non-local noise by maximizing the correlation between the estimated depth image and the brightness image. The conditional entropy error energy QH may indicate a measure of uncertainty in the depth value of a pixel given the brightness value of the pixel.
  • In some embodiments, the fusion energy module 480 determines the spatial error energy term QS based on a horizontal error ϵH and a vertical error ϵV, such as:

  • QS(x,y) = Σ(x,y=1 to X,Y) WE(x,y)·ϵH²(x,y) + Σ(x,y=1 to X,Y) WE(x,y)·ϵV²(x,y)
  • For each depth D̂(x,y) at the position (x,y), ϵH and ϵV can be defined as:

  • ϵH(x,y) = 2D̂(x,y) − D̂(x+1,y) − D̂(x−1,y)

  • ϵV(x,y) = 2D̂(x,y) − D̂(x,y+1) − D̂(x,y−1)
  • where D̂ represents all possible solutions for the resulting depth image, i.e., the enhanced depth image.
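  • As a concrete reading of the spatial error energy, the sketch below evaluates ϵH, ϵV, and the boundary-weighted sum over the whole image; the edge-replication padding at image borders is an illustrative assumption:

```python
import numpy as np

def spatial_error_energy(d_hat, w_e):
    """Evaluate QS = sum WE*epsH^2 + sum WE*epsV^2 for a candidate depth map.

    d_hat is the candidate (enhanced) depth map and w_e the per-pixel
    boundary weights; borders are replicated so the second differences
    are defined everywhere.
    """
    d = np.pad(np.asarray(d_hat, dtype=np.float64), 1, mode="edge")
    eps_h = 2.0 * d[1:-1, 1:-1] - d[1:-1, 2:] - d[1:-1, :-2]  # horizontal error
    eps_v = 2.0 * d[1:-1, 1:-1] - d[2:, 1:-1] - d[:-2, 1:-1]  # vertical error
    return np.sum(w_e * eps_h ** 2) + np.sum(w_e * eps_v ** 2)
```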
  • The fusion energy module 480 may determine the conditional entropy error energy QH based on mutual information or conditional entropy. The conditional error energy QH may be defined as:

  • QH(x,y) = WE(x,y)·p(D(x,y), A(x,y))·log p(D(x,y)|A(x,y))
  • where p(D(x,y), A(x,y)) represents the joint probability value for the depth value D(x,y) and the brightness value A(x,y) of the pixel (x,y), and p(D(x,y)|A(x,y)) represents the conditional probability.
  • In some embodiments, the fusion energy module 480 may determine a mutual information energy QM based on mutual information (MI) between the disparity image (or the depth image) and the brightness image. MI can measure the amount of information that one variable (e.g., one of the two images) contains about another variable (e.g., the other image). MI can be used to reduce uncertainty in one variable given the information in the other variable. As a similarity or correlation measure, MI can be used in multimodal imaging. It can be robust to outliers, efficient to calculate, and provide smooth cost functions on which to optimize. Assuming D and A are discrete random variables, MI can be defined as:

  • MI = H(A) + H(D) − H(A,D) = H(D) − H(D/A)
  • where H(A) and H(D) are the entropies of the brightness image and the disparity image (or depth image), respectively, H(A,D) is the joint entropy, and H(D/A) is the conditional entropy of the depth D given the brightness A. Maximizing the MI may be equivalent to minimizing the conditional entropy H(D/A); therefore, QM can alternatively be defined as QH = H(D/A).
  • Given a depth image and a brightness image, the conditional entropy H(D/A) can be calculated, where the conditional entropy is a measure of how much uncertainty remains on the depth D given the brightness A, that is,
  • H(D/A) = −Σ(a,d) pAD(a,d)·log pAD(d/a)
  • where pAD(a,d) is the joint probability of brightness and depth values, and pAD(d/a) is the conditional probability of the depth value given the brightness value. D may be discretised and may have a finite set of values. An alternative is to use disparities instead of depths, since disparities can be bounded and managed in a simpler way. In some embodiments, disparities and brightness values may be discretised into given numbers of bins, such as NDt and NIR respectively. After disparities and brightness values are discretised, the joint probability distribution p(a,d) can be estimated by normalizing their joint histogram. Disparities may be bounded to [0,Dtmax], where the maximum disparity Dtmax corresponds to the minimum depth.
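  • The sketch below estimates the conditional entropy H(D/A) from a disparity image and a brightness image by discretising both into bins and normalizing their joint histogram, as described above; the bin counts are illustrative values:

```python
import numpy as np

def conditional_entropy(disparity, brightness, n_dt_bins=64, n_ir_bins=64):
    """Estimate H(D/A) = -sum pAD(a,d) * log pAD(d/a) from a joint histogram."""
    d = np.asarray(disparity, dtype=np.float64).ravel()
    a = np.asarray(brightness, dtype=np.float64).ravel()
    joint, _, _ = np.histogram2d(a, d, bins=(n_ir_bins, n_dt_bins))
    p_ad = joint / joint.sum()             # joint probability pAD(a, d)
    p_a = p_ad.sum(axis=1, keepdims=True)  # marginal probability of brightness
    with np.errstate(divide="ignore", invalid="ignore"):
        p_d_given_a = np.where(p_a > 0, p_ad / p_a, 0.0)   # pAD(d / a)
        log_term = np.where(p_d_given_a > 0, np.log(p_d_given_a), 0.0)
    return -np.sum(p_ad * log_term)
```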
  • The fusion energy module 480 further aggregates the spatial error energy QS and the conditional error energy QH to determine the fusion energy Q of the pixel. In some embodiments, the fusion energy Q may be represented as:

  • Q(x,y) = c·QS(x,y) + (1−c)·QH(x,y)
  • where c is the regularizing parameter used to control the effect of each energy. MI or conditional entropy might not consider the spatial relationship among pixels. The boundary weight WE, which is related to the fusion gradient EI, can be used to preserve boundaries.
  • The optimization module 485 executes an optimization process, in which the optimization module 485 minimizes the fusion energy to determine enhanced depth values. The optimization process may be performed on a pixel level, i.e., the optimization module 485 minimizes the fusion energy for a pixel to determine the enhanced depth value of the pixel. The optimization processes for different pixels may be independent from each other. The optimization process for the pixel i=(x,y) may be represented as:
  • D*(x,y) = arg minD̂ Q(x,y) = arg minD̂ (c·QS(x,y) + (1−c)·QH(x,y))
  • where D*(x,y) is the enhanced depth value of the pixel. The optimization process may be an iterative process. The depth estimation system may run separate optimization processes for all or a subset of pixels in the depth image.
  • In some embodiments, the optimization module 485 uses a gradient descent approach to minimize the fusion energy of the pixel. The gradient descent approach includes an adaptive learning rate λ, which allows a fast approximation during the first iterations and slows down progressively during the optimization process.
  • D̂(x,y) = D(x,y) − λ·∂Q(x,y)/∂D(x,y)
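  • A minimal sketch of the per-pixel gradient descent is given below. The derivative of the fusion energy is approximated by a central finite difference, and the learning rate decays geometrically; the finite-difference step, the decay schedule, the iteration count, and the callable fusion_energy are illustrative assumptions rather than details from this disclosure:

```python
def minimize_fusion_energy(d0, fusion_energy, lr=0.5, decay=0.9,
                           n_iters=50, h=1e-3):
    """Gradient descent D_hat <- D_hat - lambda * dQ/dD_hat for one pixel.

    d0 is the pixel's initial depth value; fusion_energy(d) is assumed to
    return Q(x,y) evaluated with this pixel's depth set to d while its
    neighbours are held fixed.
    """
    d_hat = float(d0)
    lam = lr
    for _ in range(n_iters):
        grad = (fusion_energy(d_hat + h) - fusion_energy(d_hat - h)) / (2.0 * h)
        d_hat -= lam * grad   # gradient descent step
        lam *= decay          # adaptive (decaying) learning rate
    return d_hat
```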
  • The result module 490 generates the enhanced depth image based on the enhanced depth values determined by the optimization module 485. For instance, the result module 490 may replace the depth value of a pixel with the enhanced depth value. The enhanced depth image represents better depth estimation, especially for one or several boundaries of the object 140.
  • Example Depth Estimation Enhancement
  • FIG. 5A illustrates an example depth image 510 according to some embodiments of the present disclosure. The depth image 510 may be generated by the depth module 440 described above in conjunction with FIG. 4 . FIG. 5B illustrates an example brightness image 520 corresponding to the depth image 510 in FIG. 5A according to some embodiments of the present disclosure. The brightness image 520 may be generated by the brightness module 450 described above in conjunction with FIG. 4 . FIG. 5C illustrates an example depth enhanced image 530 generated from the depth image 510 in FIG. 5A and the brightness image 520 in FIG. 5B according to some embodiments of the present disclosure.
  • The three images 510, 520, and 530 capture an object, an example of which is the object 140 in FIG. 1 . The object has a flat plane 540 and an edge 550. A part of the edge is enclosed in the dashed oval shapes in FIGS. 5A-5C. As shown in FIGS. 5A and 5B, the brightness image 520 shows a cleaner edge than the depth image 510. Thus, the brightness image 520 can be used to enhance depth estimation of the object, particularly the edge 550 of the object. The brightness image 520 is fused with the depth image 510 through an energy model to generate the depth enhanced image 530. The depth enhanced image 530 shows better depth estimation than the depth image 510. As shown in FIGS. 5A and 5C, the edge in the depth enhanced image 530 is cleaner than in the depth image 510. The depth enhanced image 530 may be generated by using an energy model, for which the depth image 510 may be used as an input image and the brightness image 520 may be used as a guidance image. The depth enhanced image 530 may be generated by the depth enhancement process described above in conjunction with the depth enhancement module 460. For instance, the pixels representing the edge 550 are identified and new depth values for the pixels are determined based on the values of the pixels in the brightness image 520. The new depth values of the pixels are used to generate the depth enhanced image 530.
  • FIGS. 5A and 5C also show that the flat plane 540 is smoother in the depth enhanced image 530. In some embodiments of FIGS. 5A-5C, pixels representing the flat plane 540 are also identified. For each of these pixels, a box is defined. The box may be centered at the pixel and include other pixels that surround the pixel. A new depth value of the pixel is determined based on depth values of the other pixels in the box. For instance, the new depth value of the pixel may be an average of the depth values of the other pixels. The new depth values of the pixels are used to generate the depth enhanced image 530. Through such a depth enhancement process, the depth enhanced image 530 represents more accurate depth estimation of the object, such as more accurate depth estimation of the flat plane 540, the edge 550, or both.
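  • A simple way to realize the box-averaging of flat-plane pixels described above is sketched here, assuming NumPy and SciPy are available; the box size, the boolean mask convention, and the use of a uniform filter (whose box mean includes the center pixel, a minor simplification) are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def smooth_flat_region(depth, flat_mask, box_size=5):
    """Replace each flat-plane pixel with the mean depth of its surrounding box.

    flat_mask is a boolean array marking pixels identified as part of the
    flat plane; all other pixels keep their original depth values.
    """
    depth = np.asarray(depth, dtype=np.float64)
    box_mean = ndimage.uniform_filter(depth, size=box_size, mode="nearest")
    smoothed = depth.copy()
    smoothed[flat_mask] = box_mean[flat_mask]
    return smoothed
```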
  • Example Applications Incorporating Depth Estimation System
  • FIG. 6 illustrates an example system 600 incorporating a depth estimation system according to some embodiments of the present disclosure. An embodiment of the depth estimation system is the depth estimation system 100 in FIG. 1 . The system 600 includes an imaging device 610, a processor 620, a memory 630, an input device 640, an output device 650, and a battery/power circuitry 660. In other embodiments, the system 600 may include fewer, more, or different components. For instance, the system 600 may include multiple processors, memories, display devices, input devices, or output devices.
  • The imaging device 610 captures depth images and brightness images. The imaging device 610 may include an illuminator assembly, such as the illuminator assembly 110, for projecting light into an environment surrounding the system 600. The imaging device 610 can project modulated light, such as pulsed modulated light or continuous waves of modulated light. The imaging device 610 also includes a camera assembly, such as the camera assembly 120, that captures light reflected by one or more objects in the environment and generates image data of the one or more objects.
  • The processor 620 can process electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The processor 620 may perform some or all functions of some or all components of the controller 130, such as depth estimation, enhancing depth estimation with brightness images, and so on. The processor 620 may include one or more digital signal processors (DSPs), application-specific integrated circuits (ASICs), CPUs, GPUs, cryptoprocessors (specialized processors that execute cryptographic algorithms within hardware), server processors, or any other suitable processing devices.
  • In some embodiments, the processor 620 may also use depth information (e.g., enhanced depth images) to generate content (e.g., images, audio, etc.) for display to a user of the system by one or more display devices, such as the output device 650. The content may be used as VR, AR, or MR content. The processor 620 may also generate instructions for other components of the system 600 or another system based on enhanced depth images. For instance, the processor 620 may determine a navigation instruction for a movable device, such as a robot, a vehicle, or other types of movable devices. The navigation instruction may include navigation parameters (e.g., navigation routes, speed, orientation, and so on).
  • The memory 630 may include one or more memory devices such as volatile memory (e.g., DRAM), nonvolatile memory (e.g., read-only memory (ROM)), flash memory, solid state memory, and/or a hard drive. In some embodiments, the memory 630 may include memory that shares a die with the processor 620. The memory 630 may store processor-executable instructions for controlling operation of the depth estimation system 100, and/or data captured by the depth estimation system 100. In some embodiments, the memory 630 includes one or more non-transitory computer-readable media storing instructions executable to perform depth estimation enhancement processes, e.g., the method 1000 described below in conjunction with FIG. 10 , or the operations performed by the controller 130 (or some of the components of the controller 130) described above in conjunction with FIG. 1 and FIG. 4 . The instructions stored in the one or more non-transitory computer-readable media may be executed by the processor 620.
  • The input device 640 may include an audio input device. The audio input device may include any device that generates a signal representative of a sound, such as microphones, microphone arrays, digital instruments (e.g., instruments having a musical instrument digital interface (MIDI) output), and so on. The input device 640 may also include one or more other types of input devices, such as accelerometer, gyroscope, compass, image capture device, keyboard, cursor control device (such as a mouse), stylus, touchpad, bar code reader, Quick Response (QR) code reader, sensor, radio-frequency identification (RFID) reader, and so on.
  • The output device 650 may include one or more display devices, such as one or more visual indicators. Example visual indicators include heads-up display, computer monitor, projector, touchscreen display, liquid crystal display (LCD), light-emitting diode display, or flat panel display, and so on. The output device 650 may also include an audio output device. The audio output device may include any device that generates an audible indicator, such as speakers, headsets, or earbuds, and so on. The output device 650 may also include one or more other output devices, such as audio codec, video codec, printer, wired or wireless transmitter for providing information to other devices, and so on.
  • The battery/power circuitry 660 may include one or more energy storage devices (e.g., batteries or capacitors) and/or circuitry for coupling components of the system 600 to an energy source separate from the system 600 (e.g., AC line power).
  • FIG. 7 illustrates a mobile device 700 incorporating a depth estimation system according to some embodiments of the present disclosure. An example of the depth estimation system is the depth estimation system 100 in FIG. 1 . The mobile device 700 may be a mobile phone. As shown in FIG. 7 , the mobile device 700 includes an imaging assembly 702. The imaging assembly 702 may include the illuminator assembly 110 and the camera assembly 120 in FIG. 1 . The imaging assembly 702 may illuminate the environment surrounding the mobile device 700 with modulated light (e.g., modulated IR) and capture images of one or more objects in the environment. Even though not shown in FIG. 7 , the mobile device 700 may include one or more processors and one or more memories that can perform some or all of the functions of the controller 130. With these components, the mobile device 700 may determine depth information of one or more objects in an environment surrounding the mobile device 700. The depth information can be used, by the mobile device 700, another device, or a user of the mobile device, for various purposes, such as VR, AR, or MR applications, navigation applications, and so on. For instance, the mobile device 700 may generate and present images (two-dimensional or three-dimensional images) based on the depth information of the environment, and the images may represent virtual objects that do not exist in the real-world environment. The images may augment the real-world objects in the environment so that a user of the mobile device 700 may have an interactive experience of the real-world environment where the real-world objects that reside in the real world are enhanced by computer-generated virtual objects.
  • FIG. 8 illustrates an entertainment system 800 incorporating a depth estimation system according to some embodiments of the present disclosure. In the example of FIG. 8 , a user 808 may interact with the entertainment system via a controller 810, for example to play a video game. The entertainment system 800 includes a console 802 and display 804. The console 802 may be a video gaming console configured to generate images of a video game on the display 804. In other embodiments, the entertainment system 800 may include more, fewer, or different components.
  • The console 802 includes an imaging assembly 806. The imaging assembly 806 may include the illuminator assembly 110 and the camera assembly 120 in FIG. 1 . The imaging assembly 806 may illuminate the environment surrounding the entertainment system 800 with modulated light (e.g., modulated IR) and capture modulated light reflected by one or more objects in the environment to generate images of the objects, such as the user 808, controller 810, or other objects. Even though not shown in FIG. 8 , the console 802 may include one or more processors and one or more memories that can perform some or all of the functions of the controller 130. The console 802 may determine depth information of one or more objects in the environment. The depth information may be used to present images to the user on the display 804 or for control of some other aspect of the entertainment system 800. For example, the user 808 may control the entertainment system 800 with hand gestures, and the gestures may be determined at least in part through the depth information. The console 802 may generate or update display content (e.g., images, audio, etc.) based on the depth information and may also instruct the display 804 to present the display content to the user 808.
  • FIG. 9 illustrates an example robot 902 incorporating a depth estimation system according to some embodiments of the present disclosure. The robot 902 includes an imaging assembly 904 that may include the illuminator assembly 110 and the camera assembly 120 in FIG. 1 . The imaging assembly 904 may illuminate the environment surrounding the robot 902 with modulated light (e.g., modulated IR) and capture images of one or more objects in the environment. Even though not shown in FIG. 9 , the robot 902 may include a computing device that can perform some or all of the functions of the controller 130. The computing device may include one or more processors and one or more memories. The computing device may determine depth information of one or more objects in the environment. The robot 902 may be mobile and the computing device may use the depth information to assist in navigation and/or motor control of the robot 902. For instance, the computing device may generate a navigation instruction based on the depth information. The navigation instruction may include a navigation route of the robot 902. The robot 902 may navigate in the environment in accordance with the navigation instruction.
  • Examples of uses of the technology described herein beyond those shown in FIGS. 7-9 are also possible. For example, the depth estimation system described herein may be used in other applications, such as autonomous vehicles, security cameras, and so on.
  • Example Method of Using Energy Model to Enhance Depth Estimation
  • FIG. 10 is a flowchart showing a method 1000 of using an energy model to enhance depth estimation with a brightness image, according to some embodiments of the present disclosure. The method 1000 may be performed by the controller 130. Although the method 1000 is described with reference to the flowchart illustrated in FIG. 10 , many other methods of using an energy model to enhance depth estimation with a brightness image may alternatively be used. For example, the order of execution of the steps in FIG. 10 may be changed. As another example, some of the steps may be changed, eliminated, or combined.
  • The controller 130 converts, in 1010, a depth image into a disparity image. The depth image includes a plurality of depth pixels. The disparity image includes a plurality of disparity pixels. A disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels. In some embodiments, the disparity value is proportional to a reciprocal of the depth value.
  • The controller 130 determines, in 1020, a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image. The gradient magnitude of the target depth pixel in the depth image may be a difference (e.g., a weighted difference) between the target depth pixel and a depth pixel that is adjacent to the target depth pixel in the depth image (“adjacent depth pixel”). The difference may be a difference between the depth value of the target depth pixel and the depth value of the adjacent depth pixel. Similarly, the gradient magnitude of the brightness pixel in a brightness image may be a difference (e.g., a weighted difference) between the brightness pixel and another brightness pixel that is adjacent to the brightness pixel in the brightness image (“adjacent brightness pixel”). The difference may be a difference between the brightness value of the brightness pixel and the brightness value of the adjacent brightness pixel.
  • In some embodiments, the brightness image and the depth image capture a same object. The brightness image comprises a plurality of brightness pixels that includes the brightness pixel. Each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels. For instance, the target depth pixel represents a same locus of an object as the brightness pixel. In some embodiments, the depth image and the brightness image are generated based on image data from a same image sensor.
  • In some embodiments, the controller 130 may instruct an illuminator assembly to project modulated light into a local area including the object. The controller 130 may also instruct a camera assembly to capture reflected light from at least a portion of the object. The controller 130 may generate the depth image based on a phase shift between the reflected light and the modulated light projected into the local area. The controller 130 may generate the brightness image based on brightness of the reflected light. The reflected light may be first reflected light, and the controller 130 can instruct the camera assembly to capture second reflected light from at least the portion of the object and generate the brightness image based on brightness of the second reflected light. The second reflected light has a different wavelength from the first reflected light.
  • The controller 130 determines, in 1030, an energy for the target depth pixel based on the boundary weight. The controller 130 determines, in 1040, a new depth value of the target depth pixel by optimizing the energy. In some embodiments, the controller 130 determines a spatial error energy for the target depth pixel based on the boundary weight. Optimizing the energy may comprise optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image. The controller 130 may also determine a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image. The conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
  • The controller 130 updates, in 1040, the depth image by assigning the new depth value to the target depth pixel. The controller 130 may generate an enhanced depth image based on the new depth value. The enhanced depth image represents better depth estimation than the depth image, as the depth value of the target depth pixel in the enhanced depth image is more accurate than the depth value of the target depth pixel in the original depth image.
  • Variations and Implementations
  • While embodiments of the present disclosure were described above with references to exemplary implementations as shown in FIGS. 1-10 , a person skilled in the art will realize that the various teachings described above are applicable to a large variety of other implementations.
  • In certain contexts, the features discussed herein can be applicable to automotive systems, safety-critical industrial applications, medical systems, scientific instrumentation, wireless and wired communications, radio, radar, industrial process control, audio and video equipment, current sensing, instrumentation (which can be highly precise), and other digital-processing-based systems.
  • In the discussions of the embodiments above, components of a system, such as filters, converters, mixers, amplifiers, digital logic circuitries, and/or other components can readily be replaced, substituted, or otherwise modified in order to accommodate particular circuitry needs. Moreover, it should be noted that the use of complementary electronic devices, hardware, software, etc., offers an equally viable option for implementing the teachings of the present disclosure related to depth estimation in various systems.
  • Parts of various systems for implementing depth estimation enhancement as proposed herein can include electronic circuitry to perform the functions described herein. In some cases, one or more parts of the system can be provided by a processor specially configured for carrying out the functions described herein. For instance, the processor may include one or more application-specific components, or may include programmable logic gates which are configured to carry out the functions described herein. The circuitry can operate in analog domain, digital domain, or in a mixed-signal domain. In some instances, the processor may be configured to carry out the functions described herein by executing one or more instructions stored on a non-transitory computer-readable storage medium.
  • In one example embodiment, any number of electrical circuits of the present figures may be implemented on a board of an associated electronic device. The board can be a general circuit board that can hold various components of the internal electronic system of the electronic device and, further, provide connectors for other peripherals. More specifically, the board can provide the electrical connections by which the other components of the system can communicate electrically. Any suitable processors (inclusive of DSPs, microprocessors, supporting chipsets, etc.), computer-readable non-transitory memory elements, etc. can be suitably coupled to the board based on particular configuration needs, processing demands, computer designs, etc. Other components such as external storage, additional sensors, controllers for audio/video display, and peripheral devices may be attached to the board as plug-in cards, via cables, or integrated into the board itself. In various embodiments, the functionalities described herein may be implemented in emulation form as software or firmware running within one or more configurable (e.g., programmable) elements arranged in a structure that supports these functions. The software or firmware providing the emulation may be provided on non-transitory computer-readable storage medium comprising instructions to allow a processor to carry out those functionalities.
  • In another example embodiment, the electrical circuits of the present figures may be implemented as stand-alone modules (e.g., a device with associated components and circuitry configured to perform a specific application or function) or implemented as plug-in modules into application-specific hardware of electronic devices. Note that particular embodiments of the present disclosure may be readily included in a system on chip (SOC) package, either in part, or in whole. An SOC represents an integrated circuit that integrates components of a computer or other electronic system into a single chip. It may contain digital, analog, mixed-signal, and often radio-frequency (RF) functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of separate ICs located within a single electronic package and configured to interact closely with each other through the electronic package.
  • It is also imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., the number of components of the apparatuses shown in FIGS. 1-2, 4-5, 7, and 9-10) have been offered for purposes of example and teaching only. Such information may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. In the foregoing description, example embodiments have been described with reference to particular processor and/or component arrangements. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
  • Note that with the numerous examples provided herein, interaction may be described in terms of two, three, four, or more electrical components. However, this has been done for purposes of clarity and example only. It should be appreciated that the system can be consolidated in any suitable manner. Along similar design alternatives, any of the illustrated components, modules, and elements of the figures may be combined in various possible configurations, all of which are clearly within the broad scope of this Specification. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of electrical elements. It should be appreciated that the electrical circuits of the figures and their teachings are readily scalable and can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad teachings of the electrical circuits as potentially applied to a myriad of other architectures.
  • Note that in this Specification, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) included in “one embodiment”, “example embodiment”, “an embodiment”, “another embodiment”, “some embodiments”, “various embodiments”, “other embodiments”, “alternative embodiment”, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of [at least one of A, B, or C] means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).
  • Various aspects of the illustrative embodiments are described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. For example, the term “connected” means a direct electrical connection between the things that are connected, without any intermediary devices/components, while the term “coupled” means either a direct electrical connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices/components. In another example, the term “circuit” means one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. Also, as used herein, the terms “substantially,” “approximately,” “about,” etc., may be used to generally refer to being within +/−20% of a target value, e.g., within +/−10% of a target value, based on the context of a particular value as described herein or as known in the art.
  • Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the examples and appended claims. Note that all optional features of the apparatus described above may also be implemented with respect to the method or process described herein and specifics in the examples may be used anywhere in one or more embodiments.
  • Interpretation of Terms
  • All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms. Unless the context clearly requires otherwise, throughout the description and the claims:
  • “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to”.
  • “connected,” “coupled,” or any variant thereof, means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof.
  • “herein,” “above,” “below,” and words of similar import, when used to describe this specification, shall refer to this specification as a whole and not to any particular portions of this specification.
  • “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
  • the singular forms “a”, “an” and “the” also include the meaning of any appropriate plural forms.
  • Words that indicate directions such as “vertical”, “transverse”, “horizontal”, “upward”, “downward”, “forward”, “backward”, “inward”, “outward”, “left”, “right”, “front”, “back”, “top”, “bottom”, “below”, “above”, “under”, and the like, used in this description and any accompanying claims (where present) depend on the specific orientation of the apparatus described and illustrated. The subject matter described herein may assume various alternative orientations. Accordingly, these directional terms are not strictly defined and should not be interpreted narrowly.
  • The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
  • The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined.
  • Elements other than those specifically identified by the “and/or” clause may optionally be present, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” may refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
  • Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) may refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
  • As used herein, the term “between” is to be inclusive unless indicated otherwise. For example, “between A and B” includes A and B unless indicated otherwise.
  • Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
  • In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively.
  • Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims.
  • In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke 35 U.S.C. § 112(f) as it exists on the date of the filing hereof unless the words “means for” or “steps for” are specifically used in the particular claims; and (b) does not intend, by any statement in the disclosure, to limit this disclosure in any way that is not otherwise reflected in the appended claims.
  • The present invention should therefore not be considered limited to the particular embodiments described above. Various modifications, equivalent processes, as well as numerous structures to which the present invention may be applicable, will be readily apparent to those skilled in the art to which the present invention is directed upon review of the present disclosure.
  • SELECT EXAMPLES
  • Example 1 provides a method, including: converting a depth image including a plurality of depth pixels into a disparity image including a plurality of disparity pixels, where a disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels; determining a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image; determining an energy for the target depth pixel based on the boundary weight; determining a new depth value of the target depth pixel by optimizing the energy; and updating the depth image by assigning the new depth value to the target depth pixel. (A non-limiting code sketch of this method is provided after the examples below.)
  • Example 2 provides the method of example 1, where: the brightness image and the depth image capture a same object, the brightness image includes a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels.
  • Example 3 provides the method of example 1, where the target depth pixel represents a same locus of an object as the brightness pixel.
  • Example 4 provides the method of example 1, where determining the energy for the target depth pixel based on the boundary weight includes: determining a spatial error energy for the target depth pixel based on the boundary weight, where optimizing the energy includes optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image.
  • Example 5 provides the method of example 4, where determining the energy for the target depth pixel based on the boundary weight further includes: determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, where the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
  • Example 6 provides the method of example 1, where the disparity value is proportional to a reciprocal of the depth value.
  • Example 7 provides the method of example 1, where the depth image and the brightness image are generated based on image data from a same image sensor.
  • Example 8 provides the method of example 1, further including: instructing an illuminator assembly to project modulated light into a local area including an object; instructing a camera assembly to capture reflected light from at least a portion of the object; and generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area.
  • Example 9 provides the method of example 8, further including: generating the brightness image based on brightness of the reflected light.
  • Example 10 provides the method of example 8, where the reflected light is first reflected light, and the method further includes: instructing the camera assembly to capture second reflected light from at least the portion of the object; and generating the brightness image based on brightness of the second reflected light, where the second reflected light has a different wavelength from the first reflected light.
  • Example 11 provides a system, including: an illuminator assembly configured to project modulated light into a local area including an object; a camera assembly configured to capture reflected light from at least a portion of the object; and a controller configured to: generate a depth image from the reflected light, the depth image including a plurality of depth pixels and capturing at least a portion of the object, generate a brightness image including a plurality of brightness pixels and capturing at least the portion of the object, each brightness pixel corresponding to a different depth pixel, for each respective depth pixel of the plurality of depth pixels, determine a respective energy based on a gradient magnitude of the respective depth pixel in the depth image and a gradient magnitude of the corresponding brightness pixel in the brightness image, and generate an enhanced depth image by fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels.
  • Example 12 provides the system of example 11, where fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels includes: for each respective depth pixel of the plurality of depth pixels, optimizing the respective energy.
  • Example 13 provides the system of example 12, where the controller is configured to determine the respective energy based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image by: determining a spatial error energy for the respective depth pixel based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image, where optimizing the respective energy includes optimizing the spatial error energy by reducing a difference between a depth value of the respective depth pixel and a depth value of another depth pixel that is adjacent to the respective depth pixel in the depth image.
  • Example 14 provides the system of example 11, where the controller is configured to determine the respective energy based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image further by: determining a conditional error energy for the respective depth pixel based on a depth value of the respective depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, where the conditional error energy indicates a measure of uncertainty in the depth value of the respective depth pixel given the brightness value of the brightness pixel.
  • Example 15 provides the system of example 11, where the controller is configured to generate the depth image and the brightness image based on the reflected light by: generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area; and generating the brightness image based on brightness of the reflected light.
  • Example 16 provides one or more non-transitory computer-readable media storing instructions executable to perform operations, the operations including: converting a depth image including a plurality of depth pixels into a disparity image including a plurality of disparity pixels, where a disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels; determining a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image; determining an energy for the target depth pixel based on the boundary weight; determining a new depth value of the target depth pixel by optimizing the energy; and updating the depth image by assigning the new depth value to the target depth pixel.
  • Example 17 provides the one or more non-transitory computer-readable media of example 16, where: the brightness image and the depth image capture a same object, the brightness image includes a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels.
  • Example 18 provides the one or more non-transitory computer-readable media of example 16, where determining the energy for the target depth pixel based on the boundary weight includes: determining a spatial error energy for the target depth pixel based on the boundary weight, where optimizing the energy includes optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image.
  • Example 19 provides the one or more non-transitory computer-readable media of example 18, where determining the energy for the target depth pixel based on the boundary weight further includes: determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, where the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
  • Example 20 provides the one or more non-transitory computer-readable media of example 16, where the depth image and the brightness image are generated based on image data from a same image sensor.
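  • For teaching purposes only, the following Python sketch illustrates one possible reading of examples 1 and 4 through 6: the depth image can be converted to a disparity-style representation, a boundary weight is computed from the depth and brightness gradient magnitudes, and each depth value is updated by minimizing a quadratic energy that combines a conditional (data) error term with a boundary-weighted spatial error term. The function names, the Gaussian form of the boundary weight, the scalar confidence used in the conditional error term, and the iterated closed-form update are assumptions made for illustration; they are not the specific formulation of the disclosure or of the appended claims.

```python
import numpy as np

def depth_to_disparity(depth, k=1.0, eps=1e-6):
    """Example 6: a disparity value proportional to the reciprocal of the
    depth value; `k` is a hypothetical proportionality constant."""
    return k / np.maximum(depth, eps)

def gradient_magnitude(image):
    """Per-pixel gradient magnitude computed with central differences."""
    gy, gx = np.gradient(image.astype(np.float64))
    return np.hypot(gx, gy)

def boundary_weight(depth, brightness, sigma=0.1):
    """Boundary weight from the depth and brightness gradient magnitudes
    (examples 1 and 4); the Gaussian form is an illustrative choice."""
    g_depth = gradient_magnitude(depth)
    g_brightness = gradient_magnitude(brightness)
    # The weight is small where both images show a strong edge, so the spatial
    # term does not smooth depth values across object boundaries.
    return np.exp(-(g_depth * g_brightness) / (2.0 * sigma ** 2))

def enhance_depth(depth, brightness, lam=0.5, confidence=1.0, n_iter=50):
    """For every depth pixel p, iteratively minimize the quadratic energy

        E(d_p) = c_p * (d_p - observed_p)^2                           # conditional error
               + lam * w_p * sum_{q in 4-neighborhood} (d_p - d_q)^2  # spatial error

    using the per-pixel closed-form minimizer of E."""
    w = boundary_weight(depth, brightness)
    observed = depth.astype(np.float64)
    d = observed.copy()
    c = confidence * np.ones_like(d)
    for _ in range(n_iter):
        padded = np.pad(d, 1, mode="edge")                     # replicate borders
        neighbor_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                        padded[1:-1, :-2] + padded[1:-1, 2:])  # 4-connected neighbors
        # Setting dE/dd_p = 0 gives the closed-form update below.
        d = (c * observed + lam * w * neighbor_sum) / (c + 4.0 * lam * w)
    return d  # enhanced depth image
```

  • In this sketch, a small boundary weight where the depth and brightness gradients are both large keeps the spatial error term from smoothing across object boundaries, which reflects the role of the boundary weight in examples 1 and 4.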

Claims (20)

1. A method, comprising:
converting a depth image comprising a plurality of depth pixels into a disparity image comprising a plurality of disparity pixels, wherein a disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels;
determining a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image;
determining an energy for the target depth pixel based on the boundary weight;
determining a new depth value of the target depth pixel by optimizing the energy; and
updating the depth image by assigning the new depth value to the target depth pixel.
2. The method of claim 1, wherein:
the brightness image and the depth image capture a same object,
the brightness image comprises a plurality of brightness pixels that includes the brightness pixel, and
each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels.
3. The method of claim 1, wherein the target depth pixel represents a same locus of an object as the brightness pixel.
4. The method of claim 1, wherein determining the energy for the target depth pixel based on the boundary weight comprises:
determining a spatial error energy for the target depth pixel based on the boundary weight,
wherein optimizing the energy comprises optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image.
5. The method of claim 4, wherein determining the energy for the target depth pixel based on the boundary weight further comprises:
determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image,
wherein the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
6. The method of claim 1, wherein the disparity value is proportional to a reciprocal of the depth value.
7. The method of claim 1, wherein the depth image and the brightness image are generated based on image data from a same image sensor.
8. The method of claim 1, further comprising:
instructing an illuminator assembly to project modulated light into a local area including an object;
instructing a camera assembly to capture reflected light from at least a portion of the object; and
generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area.
9. The method of claim 8, further comprising:
generating the brightness image based on brightness of the reflected light.
10. The method of claim 8, wherein the reflected light is first reflected light, and the method further comprises:
instructing the camera assembly to capture second reflected light from at least the portion of the object; and
generating the brightness image based on brightness of the second reflected light,
wherein the second reflected light has a different wavelength from the first reflected light.
11. A system, comprising:
an illuminator assembly configured to project modulated light into a local area including an object;
a camera assembly configured to capture reflected light from at least a portion of the object; and
a controller configured to:
generate a depth image from the reflected light, the depth image comprising a plurality of depth pixels and capturing at least a portion of the object,
generate a brightness image comprising a plurality of brightness pixels and capturing at least the portion of the object, each brightness pixel corresponding to a different depth pixel,
for each respective depth pixel of the plurality of depth pixels, determine a respective energy based on a gradient magnitude of the respective depth pixel in the depth image and a gradient magnitude of the corresponding brightness pixel in the brightness image, and
generate an enhanced depth image by fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels.
12. The system of claim 11, wherein fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels comprises:
for each respective depth pixel of the plurality of depth pixels, optimizing the respective energy.
13. The system of claim 12, wherein the controller is configured to determine the respective energy based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image by:
determining a spatial error energy for the respective depth pixel based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image,
wherein optimizing the respective energy comprises optimizing the spatial error energy by reducing a difference between a depth value of the respective depth pixel and a depth value of another depth pixel that is adjacent to the respective depth pixel in the depth image.
14. The system of claim 11, wherein the controller is configured to determine the respective energy based on the gradient magnitude of the respective depth pixel in the depth image and the gradient magnitude of the brightness pixel in the brightness image further by:
determining a conditional error energy for the respective depth pixel based on a depth value of the respective depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image,
wherein the conditional error energy indicates a measure of uncertainty in the depth value of the respective depth pixel given the brightness value of the brightness pixel.
15. The system of claim 11, wherein the controller is configured to generate the depth image and the brightness image based on the reflected light by:
generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area; and
generating the brightness image based on brightness of the reflected light.
16. One or more non-transitory computer-readable media storing instructions executable to perform operations, the operations comprising:
converting a depth image comprising a plurality of depth pixels into a disparity image comprising a plurality of disparity pixels, wherein a disparity value of each of the plurality of disparity pixels is determined based on a depth value of a different depth pixel of the plurality of depth pixels;
determining a boundary weight for a target depth pixel of the plurality of depth pixels based on a gradient magnitude of the target depth pixel in the depth image and a gradient magnitude of a brightness pixel in a brightness image;
determining an energy for the target depth pixel based on the boundary weight;
determining a new depth value of the target depth pixel by optimizing the energy; and
updating the depth image by assigning the new depth value to the target depth pixel.
17. The one or more non-transitory computer-readable media of claim 16, wherein:
the brightness image and the depth image capture a same object,
the brightness image comprises a plurality of brightness pixels that includes the brightness pixel, and
each respective brightness pixel of the plurality of brightness pixels corresponds to a respective depth pixel of the plurality of depth pixels.
18. The one or more non-transitory computer-readable media of claim 16, wherein determining the energy for the target depth pixel based on the boundary weight comprises:
determining a spatial error energy for the target depth pixel based on the boundary weight,
wherein optimizing the energy comprises optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image.
19. The one or more non-transitory computer-readable media of claim 18, wherein determining the energy for the target depth pixel based on the boundary weight further comprises:
determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image,
wherein the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel.
20. The one or more non-transitory computer-readable media of claim 16, wherein the depth image and the brightness image are generated based on image data from a same image sensor.
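For teaching purposes only, the following Python sketch illustrates one conventional way of obtaining the depth image and the brightness image recited in claims 8, 9, and 15 from a continuous-wave time-of-flight capture: depth follows from the phase shift between the projected modulated light and the reflected light, and brightness follows from the amplitude of the reflected light. The four-sample demodulation, the sample ordering, and the function name are assumptions made for illustration and do not restrict the claims.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_and_brightness_from_phase(a0, a90, a180, a270, f_mod):
    """Recover depth and brightness from four phase-stepped correlation samples
    (a0, a90, a180, a270) of a continuous-wave time-of-flight camera.

    `f_mod` is the modulation frequency in Hz; sign and ordering conventions
    vary between sensors and are assumed here for illustration."""
    i = a0.astype(np.float64) - a180               # in-phase component
    q = a90.astype(np.float64) - a270              # quadrature component
    phase = np.mod(np.arctan2(q, i), 2.0 * np.pi)  # phase shift in [0, 2*pi)
    depth = SPEED_OF_LIGHT * phase / (4.0 * np.pi * f_mod)  # distance from the phase shift
    brightness = 0.5 * np.hypot(i, q)              # amplitude of the reflected light
    return depth, brightness
```

The unambiguous range of such a measurement is SPEED_OF_LIGHT / (2 * f_mod), so phase wrapping may need to be resolved separately; that aspect is outside this sketch.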
US17/813,300 2022-02-16 2022-07-18 Using energy model to enhance depth estimation with brightness image Pending US20230260143A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/813,300 US20230260143A1 (en) 2022-02-16 2022-07-18 Using energy model to enhance depth estimation with brightness image
PCT/EP2023/053961 WO2023156561A1 (en) 2022-02-16 2023-02-16 Using energy model to enhance depth estimation with brightness image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263310859P 2022-02-16 2022-02-16
US17/813,300 US20230260143A1 (en) 2022-02-16 2022-07-18 Using energy model to enhance depth estimation with brightness image

Publications (1)

Publication Number Publication Date
US20230260143A1 true US20230260143A1 (en) 2023-08-17

Family

ID=87558765

Family Applications (3)

Application Number Title Priority Date Filing Date
US17/813,300 Pending US20230260143A1 (en) 2022-02-16 2022-07-18 Using energy model to enhance depth estimation with brightness image
US17/890,982 Pending US20230260094A1 (en) 2022-02-16 2022-08-18 Using guided filter to enhance depth estimation with brightness image
US18/067,569 Pending US20230258810A1 (en) 2022-02-16 2022-12-16 Enhancing depth estimation with brightness image

Family Applications After (2)

Application Number Title Priority Date Filing Date
US17/890,982 Pending US20230260094A1 (en) 2022-02-16 2022-08-18 Using guided filter to enhance depth estimation with brightness image
US18/067,569 Pending US20230258810A1 (en) 2022-02-16 2022-12-16 Enhancing depth estimation with brightness image

Country Status (1)

Country Link
US (3) US20230260143A1 (en)

Also Published As

Publication number Publication date
US20230260094A1 (en) 2023-08-17
US20230258810A1 (en) 2023-08-17

Similar Documents

Publication Publication Date Title
CN110998223B (en) Detector for determining the position of at least one object
EP3092509B1 (en) Fast general multipath correction in time-of-flight imaging
US20210383560A1 (en) Depth measurement assembly with a structured light source and a time of flight camera
CN107765260B (en) Method, apparatus, and computer-readable recording medium for acquiring distance information
US8611610B2 (en) Method and apparatus for calculating a distance between an optical apparatus and an object
US10452947B1 (en) Object recognition using depth and multi-spectral camera
US20170329012A1 (en) Optoelectronic modules for distance measurements and/or multi-dimensional imaging
KR20120071970A (en) 3d image acquisition apparatus and method of extractig depth information in the 3d image acquisition apparatus
US10295657B2 (en) Time of flight-based systems operable for ambient light and distance or proximity measurements
US20190355136A1 (en) Reduced power operation of time-of-flight camera
US10996335B2 (en) Phase wrapping determination for time-of-flight camera
US10663593B2 (en) Projector apparatus with distance image acquisition device and projection method
EP3170025B1 (en) Wide field-of-view depth imaging
CN111736173A (en) Depth measuring device and method based on TOF and electronic equipment
KR20170076477A (en) Method and device for acquiring distance information
US20230260143A1 (en) Using energy model to enhance depth estimation with brightness image
JP2020052001A (en) Depth acquisition device, depth acquisition method, and program
CN116520348A (en) Depth imaging system, method, equipment and medium based on modulated light field
WO2023156561A1 (en) Using energy model to enhance depth estimation with brightness image
US11920919B2 (en) Projecting a structured light pattern from an apparatus having an OLED display screen
WO2023156568A1 (en) Using guided filter to enhance depth estimation with brightness image
WO2023156566A1 (en) Enhancing depth estimation with brightness image
Schönlieb et al. Stray-light mitigation for under-display time-of-flight imagers
JP7259660B2 (en) Image registration device, image generation system and image registration program
CN114693757A (en) Spatial neural network deep completion method, system, device and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: ANALOG DEVICES INTERNATIONAL UNLIMITED COMPANY, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ACHAIBOU, AMINA;BANON, FILIBERTO PLA;MARAVILLA, JAVIER CALPE;SIGNING DATES FROM 20220717 TO 20220718;REEL/FRAME:060688/0692

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION