WO2022200269A1 - Illumination device and method for time-of-flight cameras - Google Patents

Illumination device and method for time-of-flight cameras

Info

Publication number
WO2022200269A1
Authority
WO
WIPO (PCT)
Prior art keywords
illumination device
emitters
illumination
intensity
scene
Prior art date
Application number
PCT/EP2022/057341
Other languages
French (fr)
Inventor
Ruxandra Marina FLOREA
Alexis Vander Biest
Pepe GIL CACHO
Original Assignee
Sony Semiconductor Solutions Corporation
Sony Depthsensing Solutions Sa/Nv
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation, Sony Depthsensing Solutions Sa/Nv filed Critical Sony Semiconductor Solutions Corporation
Priority to EP22716406.8A priority Critical patent/EP4314881A1/en
Publication of WO2022200269A1 publication Critical patent/WO2022200269A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48 Details of systems according to group G01S 17/00
    • G01S 7/481 Constructional features, e.g. arrangements of optical elements
    • G01S 7/4814 Constructional features, e.g. arrangements of optical elements of transmitters alone
    • G01S 7/4815 Constructional features, e.g. arrangements of optical elements of transmitters alone using multiple transmitters
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar

Definitions

  • the present disclosure generally pertains to the field of Time-of-Flight imaging, and in particular to devices and methods for Time-of-Flight image processing.
  • a Time-of-Flight (ToF) camera is a range imaging camera system that determines the distance of objects by measuring the time of flight of a light signal between the camera and the object for each point of the image.
  • a ToF camera has an illumination unit (an LED or a VCSEL, Vertical-Cavity Surface-Emitting Laser) that illuminates a scene with modulated light.
  • a pixel array in the ToF camera collects the light reflected from the scene and measures the phase shift, which provides information on the travelling time of the light, and hence information on distance.
  • 3D images of a scene are captured. These images are also commonly referred to as “depth map”, or “depth image”, wherein each pixel of the image is attributed with a respective depth measurement.
  • depth image can be determined directly from a phase image, which is the collection of all phase delays determined in the pixels of the iToF camera.
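The phase-to-depth conversion described above can be sketched as follows (a minimal illustration of the standard iToF relation d = c·Δφ / (4π·f_mod); the function name and the 20 MHz example frequency are ours, not taken from the patent):

```python
import math

C = 299_792_458.0  # speed of light in m/s


def depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Convert a per-pixel phase delay (radians) into a distance (metres).

    The light travels to the object and back, hence the factor 4*pi
    rather than 2*pi in the denominator.
    """
    return C * phase_rad / (4 * math.pi * mod_freq_hz)


# A phase delay of pi at 20 MHz corresponds to half the unambiguous range:
print(depth_from_phase(math.pi, 20e6))
```

Applying this conversion to every pixel of the phase image directly yields the depth image.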
  • the disclosure provides an illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.
  • the disclosure provides a computer-implemented method comprising determining an intensity image of a scene with an illumination device comprising multiple emitters distributed over an emitting area; computing an intensity map of the scene based on the intensity image; and defining illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
  • Fig. 1 schematically shows the basic operational principle of an indirect Time-of-Flight imaging system, which can be used for depth sensing or providing a distance measurement;
  • Fig. 2 provides a schematic representation of an exemplifying Time-of-Flight imaging system
  • Fig. 3 schematically illustrates an illumination device that comprises an array of Vertical-Cavity Surface-Emitting Lasers (VCSELs);
  • Fig. 4 schematically illustrates an embodiment of a VCSEL illuminator comprising a vertical cavity surface emitting laser (VCSEL) array, column drivers and row enable switches for spot scanning illuminator;
  • Fig. 5 schematically displays a mapping that is defined between batches of one or more source emitters (e.g. VCSEL emitters) and batches of one or more pixels of a pixel array.
  • Fig. 6 schematically shows an example of the radiant intensity of light emitted by an illumination device
  • Fig. 7 schematically illustrates an example of a ray distribution across various distances in an automotive scenario
  • Figs. 8a and 8b schematically illustrate an exemplifying placement and inclination of a camera equipped with a gradient illumination profile in an automotive scenario.
  • Fig. 9a depicts an exemplifying road depth profile registered by an iToF camera with a 150° wide-angle lens
  • Fig. 9b depicts the radiant flux received by the iToF camera equipped with a fisheye 150° lens at pixel level when employing the classical approach of uniformly illuminating the scene.
  • Fig. 9c depicts the radiant intensity profile that the illumination source would need to have such that the radiant flux received at pixel level is constant across the entire array
  • Fig. 10 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCSELs) of an illuminator
  • Fig. 11 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCSELs) of an illuminator based on a computed intensity map;
  • Fig. 12 schematically describes an exemplifying process of determining depth information of a scene based on illumination parameters that are optimized for the scene;
  • Fig. 13 schematically shows a spot iToF imaging system which produces a spot pattern on a scene
  • Figs. 14a and 14b show exemplifying light distributions obtained by an illumination device which supports a dual mode of operation in which the illumination device can sequentially switch from a spot to a full-field state;
  • Fig. 15 schematically describes an embodiment of an iToF device that can implement an illuminator providing a gradient intensity profile.
  • an illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.
  • the illumination device of the embodiments described below in more detail provides a profile which is characterized by a gradient distribution of radiant intensity values across the emitting area.
  • the emitters of the illumination device may be any illumination units that emit light, such as LEDs, Vertical-Cavity Surface-Emitting Lasers (VCSELs), or the like.
  • the emitting area may be understood as the area where emitters or groups of emitters of the illumination device are located.
  • Providing an optimized illumination system for Time-of-Flight cameras may ensure a gradient illumination pattern which avoids saturation for objects at close distance from the camera. Moreover, the illumination device according to the embodiments may ensure that far-away objects are illuminated with enough intensity to capture reliable depth information.
  • the illumination device of the embodiments may provide an optimized allocation of the optical power of the source across the scene, which in turn may result in a reduced risk of saturation for pixels corresponding to closer areas of the scene and in an increased signal-to-noise ratio (SNR).
  • the flux per pixel received by the sensor may be larger for the groups of pixels that correspond to closer areas of the road and significantly smaller for groups of pixels corresponding to areas of the road that are further from the camera. This may result not only in pixels that correspond to closer areas of the road saturating more quickly, but also in an overall reduced signal-to-noise ratio.
  • a camera looking at a plane of a particular (given) inclination and illuminating the plane with a source whose radiant intensity values are characterized by a gradient profile may record a nearly constant radiant flux at every pixel of the sensor.
  • the confidence recorded by an iToF camera with an illumination device may have an almost constant value per pixel as well.
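The plane geometry described above can be illustrated with a minimal toy model (the inverse-square and cosine assumptions, the function name, and the default 0.6 m camera height are our own illustration, not formulas from the patent):

```python
import math


def required_intensity(theta_deg: float, height_m: float = 0.6,
                       target_flux: float = 1.0) -> float:
    """Toy model: for a camera mounted height_m above a flat road, a ray
    at angle theta_deg from the vertical hits the road at distance
    d = h / cos(theta).  The received flux scales roughly as
    I * cos(theta) / d**2, so keeping the per-pixel flux constant
    requires a gradient I(theta) = target_flux * d**2 / cos(theta).
    """
    theta = math.radians(theta_deg)
    d = height_m / math.cos(theta)
    return target_flux * d * d / math.cos(theta)
```

Rays pointing towards the far field (large theta) require much more radiant intensity than rays pointing at the road just in front of the camera, which is exactly the gradient profile the embodiments describe.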
  • the illumination device is configured to independently drive emitters or groups of emitters in order to provide the gradient distribution of radiant intensity values across the emitting area.
  • the illumination device comprises individual DOEs for emitters or groups of emitters, the DOEs being configured to provide the gradient distribution of radiant intensity values across the emitting area.
  • the illumination device may be configured to apply different gradient illumination profiles for different scenes.
  • the illumination device may be configured to apply different gradient illumination profiles for different inclinations.
  • An inclination may for example be defined by the parameters pitch, yaw, and roll.
  • a ToF camera with an illumination device having the given inclination with respect to a plane would record a constant flux per pixel.
  • the illumination device may be configured to implement an algorithm including a feedback loop in order to determine the intensity profile of the emitters.
  • the emitters are configured to generate Spot ToF beams with a gradient illumination pattern. In other embodiments, the emitters are configured to generate full-field ToF beams with a gradient illumination pattern. Still further, the illumination device may also be a dual illuminator with the capability to switch from a spot ToF state to a full-field state. The illumination device may cooperate with a processor configured to determine an intensity image of a scene; compute an intensity map of the scene based on the intensity image, and to define illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
  • the processor may be further configured to compute the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
  • the processor may be configured to sample/segment the intensity map based on the emitters' locations.
  • the embodiments also describe a ToF camera comprising the illumination device of claim 1.
  • the embodiments describe a computer-implemented method comprising determining an intensity image of a scene with an illumination device comprising multiple emitters distributed over an emitting area; computing an intensity map of the scene based on the intensity image, and defining illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
  • the computer-implemented method may further comprise computing the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
  • An illumination device as provided by the embodiments can be used in a number of applications, especially in the autonomous driving sector.
  • the gradient illumination profile of the illumination device may ensure that, given a particular inclination of the camera and a particular distance from the camera to the road, the pixels recording information related to closer distances do not saturate, while those recording information related to larger distances acquire more reliable (i.e. accurate) information.
  • Illuminating the scene in a gradient-like manner has the potential to increase SNR as well as depth accuracy, whether in the near or far field. This is particularly important in the context of autonomous vehicles, where good depth accuracy is necessary at a wide range of distances in order to prevent accidents.
  • a gradient distribution of radiant intensity is beneficial compared to most traditional illumination sources in 3D sensing cameras, which assume an almost constant radiant intensity, resulting in closer areas of the road having a larger radiance and farther road areas having a smaller radiance.
  • FIG. 1 schematically shows the basic operational principle of an indirect Time-of-Flight imaging system which can be used for depth sensing.
  • the iToF imaging system 101 includes an iToF camera, with an imaging sensor 102 having a matrix of pixels and a processor (CPU) 105.
  • a scene 107 is actively illuminated with amplitude-modulated infrared light 108 at a predetermined wavelength using an illumination device 110, for instance with some light pulses of at least one predetermined modulation frequency generated by a timing generator 106.
  • the amplitude-modulated infrared light 108 is reflected from objects within the scene 107.
  • a lens 103 collects the reflected light 109 and forms an image of the objects within the scene 107 onto the imaging sensor 102.
  • the CPU 105 determines for each pixel a phase delay between the modulated light 108 and the reflected light 109.
  • the depth image can thus be determined directly from the phase image.
  • Fig. 2 provides a schematic representation of an exemplifying Time-of-Flight imaging system.
  • a processing unit (CPU) 105 is communicatively coupled to an imaging sensor 102.
  • the processing unit (CPU) 105 controls the imaging sensor 102, e.g. by providing a modulation frequency generated by a timing generator (106 in Fig. 1) to the imaging sensor 102.
  • the processing unit 105 reads out imaging data captured by the pixel array of the imaging sensor 102 from the imaging sensor 102.
  • the processing unit (CPU) 105 controls an illumination device (in short "illuminator").
  • Fig. 3 schematically illustrates an illumination device that comprises an array of Vertical-Cavity Surface-Emitting Lasers (VCSELs).
  • This VCSEL illuminator may be used as the illuminator 110 of a ToF system such as described in Figs. 1 and 2 above.
  • the VCSEL illuminator 110 comprises an array of VCSELs VC1N-VCMN which are grouped in M sub-sets L1-LM, N column drivers D1, D2, ..., DN for driving the VCSEL array, and M row enable switches SW1-SWM, where N and M may for example be a number between 2 and 16 or any other number.
  • Each VCSEL VC1N-VCMN may for example have an illumination power of 2W to 10W.
  • the sub-sets L1-LM are the rows of the VCSEL array.
  • the VCSELs VC11, VC12, ..., VC1N of the first sub-set L1 are grouped in the first electrical line zone.
  • the VCSELs VC21, VC22, VC23, ..., VC2N of the second sub-set L2 are grouped in the second electrical line zone.
  • the VCSELs VCM1, VCM2, VCM3, ..., VCMN of the Mth sub-set LM are grouped in the Mth electrical line zone.
  • Each electrical line zone is electrically connected to the respective driver D1, D2, ..., DN and via the respective switches SW1-SWM to a supply voltage V.
  • the supply voltage V supplies the power for generating a driving current, where the driving current is the current that is applied to the drivers D1, D2, ..., DN.
  • Each driver D1, D2, ..., DN receives a respective high modulation frequency signal HFM1, HFM2, ..., HFMN to drive the VCSEL illuminator 401.
  • a diffractive optical element (DOE) (not shown in Fig. 3) may be disposed in front of the VCSEL array 401 in order to shape and split the VCSEL beams in an energy-efficient manner.
  • the DOE may for example be implemented as an array of microlenses.
  • Each VCSEL of the VCSEL array may be driven with individual driving parameters, in particular intensity.
  • Background information for realizing an illuminator with the capability of driving the VCSELs independently of each other can be found, for example, in international patent application WO 2020/026615 A1.
  • the beams produced by such an illuminator are independently controllable.
  • In the embodiment of Fig. 3 an array of VCSELs is shown with separate drivers for the lines (rows). This may create a gradient profile in one dimension. According to alternative embodiments, separate drivers may also be provided for columns, or for both rows and columns, or for any desired subgroups of VCSELs according to a design choice.
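A one-dimensional gradient over row-driven VCSEL sub-sets, as described above, can be sketched like this (the function name, the linear ramp, and the current values are our own illustrative choices; real drive currents depend on the hardware):

```python
def row_drive_currents(num_rows: int, i_min: float = 0.5,
                       i_max: float = 4.0) -> list[float]:
    """Assign each VCSEL row a drive current increasing linearly from
    i_min (rows illuminating the near field) to i_max (rows illuminating
    the far field), yielding a one-dimensional gradient intensity
    profile across the emitting area."""
    if num_rows == 1:
        return [i_max]
    step = (i_max - i_min) / (num_rows - 1)
    return [i_min + r * step for r in range(num_rows)]


# Four row sub-sets L1..L4, near field to far field:
print(row_drive_currents(4, 1.0, 4.0))
```

With per-column or per-subgroup drivers, the same idea extends to a two-dimensional gradient.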
  • the embodiments described here may involve "flooded light” or, alternatively, “spot ToF", or both concepts at the same time.
  • In a so-called “flooded light” illuminator, the beams produced by the controllable nodes are partly overlapping in order to create a continuous illumination field.
  • Defocused spot beams generate “flooded light” with a rather homogeneous density distribution (see Fig. 14a below).
  • In a spot illuminator, the beams produced by the controllable nodes of the illuminator are not overlapping, which results in a pattern of light spots projected onto the illuminated scene (see Figs. 13 and 14b below).
  • Illuminators with gradient distribution of radiant intensity can for example be implemented by accordingly tailoring the source emitters (corresponding to beams), or by accordingly tailoring the source’s DOE, or both.
  • providing a gradient distribution of radiant intensity values across the emitting area may be achieved by setting the driving parameters for the source (e.g. VCSEL) emitters either independently or per batch of emitters.
  • Setting the driving parameters for the source (e.g. VCSEL) emitters or the spacing between the emitters per batch of emitters is equivalent to defining regions of interest (ROIs) in the emitter area, with each region being defined by its own driving parameters and/or spacing between multiple emitters (where applicable).
  • Fig. 4 shows an embodiment in which the source emitters 110 (here VCSELs) of an illumination device are tailored to provide a given gradient distribution on the pixel array 102.
  • Groups (batches) of one or more emitters (here VCSELs) are mapped to scene areas (here a road) which they illuminate, while the reflected light 109 maps scene areas to groups (batches) of pixels on the pixel array 102 that record the scenes’ depth information. In this way, the scene areas are mapped to the corresponding groups of pixels registering the areas’ depth values.
  • Fig. 5 schematically displays a mapping that is defined between batches of one or more source emitters 110 (e.g. VCSEL emitters) and batches of one or more pixels of a pixel array 102.
  • the source emitters 110 can be driven individually (or as a group) depending on the sensor area that receives light information originating from them.
  • the groups (batches) of one or more emitters 110 are driven (and/or spaced) independently and in such a manner that the resulting source radiant intensity values have a gradient profile.
  • Fig. 6 schematically shows an example of the radiant intensity profile that the illumination source would need to have such that the radiant flux received at pixel level is nearly constant across the entire array.
  • VCSEL emitters 110 of the illumination device produce emitted light 108 with a radiant intensity.
  • the radiant intensity of the emitted light 108 is displayed on the right side of Fig. 6 as a density plot in Watt per steradian [W/sr].
  • the emitter batches 110 are characterized by driving parameters (and/or spacing between emitters) that enable a gradient illumination profile for the source as well as a constant radiant flux received at pixel level.
  • the radiant intensity values have a gradient profile.
  • A more detailed explanation is given with regard to Fig. 9c below.
  • providing a gradient distribution of radiant intensity values may also involve setting the spacing of the source (e.g. VCSEL) emitters either independently or per batch of emitters.
  • providing a gradient distribution of radiant intensity values may also involve configuring the optical part of the transmitter.
  • a DOE or a diffuser is constructed such that the (collimated) light emanated by the emitters is directed according to the gradient profile.
  • the emitter structure is combined with the DOE.
  • starting from a coarse set of emitter spots, the optical elements could either multiply the emitter pattern to cover the entire field of illumination (see Fig. 4), or multiply and mirror the emitter pattern (see Fig. 5), should enough emitters be defined at VCSEL level and should a symmetric profile be considered in the lens at the receiver.
  • Fig. 7 schematically illustrates an example of a ray distribution across various distances in an (outdoor) automotive scenario.
  • Different distance ranges (also called Regions of Interest, ROIs, above) are illuminated with different radiant intensities:
  • a first radiant intensity (the highest radiant intensity) is applied in a long range region.
  • a second radiant intensity (the second highest radiant intensity) is applied in a medium range region.
  • a third radiant intensity (the third highest radiant intensity) is applied in a low range region.
  • a fourth radiant intensity (the lowest radiant intensity) is applied in a very low range region.
  • a distance-dependent illumination of the road is provided.
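The tiered, distance-dependent illumination above can be sketched as a simple lookup (the range boundaries and relative intensity values are hypothetical placeholders, not values from the patent):

```python
# (upper distance bound in metres, relative drive intensity); the four
# tiers mirror the very-low / low / medium / long range regions above.
RANGE_TIERS = [
    (2.0, 0.2),            # very low range: lowest intensity
    (10.0, 0.4),           # low range
    (30.0, 0.7),           # medium range
    (float("inf"), 1.0),   # long range: highest intensity
]


def intensity_for_distance(distance_m: float) -> float:
    """Return the relative radiant intensity for a scene area at the
    given distance (illustrative tiering only)."""
    for upper_bound, intensity in RANGE_TIERS:
        if distance_m < upper_bound:
            return intensity
    return RANGE_TIERS[-1][1]
```

Emitters whose beams land in a given range region would then be driven with the corresponding intensity.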
  • Figs. 8a and 8b schematically illustrate an exemplifying placement and inclination of a camera equipped with a gradient illumination profile in an automotive scenario.
  • a ToF camera 81 that has a Field Of View (FOV) of 150° is attached to the bumper of car 80, approximately 60cm above the road (see Fig. 8b).
  • ToF camera 81 is equipped with an illumination source having a gradient illumination profile as described in the embodiments above.
  • the Field Of Illumination (FOI) of the source is matched to the Field Of View (FOV) of the lens at the receiver side. That is, in the example given in Figs. 8a and 8b (using a fisheye 150° lens) and by matching the FOI of the illumination source to the FOV of the lens on the receiver side, a vertical FOI of 77° and a horizontal FOI of 108° are applied.
  • Fig. 9a depicts an exemplifying road depth profile registered by a ToF camera equipped with a fisheye 150° lens.
  • the density plot displays the profile of depth values obtained with the fisheye lens from low (dark) to high (light) on a given road plane.
  • Fig. 9b depicts the radiant flux received by the ToF camera equipped with a fisheye 150° lens at pixel level when employing the classical approach of uniformly illuminating the scene.
  • the density plot displays the profile of radiant flux per pixel values obtained with the fisheye lens from low (dark) to high (light), given a constant radiant source intensity.
  • Fig. 9c depicts the radiant intensity profile that the illumination source would need to have such that the radiant flux received at pixel level is constant across the entire array.
  • the density plot displays the profile of radiant source intensity values for a constant radiant flux per pixel values obtained with the ToF camera with a fisheye 150° lens. Dark indicates low values, and light indicates large values.
  • the density plot shows the gradient profile in the radiant intensity values of the illumination source. This gradient is not only dependent on the per pixel distance to the road, but also on the lens intrinsics (lens distortion included).
  • FIG. 10 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCELs) of an illuminator.
  • scene information is acquired with pre-defined homogeneous illumination flux directed at the scene.
  • an intensity image comprising a respective radiant intensity per pixel is determined from the scene information.
  • an intensity map of the scene is computed analytically based on the acquired intensity per pixel, the pre-defined illumination flux emitted by the illuminator, the hardware specifications of the source (density of emitters, emitter positioning and spacing on the emitter array, optics such as collimation lens and/or Diffractive Optical Elements, field of illumination, etc.), assumed distance to the scene, assumed reflectivity of the scene, and a camera model.
  • the camera model may for example consider aspects of the camera such as sensor parameters (pixel size, resolution), lens model (as produced by lens design software), etc.
  • the camera model may for example comprise effects from lens distortion.
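The core of this step can be sketched with a deliberately simplified scaling rule (our own reading of the text; a real implementation would fold in the hardware specifications, scene assumptions, and camera model listed above):

```python
def ideal_intensity_map(measured_intensity: list[list[float]],
                        uniform_source_intensity: float = 1.0) -> list[list[float]]:
    """Given the per-pixel intensity measured under a uniform
    illumination profile, scale the source intensity per pixel so that
    every pixel would receive the same (mean) flux: pixels that measured
    little light get proportionally more source intensity."""
    n = len(measured_intensity) * len(measured_intensity[0])
    target = sum(sum(row) for row in measured_intensity) / n
    return [[uniform_source_intensity * target / max(v, 1e-9) for v in row]
            for row in measured_intensity]
```

Applied to an image like Fig. 9b, this inversion yields a gradient map like Fig. 9c: bright (near) regions are assigned low source intensity and dim (far) regions high source intensity.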
  • Fig. 9c exemplifies such an ideal source radiant intensity map, where each radiant intensity value corresponds to a point-like source that, in turn, generates a radiant flux value in Fig. 9b. This is why the aspect ratio is shown identical between Fig. 9b and Fig. 9c, while in practice that will not be the case; in practice, the aspect ratio of Fig. 9c would be determined by the Horizontal and the Vertical Field Of Illumination (HFOI and VFOI).
  • illumination parameters are defined for each illuminator element (e.g. VCSEL) based on the computed ideal intensity map. Specifically, since in practice it is unfeasible to have a (point-like) source for every pixel, the method described in this embodiment performs a mapping between the FOI of each emitter, the covered sub-area on the assumed scene, and the group of pixels that the area corresponds to. The method described here also covers the case in which several pixels are covered by multiple emitters.
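The emitter-to-pixel-group mapping mentioned above can be sketched as a rectangular tiling (the patent leaves the concrete scheme open; this function and its uniform-tiling assumption are our own illustration):

```python
def map_emitters_to_pixel_groups(num_emitters_x: int, num_emitters_y: int,
                                 sensor_w: int, sensor_h: int) -> dict:
    """Tile the sensor into rectangular pixel groups, one per emitter, so
    each emitter's field of illumination is associated with the pixel
    group that receives its reflected light (cf. Fig. 5)."""
    gw, gh = sensor_w // num_emitters_x, sensor_h // num_emitters_y
    mapping = {}
    for ey in range(num_emitters_y):
        for ex in range(num_emitters_x):
            mapping[(ex, ey)] = (ex * gw, ey * gh,  # top-left pixel
                                 gw, gh)            # group width, height
    return mapping
```

Overlapping emitter FOIs (several emitters covering the same pixels) would turn this one-to-one tiling into a many-to-many mapping.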
  • Fig. 11 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCSELs) of an illuminator based on a computed ideal intensity map.
  • the ideal intensity map as obtained by the process in Fig. 10 is sampled/segmented based on the emitters' locations. Each emitter corresponds to a group of pixels on the sensor (see Fig. 5).
  • the sampled/segmented ideal intensity map values are used to compute, for example through an interpolation between the sampled values, the illumination parameters for each corresponding emitter, so that the illuminator, when applying the illumination parameters, provides illumination tailored to the scene. It should be noted, however, that the derivation of the emitter driving parameters may be dependent on the specifics of the HW/optics of the transmitter.
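The reduction from the sampled map to per-emitter parameters can be sketched as follows (averaging over each pixel group is our simple stand-in for the interpolation mentioned above):

```python
def emitter_drive_levels(ideal_map: list[list[float]],
                         groups: dict) -> dict:
    """Reduce the sampled ideal intensity map to one drive level per
    emitter by averaging the map values over the pixel group associated
    with that emitter (groups as produced by an emitter-to-pixel-group
    mapping: emitter -> (x0, y0, width, height))."""
    levels = {}
    for emitter, (x0, y0, w, h) in groups.items():
        vals = [ideal_map[y][x]
                for y in range(y0, y0 + h)
                for x in range(x0, x0 + w)]
        levels[emitter] = sum(vals) / len(vals)
    return levels
```

The resulting per-emitter levels are what the drivers (cf. Fig. 3) would apply to realize the gradient profile.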
  • Fig. 12 schematically describes an exemplifying process of determining depth information of a scene based on illumination parameters that are optimized for the scene.
  • scene information is acquired with tailored illumination based on defined illumination parameters for the respective illuminator emitters as obtained by the process in Fig. 11.
  • the depth information of the scene is computed from the scene information.
  • different inclinations will apply different gradient illumination profiles. That is, should a plane surface (at which the camera looks) change its inclination, then the illumination gradient will be changed as well, such that the flux at the sensor side remains as constant as possible.
  • the system may rely on an algorithm including a feedback loop.
  • Such an algorithm starts by acquiring one frame of scene information with a uniform illumination profile (see 1001 in Fig. 10).
  • the radiant flux recorded on the sensor is then analyzed (see 1002 and 1003 in Fig. 10) and the information is sent back to the illumination source where the profile is modified accordingly (see 1004 in Fig. 10).
  • Several frames are then acquired with the computed gradient profile (see 1201 in Fig. 12).
  • an extra frame is acquired with uniform illumination profile as set out in Fig. 10, and a new gradient profile is computed based on the new radiant flux information recorded on the sensor side.
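The feedback loop described in the preceding bullets can be sketched as follows (the function arguments are placeholders for hardware interfaces; the frame counts are arbitrary example values, not from the patent):

```python
def illumination_feedback_loop(acquire_frame, set_profile, compute_profile,
                               frames_per_update: int = 10,
                               num_updates: int = 3) -> list:
    """Periodically acquire one frame under a uniform illumination
    profile, recompute the gradient profile from the recorded flux
    (cf. Fig. 10, blocks 1002-1004), then acquire several frames with
    that profile (cf. Fig. 12, block 1201)."""
    frames = []
    for _ in range(num_updates):
        set_profile(None)                      # None = uniform profile
        reference = acquire_frame()            # uniform-illumination frame
        profile = compute_profile(reference)   # new gradient profile
        set_profile(profile)
        for _ in range(frames_per_update):
            frames.append(acquire_frame())
    return frames
```

Each iteration corresponds to one "extra frame with uniform illumination" followed by a batch of frames under the freshly computed gradient.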
  • an offline calibration stage is foreseen.
  • This offline calibration may for example imply computing the illumination gradient for planes of different inclinations, i.e. performing an inclination sweep and pre-computing or estimating the deltas/changes that need to be applied to the gradient when the scene inclination changes.
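An inclination sweep with a pre-computed lookup, as described above, could look like this (the pitch-only parameterization and nearest-neighbour lookup are our own simplification; a full calibration would also sweep yaw and roll):

```python
import bisect


def build_inclination_lut(compute_gradient, pitch_deg_values):
    """Offline stage: pre-compute one gradient profile per camera pitch
    in the sweep, stored sorted by pitch."""
    return sorted((p, compute_gradient(p)) for p in pitch_deg_values)


def lookup_gradient(lut, pitch_deg: float):
    """Run-time stage: return the pre-computed gradient for the nearest
    calibrated pitch."""
    pitches = [p for p, _ in lut]
    i = bisect.bisect_left(pitches, pitch_deg)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(lut)]
    best = min(candidates, key=lambda j: abs(lut[j][0] - pitch_deg))
    return lut[best][1]
```

At run time, only the lookup (and possibly a small delta correction) is needed, avoiding the full gradient computation per frame.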
  • the embodiments described above are applicable in the context of a Spot ToF illuminator (dot pattern-based illuminators or other patterns).
  • In a so-called “spot illuminator”, the beams produced by the controllable nodes of the illuminator are not overlapping. This results in spot beams.
  • the embodiments described below in more detail provide an optimized illumination system for Spot Time-of-Flight enabled cameras ensuring a gradient illumination pattern. That is, the method embodiments described here can be combined with a Spot ToF solution to improve ambient light performance and mitigate multipath. The latter is particularly useful when an object comes into the field of view (e.g. another car).
  • Fig. 13 schematically shows a spot ToF imaging system which produces a spot pattern on a scene.
  • the spot ToF imaging system comprises an illumination unit 110, here a spot illuminator, which produces a pattern of spots 202 on a scene 107 comprising objects 203 and 204.
  • the spot illuminator may for example be configured according to the principles of the illuminator of Fig. 3 above, where, according to the spot illuminator, the beams produced by the controllable nodes of the illuminator are not overlapping.
  • An iToF camera 102 captures an image of the spot pattern on the scene 107.
  • the pattern of light spots projected onto the scene 107 by illumination unit 110 results in a corresponding pattern of light spots in the confidence image and depth image captured by the pixels of the image sensor (102a in Fig. 1) of iToF camera 102.
  • the light spots appear in the confidence image produced by iToF camera 102 as a spatial light pattern including high-intensity areas 201 (the light spots), and low-intensity areas 202.
  • the illumination unit 110 is positioned in the plane of the image sensor (102a in Fig. 1) of iToF camera 102.
  • This plane is also called ToF plane.
  • the illumination unit 110 and the iToF camera 102 are positioned at a distance B from each other. This distance B is called baseline.
  • the scene 107 has distance Z from baseline B. In the embodiment of Fig. 13, for simplification, only a single distance Z of the scene 107 is shown. However, every object 203, 204 or object point within the scene 107 may have an individual distance Z from baseline B.
  • the depth image of the scene captured by ToF camera 102 defines a depth value for each pixel of the depth image and thus provides depth information of scene 107 and objects 203, 204.
  • the light spots produced by illumination unit 110 are shown as dots and they have a circular shape. The embodiments are, however, not restricted to such dots.
  • the light spots produced by illumination unit 110 may have a rectangular or square shape or any other regular or irregular shape.
  • the light spots may have a spatial light intensity profile, for example, a Gaussian light intensity profile or the like.
  • the light spots produced by illumination unit 110 are shown as a regular grid pattern. However, in an alternative embodiment, the spot pattern produced by illumination unit 110 may be an irregular pattern.
  • the pattern of light spots projected onto the scene by a spot illuminator results in a corresponding pattern of light spots in the confidence image and depth image captured by the pixels of the image sensor.
  • the spots appear in the confidence image produced by the iToF camera as a spatial light pattern including high-intensity areas 201 (the light spots), and low-intensity areas.
  • the high-intensity area 201 defining a spot is also denoted as "spot region”.
  • the emitter configuration, e.g. driving parameters or spacing, is as described with regard to Figs. 4, 5 and 6 above
  • the emitter configuration combines both the spot profile and the gradient profile as described with regard to Figs. 4 to 12 above.
  • spot ToF is combined with full-field (flooded light).
  • the combination of spots and full-field (flooded light) may for example be achieved by combining two or more illuminators, a dedicated spot illuminator and a dedicated flooded light illuminator, or by applying a dual illuminator with the capability to switch from a spot ToF state to a full-field (flooded light) state, as described below.
  • a combination of spots and full-field (flooded light) may be achieved by applying a dual illuminator with the capability to switch from a spot ToF state to a full-field (flooded light) state.
  • in a first state of a dual illuminator (see Fig. 14a below), the spots and the optical stack at the transmitter side are defined according to a sampling of the gradient profile.
  • in a second state (see Fig. 14b below), where full-field illumination is applicable, the complete gradient profile can be achieved. The switching between these two states is here also called dual mode of operation.
  • Figs. 14a and 14b show examples of creating a gradient illumination with a dual illuminator.
  • these Figures show exemplifying light distributions obtained by an illumination device which supports a dual mode of operation in which the illumination device can sequentially switch from a spot to a full-field state.
  • the dual mode of operation comprises a spot state (Fig. 14a) and a flooded state (Fig. 14b).
  • the illumination device switches between the spot state and the flooded state.
  • the spot state displayed in Fig. 14a is created by collimating the light of a VCSEL array of given emitters.
  • the collimated light coming from the VCSEL array is then odd-multiplied by a DOE (in Fig. 14a, a 1x3 DOE).
  • the DOE is a diffractive optic element that acts as a beam splitter and that is used to multiply the VCSEL array dots vertically/horizontally. This creates tiles/copies of the VCSEL array spots.
  • the three rectangles shown in Fig. 14a indicate the three DOE regions of the 1x3 DOE.
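The tiling behaviour of the 1x3 DOE described above can be sketched numerically; the function name and the use of a plain array as the VCSEL spot tile are illustrative assumptions, not part of the disclosed optics:

```python
import numpy as np

def tile_spots_1x3(spot_tile):
    """Replicate the VCSEL spot pattern into 3 vertical DOE regions,
    mimicking a 1x3 DOE acting as a beam splitter."""
    return np.tile(spot_tile, (3, 1))

# Example: a 2x4 spot tile is copied into three stacked DOE regions
tile = np.zeros((2, 4))
field = tile_spots_1x3(tile)  # shape (6, 4)
```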
  • the VCSEL emitters are assumed to be driven by a signal creating the gradient illuminator as described with regard to the embodiments of Figs. 4 to 12 above, i.e., some VCSEL emitters will receive different driving current in a gradient-like manner (per line or bigger batches of emitters).
  • the dashed horizontal lines indicate the borders of three exemplifying regions of different intensity levels.
  • the illumination profile changes in the vertical direction as indicated by the arrow on the right.
  • the flooded gradient illuminator as displayed in Fig. 14b is achieved by defocusing the spots enough to create a sufficiently uniform light. This defocusing may for example be achieved with a diffuser or with a DOE.
  • the three rectangles indicate the three DOE regions of a 1x3 DOE.
  • Another example for defocusing is using LCDs or a defocusing lens.
  • the dashed horizontal lines indicate the borders of three exemplifying regions of different intensity levels.
  • the illumination profile changes in the vertical direction as indicated by the arrow on the right.
  • Both the characteristics of the DOE and the intrinsics of the (receiving) lens may be accounted for.
  • the latter is enabled by incorporating in the formula of the radiant flux per pixel the angle between each pixel’s unit ray (projection vector) and the normal of the road.
  • the set of projection vectors (one per pixel) can be computed based on a lens model and a set of lens intrinsics, via a lens design file and ray tracing methods in an optical system design file, or other alternative methods.
  • each projection vector is determined in order to make the transformation between the value registered at the pixel coordinate and the corresponding value in 3D space.
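The projection-vector computation described above can be sketched with a simple undistorted pinhole model; a real wide-angle or fisheye lens would instead need a distortion model or a ray-traced lens design file, and all function names and parameter values here are illustrative assumptions:

```python
import numpy as np

def projection_vectors(w, h, fx, fy, cx, cy):
    """Unit ray (projection vector) per pixel, shape (h, w, 3),
    for a pinhole camera looking along +z."""
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, float)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def cos_to_road_normal(rays, normal=(0.0, -1.0, 0.0)):
    """|cos| of the angle between each pixel ray and the road normal,
    the factor entering the radiant flux per pixel."""
    n = np.asarray(normal) / np.linalg.norm(np.asarray(normal))
    return np.abs(rays @ n)
```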
  • the electronic device 1200 comprises a CPU 1201 as processor.
  • the electronic device 1200 further comprises an iToF sensor 1206 (e.g. sensor 110 of Figs. 1 and 2; or the illuminator device of Fig. 3) connected to the processor 1201.
  • the processor 1201 may for example implement a process of defining illumination parameters as described in Figs. 10 and 11 above, as well as a process of determining depth information of a scene based on illumination parameters that are optimized for the scene as described in Fig. 12.
  • the electronic device 1200 further comprises a user interface 1207 that is connected to the processor 1201. This user interface 1207 acts as a man-machine interface and enables a dialogue between an administrator and the electronic system. For example, an administrator may make configurations to the system using this user interface 1207.
  • the electronic device 1200 further comprises a Bluetooth interface
  • These units 1204, 1205 act as I/O interfaces for data communication with external devices. For example, video cameras with Ethernet, WLAN or Bluetooth connection may be coupled to the processor 1201 via these interfaces 1204, 1205.
  • the electronic device 1200 further comprises a data storage 1202, which may be the calibration storage described with regards to Fig. 7, and a data memory 1203 (here a RAM).
  • the data storage 1202 is arranged as a long-term storage, e.g. for storing the algorithm parameters for one or more use-cases, for recording iToF sensor data obtained from the iToF sensor 1206, or the like.
  • the data memory 1203 is arranged to temporarily store or cache data or computer instructions for processing by the processor 1201.
  • the embodiments are not constrained by a particular lens, distance to the scene or inclination of the camera with respect to the plane of the road.
  • the optical power allocation and, implicitly, the values of the radiant intensity of the source can be profiled for any type of lens, camera-to-road distance and any inclination of the camera, while still respecting the gradient profile.
  • An illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.
  • the illumination device of (1) wherein the illumination device is configured to independently drive the emitters or groups of emitters of the illumination device in order to provide the gradient distribution of radiant intensity values across the emitting area.
  • the illumination device of (1) or (2), wherein the illumination device comprises individual DOEs for emitters or groups of emitters, the DOEs being configured to provide the gradient distribution of radiant intensity values across the emitting area.
  • illumination device of any one of (1) to (6), wherein the illumination device is a dual illuminator with the capability to switch from a spot ToF state to a full-field state.
  • the processor is configured to compute the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
  • a ToF camera comprising the illumination device of any one of (1) to (12).
  • a computer-implemented method comprising determining an intensity image of a scene with an illumination device comprising multiple emitters distributed over an emitting area; computing an intensity map of the scene based on the intensity image, and defining illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.

Abstract

An illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.

Description

ILLUMINATION DEVICE AND METHOD FOR TIME-OF-FLIGHT CAMERAS
TECHNICAL FIELD
The present disclosure generally pertains to the field of Time-of-Flight imaging, and in particular to devices and methods for Time-of-Flight image processing.
TECHNICAL BACKGROUND
With the continuing development of autonomous driving, traditional 2D cameras are complemented by other camera technologies such as stereo cameras, IR cameras, RADAR, LiDAR, and Time-of-Flight (ToF) cameras.
A Time-of-Flight (ToF) camera is a range imaging camera system that determines the distance of objects by measuring the time of flight of a light signal between the camera and the object for each point of the image. Generally, a ToF camera has an illumination unit (an LED or a VCSEL, Vertical-Cavity Surface-Emitting Laser) that illuminates a scene with modulated light. A pixel array in the ToF camera collects the light reflected from the scene and measures the phase shift, which provides information on the travelling time of the light, and hence information on distance.
In indirect Time-of-Flight (iToF), three-dimensional (3D) images of a scene are captured. These images are also commonly referred to as “depth map”, or “depth image”, wherein each pixel of the image is attributed with a respective depth measurement. The depth image can be determined directly from a phase image, which is the collection of all phase delays determined in the pixels of the iToF camera.
Although there exist techniques for illuminating a scene in order to perform Time-of-Flight distance measurements, it is generally desirable to provide better techniques for illuminating a scene in order to perform Time-of-Flight distance measurements.
SUMMARY
According to a first aspect the disclosure provides an illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.
According to a further aspect the disclosure provides a computer-implemented method comprising determining an intensity image of a scene with an illumination device comprising multiple emitters distributed over an emitting area; computing an intensity map of the scene based on the intensity image, and defining illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map. Further aspects are set forth in the dependent claims, the following description and the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are explained by way of example with respect to the accompanying drawings, in which:
Fig. 1 schematically shows the basic operational principle of an indirect Time-of-Flight imaging system, which can be used for depth sensing or providing a distance measurement;
Fig. 2 provides a schematic representation of an exemplifying Time-of-Flight imaging system;
Fig. 3 schematically illustrates an illumination device that comprises an array of Vertical-Cavity Surface-Emitting Lasers (VCSELs);
Fig. 4 schematically illustrates an embodiment of a VCSEL illuminator comprising a vertical cavity surface emitting laser (VCSEL) array, column drivers and row enable switches for spot scanning illuminator;
Fig. 5 schematically displays a mapping that is defined between batches of one or more source emitters (e.g. VCSEL emitters) and batches of one or more pixels of a pixel array.
Fig. 6 schematically shows an example of the radiant intensity of light emitted by an illumination device;
Fig. 7 schematically illustrates an example of a ray distribution across various distances in an automotive scenario;
Figs. 8a and 8b schematically illustrate an exemplifying placement and inclination of camera equipped with a gradient illumination profile in an automotive scenario.
Fig. 9a depicts an exemplifying road depth profile registered by an iToF camera with a 150° wide-angle lens;
Fig. 9b depicts the radiant flux received by the iToF camera equipped with a fisheye 150° lens at pixel level when employing the classical approach of uniformly illuminating the scene.
Fig. 9c depicts the radiant intensity profile that the illumination source would need to have such that the radiant flux received at pixel level is constant across the entire array;
Fig. 10 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCSELs) of an illuminator;
Fig. 11 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCSELs) of an illuminator based on a computed intensity map;
Fig. 12 schematically describes an exemplifying process of determining depth information of a scene based on illumination parameters that are optimized for the scene;
Fig. 13 schematically shows a spot iToF imaging system which produces a spot pattern on a scene;
Figs. 14a and 14b show exemplifying light distributions obtained by an illumination device which supports a dual mode of operation in which the illumination device can sequentially switch from a spot to a full-field state; and
Fig. 15 schematically describes an embodiment of an iToF device that can implement an illuminator providing a gradient intensity profile.
DETAILED DESCRIPTION OF EMBODIMENTS
Before a detailed description of the embodiments under reference of Fig. 1, general explanations are made.
The embodiments described below in more detail describe an illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.
Unlike the available approaches in the autonomous driving sector, which aim to provide a uniform illumination profile on the scene, the illumination device of the embodiments described below in more detail provides a profile which is characterized by a gradient distribution of radiant intensity values across the emitting area.
The emitters of the illumination device may be any illumination units that emit light, such as LEDs, Vertical-Cavity Surface-Emitting Lasers (VCSELs), or the like.
The emitting area may be understood as the area where emitters or groups of emitters of the illumination device are located.
Providing an optimized illumination system for Time-of-Flight cameras may ensure a gradient illumination pattern which avoids that objects at close distance from the camera are prone to saturation. Moreover, with the illumination device according to the embodiments, it may be avoided that objects that are far away are not illuminated with enough intensity to capture reliable depth information. The illumination device of the embodiments may provide an optimized allocation of the optical power of the source across the scene, which in turn may result in a reduced risk of saturation for pixels corresponding to closer areas of the scene and in an increased signal-to-noise ratio (SNR).
Without a gradient distribution of radiant intensity, the flux per pixel received by the sensor is larger for the groups of pixels that correspond to closer areas of the road and significantly smaller for groups of pixels corresponding to areas of the road that are further from the camera. This results not only in pixels that correspond to closer areas of the road saturating quicker but also in an overall reduced signal-to-noise ratio. With the gradient distribution of radiant intensity as provided by the illumination device of the embodiments, in contrast, a camera looking at a plane of a particular (given) inclination and illuminating the plane with a source whose radiant intensity values are characterized by a gradient profile may record a nearly constant radiant flux at every pixel of the sensor.
Moreover, the confidence recorded by an iToF camera with an illumination device according to the embodiments may have an almost constant value per pixel as well.
According to some embodiments, the illumination device is configured to independently drive emitters or groups of emitters in order to provide the gradient distribution of radiant intensity values across the emitting area.
According to other embodiments, the illumination device comprises individual DOEs for emitters or groups of emitters, the DOE's being configured to provide the gradient distribution of radiant intensity values across the emitting area.
The illumination device may be configured to apply different gradient illumination profiles for different scenes.
For example, the illumination device may be configured to apply different gradient illumination profiles for different inclinations. An inclination may for example be defined by the parameters pitch, yaw, and roll. A ToF camera with an illumination device having the given inclination with respect to a plane would record a constant flux per pixel.
Still further, the illumination device may be configured to implement an algorithm including a feedback loop in order to determine the intensity profile of the emitters.
In some embodiments, the emitters are configured to generate Spot ToF beams with a gradient illumination pattern. In other embodiments, the emitters are configured to generate full-field ToF beams with a gradient illumination pattern. Still further, the illumination device may also be a dual illuminator with the capability to switch from a spot ToF state to a full-field state. The illumination device may cooperate with a processor configured to determine an intensity image of a scene; compute an intensity map of the scene based on the intensity image, and to define illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
The processor may be further configured to compute the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
Still further, the processor may be configured to sample/segment the intensity map based on the emitters' locations.
The embodiments also describe a ToF camera comprising the illumination device of claim 1.
Still further, the embodiments describe a computer-implemented method comprising determining an intensity image of a scene with an illumination device comprising multiple emitters distributed over an emitting area; computing an intensity map of the scene based on the intensity image, and defining illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
The computer-implemented method may further comprise computing the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
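A minimal sketch of such a feedback step, under the assumption that emitter batches correspond to horizontal bands of the intensity image and that drive levels are normalized; the function, its parameters, and the simple proportional update are illustrative assumptions rather than the claimed method:

```python
import numpy as np

def update_drive_levels(intensity_img, n_batches, drive, target, gain=0.5,
                        d_min=0.05, d_max=1.0):
    """Split the intensity image into n_batches horizontal bands (one per
    emitter-row batch) and nudge each batch's normalized drive level
    toward the common target intensity."""
    bands = np.array_split(intensity_img, n_batches, axis=0)
    new_drive = drive.copy()
    for k, band in enumerate(bands):
        err = target - band.mean()  # positive -> band is under-exposed
        step = gain * err * drive[k] / max(target, 1e-9)
        new_drive[k] = np.clip(drive[k] + step, d_min, d_max)
    return new_drive
```

In this toy update, an over-exposed near-field band has its drive reduced while a dim far-field band is pushed up, which is one possible way to converge toward the gradient profile.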
An illumination device as provided by the embodiments can be used in a number of applications, especially in the autonomous driving sector. Specifically, the gradient illumination profile of the illumination device may ensure that, given a particular inclination of the camera and a particular distance from the camera to the road, the pixels recording information related to closer distances do not saturate, while those recording information related to larger distances acquire more reliable (i.e. accurate) information.
Illuminating the scene in a gradient-like manner has the potential to increase SNR as well as depth accuracy, whether in the near or far field. This is particularly important in the context of autonomous vehicles, where good depth accuracy is necessary at a wide range of distances in order to prevent accidents.
A gradient distribution of radiant intensity is beneficial compared to most traditional illumination sources in 3D sensing cameras which assume an almost constant radiant intensity which results in closer areas of the road having a larger radiance and farther road areas having a smaller radiance.
Operational principle of an indirect Time-of-Flight imaging system (iToF)

Fig. 1 schematically shows the basic operational principle of an indirect Time-of-Flight imaging system which can be used for depth sensing. The iToF imaging system 101 includes an iToF camera, with an imaging sensor 102 having a matrix of pixels and a processor (CPU) 105. A scene 107 is actively illuminated with amplitude-modulated infrared light 108 at a predetermined wavelength using an illumination device 110, for instance with some light pulses of at least one predetermined modulation frequency generated by a timing generator 106. The amplitude-modulated infrared light 108 is reflected from objects within the scene 107. A lens 103 collects the reflected light 109 and forms an image of the objects within the scene 107 onto the imaging sensor 102. In indirect Time-of-Flight (iToF) the CPU 105 determines for each pixel a phase delay between the modulated light 108 and the reflected light 109.
This may be achieved by sampling, for each pixel, a correlation wave between one or more shifted demodulation signals generated by the timing generator 106 (for example shifted by 0°, 90°, 180° and 270°) and the reflected light 109 that is captured by the respective pixel of the imaging sensor 102. This yields an in-phase component value ("I value") and a quadrature component value ("Q value") for each pixel, the so-called I and Q values. Based on the I and Q values for each pixel a phase delay value φ for each pixel may be determined as φ = arctan(Q/I), which yields a phase image. The phase delay φ is proportional to the object's distance modulo the wavelength of the modulation frequency. The depth image can thus be determined directly from the phase image. Still further, based on the I and Q values an amplitude value and a confidence value may be determined for each pixel as conf = |I| + |Q|, which yields the amplitude image and the confidence image.
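The 4-phase demodulation described above can be sketched as follows; the helper names and the conversion of the phase delay to an unambiguous depth are illustrative assumptions consistent with φ = arctan(Q/I) and conf = |I| + |Q|:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def iq_from_samples(s0, s90, s180, s270):
    """I/Q values from the four correlation samples (0/90/180/270 deg)."""
    return s0 - s180, s90 - s270

def phase_confidence_depth(i, q, f_mod):
    """Per-pixel phase delay, confidence |I|+|Q|, and depth within the
    unambiguous range of the modulation frequency."""
    phi = np.arctan2(q, i) % (2 * np.pi)   # phase delay in [0, 2*pi)
    conf = np.abs(i) + np.abs(q)           # confidence value
    depth = phi * C / (4 * np.pi * f_mod)  # modulo the ambiguity range
    return phi, conf, depth

# Example: an ideal pixel 2.5 m away, modulated at 20 MHz
f_mod, true_d = 20e6, 2.5
phi_true = 4 * np.pi * f_mod * true_d / C
s = [np.cos(phi_true - a) for a in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
i, q = iq_from_samples(*s)
phi, conf, depth = phase_confidence_depth(i, q, f_mod)
```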
Fig. 2 provides a schematic representation of an exemplifying Time-of-Flight imaging system. A processing unit (CPU) 105 is communicatively coupled to an imaging sensor 102. The processing unit (CPU) 105 controls the imaging sensor 102, e.g. by providing a modulation frequency generated by a timing generator (106 in Fig. 1) to the imaging sensor 102. The processing unit 105 reads out imaging data captured by the pixel array of the imaging sensor 102 from the imaging sensor 102. Still further, the processing unit (CPU) 105 controls an illumination device (in short "illuminator").
Time-of-Flight illuminators
Fig. 3 schematically illustrates an illumination device that comprises an array of Vertical-Cavity Surface-Emitting Lasers (VCSELs). This VCSEL illuminator may be used as the illuminator 110 of a ToF system such as described in Figs. 1 and 2 above. The VCSEL illuminator 110 comprises an array of VCSELs VC11-VCMN which are grouped in M sub-sets L1-LM, N column drivers D1, D2, ..., DN for driving the VCSEL array, and M row enable switches SW1-SWM, where N and M may for example be a number between 2 and 16 or any other number. Each VCSEL VC11-VCMN may for example have an illumination power of 2 W to 10 W. In this embodiment the sub-sets L1-LM are the rows of the VCSEL array. The VCSELs VC11, VC12, ..., VC1N of the first sub-set L1 are grouped in the first electrical line zone. The VCSELs VC21, VC22, VC23, ..., VC2N of the second sub-set L2 are grouped in the second electrical line zone. The VCSELs VCM1, VCM2, VCM3, ..., VCMN of the Mth sub-set LM are grouped in the Mth electrical line zone. Each electrical line zone is electrically connected to the respective driver D1, D2, ..., DN and via the respective switches SW1-SWM to a supply voltage V. The supply voltage V supplies the power for generating a driving current, where the driving current is the current that is applied to the drivers D1, D2, ..., DN and to the VCSEL array by turning on/off the respective switch SW1-SWM. Each driver D1, D2, ..., DN receives a respective high modulation frequency signal HFM1, HFM2, ..., HFMN to drive the VCSEL illuminator.
A diffractive optical element (DOE) (not shown in Fig. 3) may be disposed in front of the VCSEL array in order to shape and split the VCSEL beams in an energy-efficient manner. A DOE may for example be implemented as an array of microlenses.
Each VCSEL of the VCSEL array may be driven with individual driving parameters, in particular intensity. Background information for realizing an illuminator with the capability of driving the VCSELs independently of each other can be found for example in international patent application WO 2020/026615 A1. As each independently controllable node of the illuminator 110 of Fig. 3 forms a beam (not shown in Fig. 3), the beams produced by such an illuminator are independently controllable.
In the embodiment of Fig. 3 an array of VCSELs is shown with separate drivers for the lines (rows). This may create a gradient profile in one dimension. According to alternative embodiments, separate drivers may also be provided for columns, or for both rows and columns, or any desired subgroups of VCSELs according to a design choice.
The embodiments described here may involve "flooded light" or, alternatively, "spot ToF", or both concepts at the same time. In a so-called "flooded light" illuminator, the beams produced by the controllable nodes are partly overlapping in order to create a continuous illumination field. Defocused spot beams generate "flooded light" with a rather homogeneous density distribution (see Fig. 14b below). To the contrary, in the spot illuminator, the beams produced by the controllable nodes of the illuminator are not overlapping, which results in a pattern of light spots projected onto the illuminated scene (see Figs. 13 and 14a below).
Illuminators with gradient distribution of radiant intensity

Providing a gradient distribution of radiant intensity values across the emitting area can for example be implemented by accordingly tailoring the source emitters (corresponding to beams), or by accordingly tailoring the source's DOE, or both.
For example, providing a gradient distribution of radiant intensity values across the emitting area may be achieved by setting the driving parameters for the source (e.g. VCSEL) emitters either independently or per batch of emitters. Setting the driving parameters for the source (e.g. VCSEL) emitters or the spacing between the emitters per batch of emitters is equivalent to defining regions of interest (ROIs) in the emitter area, with each region being defined by its own driving parameters and/or spacing between multiple emitters (where applicable). In order to define these regions of interest, i.e. batches of one or more emitters, one could for example map the emitter ROIs to the corresponding scene areas that are illuminated by the light emitted by each emitter ROI (see Fig. 4).
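The per-batch driving described above can be sketched as follows, assuming row-wise emitter batches (ROIs) and a linear gradient of normalized driving currents; all names and values are illustrative assumptions:

```python
import numpy as np

def gradient_drive_currents(n_rows, i_near=0.2, i_far=1.0, rows_per_batch=2):
    """Per-row normalized driving currents: rows illuminating far road
    areas are driven harder, in batches of rows_per_batch rows (ROIs)."""
    n_batches = int(np.ceil(n_rows / rows_per_batch))
    batch_levels = np.linspace(i_near, i_far, n_batches)  # one level per ROI
    return np.repeat(batch_levels, rows_per_batch)[:n_rows]

# Example: 8 emitter rows driven in 4 batches of 2 rows each
currents = gradient_drive_currents(8)
```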
Fig. 4 shows an embodiment in which the source emitters 110 (here VCSELs) of an illumination device are tailored to provide a given gradient distribution on the pixel array 102. Groups (batches) of one or more emitters (here VCSELs) are mapped to scene areas (here a road) which they illuminate, while the reflected light 109 maps scene areas to groups (batches) of pixels on the pixel array 102 that record the scenes’ depth information. In this way, the scene areas are mapped to the corresponding groups of pixels registering the areas’ depth values.
Fig. 5 schematically displays a mapping that is defined between batches of one or more source emitters 110 (e.g. VCSEL emitters) and batches of one or more pixels of a pixel array 102. For example, the source emitters 110 can be driven individually (or as a group) depending on the sensor area that receives light information originating from them. The groups (batches) of one or more emitters 110 are driven (and/or spaced) independently and in such a manner that the resulting source radiant intensity values have a gradient profile.
Fig. 6 schematically shows an example of the radiant intensity profile that the illumination source would need to have such that the radiant flux received at pixel level is nearly constant across the entire array. VCSEL emitters 110 of the illumination device produce emitted light 108 with a radiant intensity. The radiant intensity of the emitted light 108 is displayed on the right side of Fig. 6 as a density plot in Watt per steradian [W/sr]. The emitter batches 110 are characterized by driving parameters (and/or spacing between emitters) that enable a gradient illumination profile for the source as well as a constant radiant flux received at pixel level. In the example of Fig. 6, the radiant intensity values have a gradient profile. A more detailed explanation is given with regard to Fig. 9c below.
It should be noted that in the examples of Figs. 5 and 6 the pixel array (right side of Fig. 5) and the radiant intensity field of the VCSELs are shown with a similar aspect ratio. This is, however, only for illustrative reasons. In practice, the aspect ratio of a VCSEL array might be different from the aspect ratio of the sensor.
Alternatively, or in addition to setting the driving parameters for the source, providing a gradient distribution of radiant intensity values may also involve setting the spacing of the source (e.g. VCSEL) emitters either independently or per batch of emitters.
Still alternatively or in addition to setting the driving parameters for the source or setting the spacing of the source emitters, providing a gradient distribution of radiant intensity values may also involve configuring the optical part of the transmitter. For example, a DOE or a diffuser is constructed such that the (collimated) light emanating from the emitters is directed according to the gradient profile. One could have one DOE/diffuser for the entire profile, or one DOE/diffuser that enables the profile on the vertical axis and another that enables the profile on the horizontal axis, or another configuration.
In yet other embodiments the emitter structure is combined with the DOE. Firstly, a coarse set of emitters (spots) could be spaced and driven such that the emitted light corresponds to a sampling of the gradient profile. The optical elements (DOE, diffuser, other) could either multiply the emitter pattern to cover the entire field of illumination (see Figure 4), or multiply and mirror the emitter pattern (see Figure 5) should enough emitters be defined at VCSEL level and should a symmetric profile be considered in the lens at the receiver.
Examples of gradient distributions of radiant intensity
Fig. 7 schematically illustrates an example of a ray distribution across various distances in an (outdoor) automotive scenario. Different ranges (also called Regions of Interest, ROIs, above) correspond to different radiant intensities. A first radiant intensity (the highest radiant intensity) is applied in a long range region. A second radiant intensity (the second highest radiant intensity) is applied in a medium range region. A third radiant intensity (the third highest radiant intensity) is applied in a low range region. A fourth radiant intensity (the lowest radiant intensity) is applied in a very low range region. According to the ray distribution described in Fig. 7, a distance-dependent illumination of the road is provided.
The intensity values given in the example of Fig. 7 are given for illustrative purposes only and are not binding. In alternative embodiments, the values and the number of predefined ranges may vary. Instead of a discrete number of regions, a quasi "continuum" of intensity values may also be applied (down to the level of individual rows/columns or even single illumination units of the illuminator array). Figs. 8a and 8b schematically illustrate an exemplifying placement and inclination of a camera equipped with a gradient illumination profile in an automotive scenario. A ToF camera 81 that has a Field Of View (FOV) of 150° is attached to the bumper of car 80, approximately 60 cm above the road (see Fig. 8b). ToF camera 81 is equipped with an illumination source with a gradient illumination profile as described in the embodiments above. The inclination of ToF camera 81 is given by a pitch of 42° such that, given the FOI, it covers a range of approximately [dmin; dmax] = [10 cm; 10 m] (see Fig. 8a). The Field Of Illumination (FOI) of the source is matched to the Field Of View (FOV) of the lens at the receiver side. That is, in the example given in Figs. 8a and 8b (using a fisheye 150° lens) and by matching the FOI of the illumination source to the FOV of the lens on the receiver side, a vertical FOI of 77° and a horizontal FOI of 108° are applied.
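A minimal sketch of such a discrete range-to-intensity mapping follows; the thresholds and intensity levels are illustrative assumptions only, since the text deliberately leaves the actual values open.

```python
def radiant_intensity(distance_m):
    """Map a scene distance to a discrete radiant intensity level.

    Thresholds and relative intensity values are illustrative
    assumptions; the embodiments allow any number of ranges, down
    to a quasi-continuum of values.
    """
    if distance_m > 5.0:      # long range  -> highest intensity
        return 1.00
    if distance_m > 2.0:      # medium range
        return 0.60
    if distance_m > 0.5:      # low range
        return 0.30
    return 0.10               # very low range -> lowest intensity
```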
Fig. 9a depicts an exemplifying road depth profile registered by a ToF camera equipped with a fisheye 150° lens. The density plot displays the profile of depth values obtained with the fisheye lens from low (dark) to high (light) on a given road plane.
Fig. 9b depicts the radiant flux received by the ToF camera equipped with a fisheye 150° lens at pixel level when employing the classical approach of uniformly illuminating the scene. The density plot displays the profile of radiant flux per pixel values obtained with the fisheye lens from low (dark) to high (light), given a constant radiant source intensity.
Fig. 9c depicts the radiant intensity profile that the illumination source would need to have such that the radiant flux received at pixel level is constant across the entire array. The density plot displays the profile of radiant source intensity values for a constant radiant flux per pixel values obtained with the ToF camera with a fisheye 150° lens. Dark indicates low values, and light indicates large values. The density plot shows the gradient profile in the radiant intensity values of the illumination source. This gradient is not only dependent on the per pixel distance to the road, but also on the lens intrinsics (lens distortion included).
It should be noted that the example profiles in Figs. 9a, b, and c are given for illustrative purposes only. The way in which the radiant intensity values are presented depends strongly on how the ToF system is characterized, specifically on how the radiant flux for a particular pixel is written as a function of a particular radiant intensity value for the light emitted by the source. One way to understand the approach is: say there is a uniform light source for each pixel such that each pixel only receives light originating in its corresponding source; then the corresponding radiant intensity value for the coordinates of that pixel is the radiant intensity of the light emitted by its corresponding source.
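Under this per-pixel point-source model, the required source intensity can be sketched with a simplified inverse-square relation. The lens intrinsics and distortion, which the text notes also shape the gradient, are omitted here, and all numbers are illustrative assumptions.

```python
import numpy as np

def required_intensity(depth, cos_incidence, reflectivity=0.5,
                       target_flux=1.0):
    """Per-pixel radiant intensity an ideal point-like source would need
    so that the flux received at each pixel is constant (cf. Figs. 9a-9c).

    Simplified inverse-square model: received flux is assumed
    proportional to I * reflectivity * cos(incidence) / depth^2.
    """
    return target_flux * depth**2 / (reflectivity * cos_incidence)

# Toy depth map: farther pixels (top row) need more source intensity,
# reproducing the gradient character of Fig. 9c.
depth = np.array([[4.0, 4.0],
                  [1.0, 1.0]])
cos_i = np.full_like(depth, 0.8)
intensity_map = required_intensity(depth, cos_i)
```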
Process
Fig. 10 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCSELs) of an illuminator. At 1001, scene information is acquired with a pre-defined homogeneous illumination flux directed at the scene. At 1002, an intensity image comprising a respective radiant intensity per pixel is determined from the scene information. At 1003, an intensity map of the scene is computed analytically based on the acquired intensity per pixel, the pre-defined illumination flux emitted by the illuminator, the hardware specifications of the source (density of emitters, emitter positioning and spacing on the emitter array, optics such as collimation lens and/or Diffractive Optical Elements, field of illumination, etc.), an assumed distance to the scene, an assumed reflectivity of the scene, and a camera model. The camera model may for example consider aspects of the camera such as sensor parameters (pixel size, resolution), lens model (as produced by lens design software), etc. The camera model may for example comprise effects from lens distortion. In other words, knowing the hardware specifications of the source, the assumed scene parameters, as well as the camera model, one can easily model the relationship between the received radiant flux per pixel (as in Fig. 9b) and the intensity that a point-like source would need to have to generate the radiant flux required on each pixel at the receiver side. Fig. 9c exemplifies such an ideal source radiant intensity map, where each radiant intensity value corresponds to a point-like source that, in turn, generates a radiant flux value in Fig. 9b. This is why the aspect ratio is shown identical between Fig. 9b and Fig. 9c, while in practice that will not be the case; in practice, the aspect ratio of Fig. 9c would be determined by the Horizontal and the Vertical Field Of Illumination (HFOI and VFOI) of the source.
Note that such an emitter-to-pixel mapping would be dependent not only on the assumed scene parameters (distance, reflectivity, ambient light level), but also on the specifics of the HW (such as the optics) of the transmitter and those of the receiver.
At 1004, illumination parameters are defined for each illuminator element (e.g. VCSEL) based on the computed ideal intensity map. Specifically, since in practice it is unfeasible to have a (point-like) source for every pixel, the method described in this embodiment performs a mapping between the FOI of each emitter, the covered sub-area on the assumed scene and the group of pixels that the area corresponds to. The method described here also covers the case in which several pixels are covered by multiple emitters.
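The flow of steps 1001-1003 can be sketched as follows under strong simplifications: the hardware specifics (emitter layout, DOE, collimation, FOI) and the scene/camera model are abstracted into a single per-pixel response, and the target flux is simply taken as the mean response. Function and variable names are assumptions, not from the source.

```python
import numpy as np

def define_illumination_parameters(intensity_image, uniform_flux):
    """Sketch of the Fig. 10 flow.

    1001: `intensity_image` is assumed acquired under a known uniform
          illumination flux `uniform_flux`.
    1002/1003: derive the ideal per-pixel source intensity map so that
          every pixel would receive the same (mean) flux.
    1004 (mapping onto the emitter layout, Fig. 11) is not shown here.
    """
    # Per-pixel scene response (reflectivity, distance and lens combined).
    response = intensity_image / uniform_flux
    # Intensity each ideal point-like source would need for constant flux.
    target_flux = response.mean()
    return target_flux / np.clip(response, 1e-6, None)

intensity_image = np.array([[1.0, 2.0],
                            [4.0, 1.0]])
ideal_map = define_illumination_parameters(intensity_image, uniform_flux=1.0)
```

Dimmer pixels (farther or less reflective areas) are assigned higher source intensities, which is the gradient character of Fig. 9c.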
Fig. 11 schematically describes an exemplifying process of defining illumination parameters for each emitter (e.g. VCSEL or group of VCSELs) of an illuminator based on a computed ideal intensity map. At 1101 the ideal intensity map as obtained by the process in Fig. 10 is sampled/segmented based on the emitters' locations. Each emitter corresponds to a group of pixels on the sensor (see Fig. 5). At 1102, the sampled/segmented ideal intensity map values are used to compute, for example through an interpolation between the sampled values, the illumination parameters for each corresponding emitter so that the illuminator, when applying the illumination parameters, will provide illumination tailored for the scene. It should be noted, however, that the derivation of the emitter driving parameters may be dependent on the specifics of the HW/optics of the transmitter.
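A minimal sketch of steps 1101/1102, assuming each emitter maps to a single sampled pixel coordinate and that the sampled value is used directly as the drive level; a real derivation would also fold in the transmitter HW/optics and may interpolate between samples.

```python
import numpy as np

def emitter_parameters(ideal_map, emitter_rows, emitter_cols):
    """Sample the ideal intensity map at the pixel coordinates covered
    by each emitter (step 1101) and return the sampled value as that
    emitter's drive level (step 1102). The coordinates and the direct
    sample-to-drive mapping are illustrative assumptions.
    """
    return ideal_map[np.ix_(emitter_rows, emitter_cols)]

# Toy 4x4 ideal intensity map, sampled for a 2x2 grid of emitters.
ideal = np.arange(16.0).reshape(4, 4)
drive_levels = emitter_parameters(ideal, emitter_rows=[0, 2],
                                  emitter_cols=[1, 3])
```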
Fig. 12 schematically describes an exemplifying process of determining depth information of a scene based on illumination parameters that are optimized for the scene. At 1201, scene information is acquired with tailored illumination based on defined illumination parameters for the respective illuminator emitters as obtained by the process in Fig. 11. At 1202, the depth information of the scene is computed from the scene information.
According to the embodiments, different inclinations will apply different gradient illumination profiles. That is, should a plane surface (at which the camera looks) change its inclination, then the illumination gradient will be changed as well such that the flux at the sensor side remains as constant as possible.
This can be achieved in several ways. For example, the system may rely on an algorithm including a feedback loop. Such an algorithm starts by acquiring one frame of scene information with a uniform illumination profile (see 1001 in Fig. 10). The radiant flux recorded on the sensor is then analyzed (see 1002 and 1003 in Fig. 10) and the information is sent back to the illumination source where the profile is modified accordingly (see 1004 in Fig. 10). Several frames are then acquired with the computed gradient profile (see 1201 in Fig. 12). As the scene may change, periodically an extra frame is acquired with uniform illumination profile as set out in Fig. 10, and a new gradient profile is computed based on the new radiant flux information recorded on the sensor side.
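The feedback loop just described can be sketched as below; all callables (`acquire_frame`, `set_profile`, `compute_profile`) are assumed interfaces standing in for the hardware and the Fig. 10 computation, not part of the source text.

```python
def illumination_feedback_loop(acquire_frame, set_profile, uniform_profile,
                               compute_profile, n_frames=10, refresh_every=5):
    """Sketch of the feedback loop: start with a uniform profile,
    compute a gradient profile from the recorded flux (1001-1004),
    acquire frames with it (1201), and periodically re-acquire one
    uniform frame to refresh the profile as the scene changes.
    """
    set_profile(uniform_profile)
    profile = compute_profile(acquire_frame())     # 1001-1004
    set_profile(profile)
    frames = []
    for i in range(n_frames):
        if i and i % refresh_every == 0:           # periodic refresh
            set_profile(uniform_profile)
            profile = compute_profile(acquire_frame())
            set_profile(profile)
        frames.append(acquire_frame())             # 1201
    return frames
```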
To reduce the computation load that results from on-the-fly operation, according to a further embodiment, an offline calibration stage is foreseen. This offline calibration may for example imply computing the illumination gradient for planes of different inclinations, i.e. performing an inclination sweep and pre-computing or estimating the deltas/ changes that need to be applied to the gradient when the scene inclination changes.
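One way to sketch such an offline inclination sweep is a pre-computed lookup table keyed by inclination; the nearest-neighbour lookup and the per-inclination solver `compute_gradient` are illustrative assumptions (the text also mentions pre-computing deltas to a base profile instead).

```python
def build_inclination_lut(compute_gradient, inclinations_deg):
    """Offline calibration sketch: sweep plane inclinations and
    pre-compute one gradient profile per inclination, so at run time
    only a lookup (or interpolation between neighbours) is needed.
    """
    return {round(a, 1): compute_gradient(a) for a in inclinations_deg}

def lookup_gradient(lut, inclination_deg):
    """Return the profile of the nearest pre-computed inclination."""
    key = min(lut, key=lambda a: abs(a - inclination_deg))
    return lut[key]

# Toy solver: the "profile" is just twice the inclination angle.
lut = build_inclination_lut(lambda angle: angle * 2.0, [0.0, 10.0, 20.0])
```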
The processes described above allow for tailoring the source parameters such that as few pixels as possible saturate at close distances and as many pixels as possible have reliable information (i.e. high SNR).
Spot ToF illuminator
In particular, the embodiments described above are applicable in the context of a Spot ToF illuminator (dot pattern-based illuminators or other patterns).
In a so-called "spot illuminator", the beams produced by the controllable nodes of the illuminator are not overlapping. This results in spot beams. The embodiments described below in more detail provide an optimized illumination system for Spot Time-of-Flight enabled cameras ensuring a gradient illumination pattern. That is, the method embodiments described here can be combined with a Spot ToF solution to improve ambient light performance and mitigate multipath. The latter is particularly useful when an object comes into the field of view (e.g. another car).
Fig. 13 schematically shows a spot ToF imaging system which produces a spot pattern on a scene. The spot ToF imaging system comprises an illumination unit 110, here a spot illuminator, which produces a pattern of spots 202 on a scene 107 comprising objects 203 and 204. The spot illuminator may for example be configured according to the principles of the illuminator of Fig. 3 above, where, according to the spot illuminator, the beams produced by the controllable nodes of the illuminator are not overlapping. An iToF camera 102 captures an image of the spot pattern on the scene 107. The pattern of light spots projected onto the scene 107 by illumination unit 110 results in a corresponding pattern of light spots in the confidence image and depth image captured by the pixels of the image sensor (102a in Fig. 1) of iToF camera 102. The light spots appear in the confidence image produced by iToF camera 102 as a spatial light pattern including high-intensity areas 201 (the light spots), and low-intensity areas 202.
In the embodiment of Fig. 13 the illumination unit 110 is positioned in the plane of the image sensor (102a in Fig. 1) of iToF camera 102. This plane is also called ToF plane. The illumination unit 110 and the iToF camera 102 are positioned at a distance B from each other. This distance B is called baseline. The scene 107 has distance Z from baseline B. In the embodiment of Fig. 13, for simplification, only a single distance Z of the scene 107 is shown. However, every object 203, 204 or object point within the scene 107 may have an individual distance Z from baseline B. The depth image of the scene captured by ToF camera 102 defines a depth value for each pixel of the depth image and thus provides depth information of scene 107 and objects 203, 204.
In the embodiment of Fig. 13, the light spots produced by illumination unit 110 (e.g. a spot illuminator, an edge emitting laser, a LED, etc.) are shown as dots and they have a circular shape. The embodiments are, however, not restricted to such dots. In alternative embodiments, the light spots produced by illumination unit 110 may have a rectangular or square shape or any other regular or irregular shape. In particular, the light spots may have a spatial light intensity profile, for example, a Gaussian light intensity profile or the like. Still further, in the embodiment of Fig. 13, the light spots produced by illumination unit 110 are shown as a regular grid pattern. However, in alternative embodiments, the spot pattern produced by illumination unit 110 may be an irregular pattern.
Typically, the pattern of light spots projected onto the scene by a spot illuminator results in a corresponding pattern of light spots in the confidence image and depth image captured by the pixels of the image sensor. The spots appear in the confidence image produced by the iToF camera as a spatial light pattern including high-intensity areas 201 (the light spots), and low-intensity areas. In the following, the high-intensity area 201 defining a spot is also denoted as "spot region".
In the case of Spot ToF, the emitter configuration (e.g. driving parameters or spacing as described with regard to Figs. 4, 5 and 6 above) combines both the spot profile and the gradient profile as described with regard to Figs. 4 to 12 above.
Combination of full-field ToF with spot ToF
In yet other embodiments, spot ToF is combined with full-field (flooded light). The combination of spots and full-field (flooded light) may for example be achieved by combining two or more illuminators, a dedicated spot illuminator and a dedicated flooded light illuminator, or by applying a dual illuminator with the capability to switch from a spot ToF state to a full-field (flooded light) state, as described below.
A combination of spots and full-field (flooded light) may be achieved by applying a dual illuminator with the capability to switch from a spot ToF state to a full-field (flooded light) state. In a dual illuminator, in a first state (see Fig. 14a below), the spots and the optical stack at the transmitter side are defined according to a sampling of the gradient profile. In a second state (see Fig. 14b below), where full-field is applicable, the complete gradient profile can be achieved. The switching between these two states is here also called dual mode of operation.
Figs. 14a and 14b show examples of creating a gradient illumination with a dual illuminator. In particular, these figures show exemplifying light distributions obtained by an illumination device which supports a dual mode of operation in which the illumination device can sequentially switch from a spot to a full-field state. The dual mode of operation comprises a spot state (Fig. 14a) and a flooded state (Fig. 14b). In the dual mode of operation, the illumination device switches between the spot state and the flooded state. The spot state displayed in Fig. 14a is created by collimating the light of a VCSEL array of given emitters. The collimated light coming from the VCSEL array is then odd-multiplied by a DOE (in Fig. 14a a 1x3 DOE), where the DOE is a diffractive optical element that acts as a beam splitter and that is used to multiply the VCSEL array dots vertically/horizontally. This creates tiles/copies of the VCSEL array spots. The three rectangles shown in Fig. 14a indicate the three DOE regions of the 1x3 DOE. The VCSEL emitters are assumed to be driven by a signal creating the gradient illuminator as described with regard to the embodiments of Figs. 4 to 12 above, i.e., some VCSEL emitters will receive different driving currents in a gradient-like manner (per line or bigger batches of emitters). The dashed horizontal lines indicate the borders of three exemplifying regions of different intensity levels. The illumination profile changes in the vertical direction as indicated by the arrow on the right. The flooded gradient illuminator as displayed in Fig. 14b is achieved by defocusing the spots enough to create a sufficiently uniform light. This defocusing may for example be achieved with a diffuser or with a DOE. As in Fig. 14a, the three rectangles indicate the three DOE regions of a 1x3 DOE. Another example for defocusing is using LCDs or a defocusing lens.
The dashed horizontal lines indicate the borders of three exemplifying regions of different intensity levels. The illumination profile changes in the vertical direction as indicated by the arrow on the right.
Both the characteristics of the DOE and those of the (receiving) lens intrinsics (including distortion parameters) may be accounted for. The latter is enabled by incorporating in the formula of the radiant flux per pixel the angle between each pixel's unit ray (projection vector) and the normal of the road. We note that the set of projection vectors (one per pixel) can be computed based on a lens model and a set of lens intrinsics, via a lens design file and ray tracing methods in an optical system design file, or other alternative methods. For example, each projection vector is determined in order to make the transformation between the value registered at the pixel coordinate and the corresponding value in 3D space.
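The incidence-angle factor mentioned above can be sketched as follows, assuming unit-length projection vectors that are obtained elsewhere from the lens model and intrinsics; the example geometry is purely illustrative.

```python
import numpy as np

def incidence_cosines(projection_vectors, road_normal):
    """Cosine of the angle between each pixel's unit projection vector
    and the road normal -- the factor folded into the per-pixel radiant
    flux formula. `projection_vectors` is an (N, 3) array of assumed
    unit-length rays; the absolute value ignores normal orientation.
    """
    n = road_normal / np.linalg.norm(road_normal)
    return np.abs(projection_vectors @ n)

# Two illustrative unit rays: one along the normal, one tilted by 0.5 rad.
rays = np.array([[0.0, 0.0, 1.0],
                 [0.0, np.sin(0.5), np.cos(0.5)]])
cos_i = incidence_cosines(rays, np.array([0.0, 0.0, 1.0]))
```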
Implementation
Fig. 15 schematically describes an embodiment of an iToF camera that can implement an illuminator providing a gradient intensity profile. The electronic device 1200 comprises a CPU 1201 as processor. The electronic device 1200 further comprises an iToF sensor 1206 (e.g. sensor 110 of Figs. 1 and 2; or the illuminator device of Fig. 3) connected to the processor 1201. The processor 1201 may for example implement a process of defining illumination parameters as described in Figs. 10 and 11 above, as well as a process of determining depth information of a scene based on illumination parameters that are optimized for the scene as described in Fig. 12. The electronic device 1200 further comprises a user interface 1207 that is connected to the processor 1201. This user interface 1207 acts as a man-machine interface and enables a dialogue between an administrator and the electronic system. For example, an administrator may make configurations to the system using this user interface 1207. The electronic device 1200 further comprises a Bluetooth interface 1204, a WLAN interface 1205, and an Ethernet interface 1208. These units 1204, 1205, and 1208 act as I/O interfaces for data communication with external devices. For example, video cameras with Ethernet, WLAN or Bluetooth connection may be coupled to the processor 1201 via these interfaces 1204, 1205, and 1208. The electronic device 1200 further comprises a data storage 1202, which may be the calibration storage described with regard to Fig. 7, and a data memory 1203 (here a RAM). The data storage 1202 is arranged as a long-term storage, e.g. for storing the algorithm parameters for one or more use-cases, for recording iToF sensor data obtained from the iToF sensor 1206, and the like. The data memory 1203 is arranged to temporarily store or cache data or computer instructions for processing by the processor 1201.
It should be noted that the description above is only an example configuration. Alternative configurations may be implemented with additional or other sensors, storage devices, interfaces, or the like.
It should be noted that the embodiments are not constrained by a particular lens, distance to the scene or inclination of the camera with respect to the plane of the road. The optical power allocation and, implicitly, the values of the radiant intensity of the source can be profiled for any type of lens, camera-to-road distance and any inclination of the camera, while still respecting the gradient profile.
It should also be recognized that the embodiments describe methods with an exemplary ordering of method steps. The specific ordering of method steps is, however, given for illustrative purposes only and should not be construed as binding.
It should also be noted that the division of the electronic device of Fig. 15 into units is only made for illustration purposes and that the present disclosure is not limited to any specific division of functions in specific units. For instance, at least parts of the circuitry could be implemented by a respectively programmed processor, field programmable gate array (FPGA), dedicated circuits, and the like.
All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example, on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software.
In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure.
Note that the present technology can also be configured as described below:
(1) An illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.
(2) The illumination device of (1), wherein the illumination device is configured to independently drive the emitters or groups of emitters of the illumination device in order to provide the gradient distribution of radiant intensity values across the emitting area.
(3) The illumination device of (1) or (2), wherein the illumination device comprises individual DOEs for emitters or groups of emitters, the DOE's being configured to provide the gradient distribution of radiant intensity values across the emitting area.
(4) The illumination device of any one of (1) to (3), wherein the illumination device is configured to apply different gradient illumination profiles for different scenes.
(5) The illumination device of any one of (1) to (4), wherein the illumination device is configured to apply different gradient illumination profiles for different inclinations.
(6) The illumination device of any one of (1) to (5), wherein the illumination device is configured to implement an algorithm including a feedback loop in order to determine the intensity profile of the emitters.
(7) The illumination device of any one of (1) to (6), wherein the emitters are configured to generate Spot ToF beams with a gradient illumination pattern.
(8) The illumination device of any one of (1) to (6), wherein the emitters are configured to generate full-field ToF beams with a gradient illumination pattern.
(9) The illumination device of any one of (1) to (6), wherein the illumination device is a dual illuminator with the capability to switch from a spot ToF state to a full-field state.
(10) The illumination device of any one of (1) to (9), configured to cooperate with a processor, the processor being configured to determine an intensity image of a scene, compute an intensity map of the scene based on the intensity image, and define illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
(11) The illumination device of (10), wherein the processor is configured to compute the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
(12) The illumination device of (10) or (11), wherein the processor is configured to sample/segment the intensity map based on the emitters' locations.
(13) A ToF camera comprising the illumination device of any one of (1) to (12).
(14) A computer-implemented method comprising determining an intensity image of a scene with an illumination device comprising multiple emitters distributed over an emitting area; computing an intensity map of the scene based on the intensity image, and defining illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
(15) The computer-implemented method of (14), further comprising computing the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.

Claims
1. An illumination device comprising multiple emitters distributed over an emitting area, wherein the intensity profile of the emitters is characterized by a gradient distribution of radiant intensity values across the emitting area.
2. The illumination device of claim 1, wherein the illumination device is configured to independently drive the emitters or groups of emitters of the illumination device in order to provide the gradient distribution of radiant intensity values across the emitting area.
3. The illumination device of claim 1, wherein the illumination device comprises individual DOEs for emitters or groups of emitters, the DOE's being configured to provide the gradient distribution of radiant intensity values across the emitting area.
4. The illumination device of claim 1, wherein the illumination device is configured to apply different gradient illumination profiles for different scenes.
5. The illumination device of claim 1, wherein the illumination device is configured to apply different gradient illumination profiles for different inclinations.
6. The illumination device of claim 1, wherein the illumination device is configured to implement an algorithm including a feedback loop in order to determine the intensity profile of the emitters.
7. The illumination device of claim 1, wherein the emitters are configured to generate Spot ToF beams with a gradient illumination pattern.
8. The illumination device of claim 1, wherein the emitters are configured to generate full-field ToF beams with a gradient illumination pattern.
9. The illumination device of claim 1, wherein the illumination device is a dual illuminator with the capability to switch from a spot ToF state to a full-field state.
10. The illumination device of claim 1, configured to cooperate with a processor, the processor being configured to determine an intensity image of a scene, compute an intensity map of the scene based on the intensity image, and define illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
11. The illumination device of claim 10, wherein the processor is configured to compute the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
12. The illumination device of claim 10, wherein the processor is configured to sample/segment the intensity map based on the emitters' locations.
13. A ToF camera comprising the illumination device of claim 1.
14. A computer-implemented method comprising determining an intensity image of a scene with an illumination device comprising multiple emitters distributed over an emitting area; computing an intensity map of the scene based on the intensity image, and defining illumination parameters for respective emitters or groups of emitters of the illumination device based on the computed intensity map.
15. The computer-implemented method of claim 14, further comprising computing the intensity map of the scene based on the intensity per pixel in the intensity image, a pre-defined illumination flux emitted by the emitters, and a camera model.
PCT/EP2022/057341 2021-03-26 2022-03-21 Illumination device and method for time-of-flight cameras WO2022200269A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22716406.8A EP4314881A1 (en) 2021-03-26 2022-03-21 Illumination device and method for time-of-flight cameras

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP21165208 2021-03-26
EP21165208.6 2021-03-26

Publications (1)

Publication Number Publication Date
WO2022200269A1 true WO2022200269A1 (en) 2022-09-29

Family

ID=75252402

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/057341 WO2022200269A1 (en) 2021-03-26 2022-03-21 Illumination device and method for time-of-flight cameras

Country Status (2)

Country Link
EP (1) EP4314881A1 (en)
WO (1) WO2022200269A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019020395A1 (en) * 2017-07-24 2019-01-31 Lumileds Holding B.V. Vcsel assembly
US20190195991A1 (en) * 2017-12-22 2019-06-27 Denso Corporation Distance measuring apparatus, recognizing apparatus, and distance measuring method
WO2020026615A1 (en) 2018-08-01 2020-02-06 ソニーセミコンダクタソリューションズ株式会社 Light source device, imaging device, and sensing module
US20200064642A1 (en) * 2018-08-27 2020-02-27 Lumentum Operations Llc Lens array to disperse zero-order beams of an emitter array on a diffractive optical element
WO2020187175A1 (en) * 2019-03-21 2020-09-24 深圳市光鉴科技有限公司 Light projection system and light projection method


Also Published As

Publication number Publication date
EP4314881A1 (en) 2024-02-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22716406

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022716406

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022716406

Country of ref document: EP

Effective date: 20231026