WO2024132463A1 - Device and method to detect refractive objects
- Publication number: WO2024132463A1 (application PCT/EP2023/083922)
- Authority: WIPO (PCT)
Classifications
- G01S17/48—Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S7/4082—Means for monitoring or calibrating by simulation of echoes using externally generated reference signals, e.g. via remote reflector or transponder
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N13/246—Calibration of cameras
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- Applications of 3D-sensors may include facial recognition and authentication in modern smartphones, factory automation for Industry 5.0, systems for electronic payments, augmented reality (AR), virtual reality (VR), internet-of-things (IoT) environments, and the like.
- Various technologies have been developed to gather three-dimensional information of a scene, for example based on time-of-flight of emitted light, based on structured light patterns, based on stereo vision, etc. Improvements in 3D-sensors may thus be of particular relevance for the further advancement of several technologies.
- FIG.1A shows depth sensing in a scenario without a refractive surface in a schematic representation, according to various aspects
- FIG.1B shows depth sensing in a scenario with a refractive surface in a schematic representation, according to various aspects
- FIG.2A shows an imaging device in a schematic representation, according to various aspects
- FIG.2B shows an imaging device in a schematic representation, according to various aspects
- FIG.3A and FIG.3B each shows a schematic flow diagram of a method of detecting transparent objects with depth measurements
- FIG.4A and FIG.4B each shows a depth sensor configured to carry out an optical path length measurement, in a schematic representation, according to various aspects
- FIG.4C and FIG.4D each shows a depth sensor configured to carry out a disparity-based depth measurement, in a schematic representation, according to various aspects
- FIG.5A, FIG.5B, and FIG.5C each shows a respective imaging device in which a first depth sensor and a second depth sensor share at least one common component in a schematic representation, according to various aspects;
- FIG.6A and FIG.6B each shows a respective imaging device in which a first depth sensor and a second depth sensor share at least one common component in a schematic representation, according to various aspects;
- FIG.7A shows a calibration device in a schematic representation, according to various aspects
- FIG.7B and FIG.7C each shows a schematic flow diagram of a method of calibrating depth sensors, according to various aspects.
- FIG.8A, FIG.8B, and FIG.8C illustrate the results of a simulation showing the principle of the differential measurement to detect objects having at least a refractive portion.
Description
- Active 3D-sensing may be implemented via structured light, active stereo vision, or time-of-flight systems.
- each of these techniques allows generating or reconstructing three-dimensional information about a scene, e.g., as a three-dimensional image, a depth map, or a three-dimensional point cloud.
- 3D-sensing allows determining information about objects present in the scene, such as their position in the three-dimensional space, their shape, their orientation, and the like.
- Exemplary applications of active 3D-sensing include their use in automotive, e.g., to assist autonomous driving, and in portable devices (e.g., smartphones, tablets, and the like) to implement various functionalities such as face or object recognition, autofocusing, gaming activities, etc.
- the present disclosure may be based on the realization that a transparent object may introduce different types of distortions for different detection methods.
- a transparent object may thus influence in a different manner the results of different detection methods, in particular of detection based on measuring the optical path length and detection based on disparity-calculations.
- the strategy described herein may thus be based on analyzing the differences in the detection results of different detection methods to determine the presence (and accordingly other properties, such as position, shape, etc.) of transparent objects in the scene.
- the strategy described herein may be based on exploiting the different inaccuracy introduced by transparent objects on two different sensing technologies (e.g., disparity-map based and optical-path length based), to highlight them by means of a differential depth measurement in a sensor integrating both technologies.
- This approach enables addressing such objects with an embedded solution, which may thus provide a compact and robust device with applications both in industry (e.g., robotic manipulation of objects) and in consumer market (e.g., for eye-tracking).
- an imaging device may include a processor configured to: compare a first result of a first depth measurement with a second result of a second depth measurement, wherein the first depth measurement is carried out via an optical path length measurement, wherein the second depth measurement is carried out via a disparity-based depth measurement, wherein the first depth measurement and the second depth measurement are carried out in a field of view common to the first depth measurement and the second depth measurement; and determine, based on a result of the comparison, whether the common field of view includes at least one object having a refractive portion.
- a method may include: comparing a first result of a first depth measurement with a second result of a second depth measurement, wherein the first depth measurement is carried out via an optical path length measurement, wherein the second depth measurement is carried out via a disparity-based depth measurement, wherein the first depth measurement and the second depth measurement are carried out in a field of view common to the first depth measurement and the second depth measurement; and determining, based on a result of the comparison, whether the common field of view includes at least one object having a refractive portion.
- the first depth measurement and the second depth measurement may provide the same result (e.g., after calibration).
- the presence of a transparent object may instead introduce an error in the measurements that is different for the first depth measurement and the second depth measurement. Determining in which region(s) of the field of view the two depth measurements provide different results may thus allow determining that in such region(s) at least one transparent object may be present.
- the approach described herein may work optimally in the specific case of a bulk, filled object (such as a lens or a water-filled container) in close proximity to a background Lambertian target.
- the first depth measurement and the second depth measurement may generate, as result, a respective depth map of the common field of view.
- the comparison of the first result with the second result may provide a differential depth map representing, for each coordinate of the common field of view, a difference between the depth values of the depth maps.
- the differential depth map may thus provide a compact and easy to process representation of the comparison of the depth measurements.
- any comparison between two quantities that may be expressed by the numerical difference between the two quantities may also be expressed by a different quantity, such as their ratio. It is therefore understood that reference herein to a result “representing a difference” or to a “differential quantity” may include different types of representation, e.g. a direct numerical difference, a ratio, or other ways of representing the result of a comparison between two quantities.
- Illustratively, there may be various ways of representing a result that univocally defines an equality condition (if present) and that potentially allows quantitative comparisons.
- the specific representation may be adapted according to the desired processing or for any other reason related to the specific application. As an example, the representation most convenient for numerical processing may be selected, as illustrated below.
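- As a small illustration of this point (a sketch, not part of the original disclosure; the function name and the numpy-based implementation are assumptions), the same comparison may be encoded either as a difference, where equality maps to 0, or as a ratio, where equality maps to 1:

```python
import numpy as np

def compare_depths(z_path, z_disp, mode="difference"):
    """Compare two depth values (or arrays of depth values).

    mode="difference": equality is indicated by a value of 0.
    mode="ratio":      equality is indicated by a value of 1.
    Both representations define the equality condition univocally
    and allow quantitative comparisons.
    """
    z_path = np.asarray(z_path, dtype=float)
    z_disp = np.asarray(z_disp, dtype=float)
    if mode == "difference":
        return z_path - z_disp
    if mode == "ratio":
        return z_path / z_disp
    raise ValueError(f"unknown mode: {mode}")
```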
- the imaging device may include a first depth sensor configured to carry out the optical path length measurement, and a second depth sensor configured to carry out the disparity-based depth measurement.
- the first depth sensor and the second depth sensor may share one or more components, thus allowing a space- and cost-efficient arrangement of the imaging device.
- the first depth sensor may be a self-mixing interferometer including a light source (e.g., a projector) into which light from the field of view is reflected to cause a modulation of the emitted light (in amplitude and frequency).
- the light source of the self-mixing interferometer may additionally be used as light source for the disparity-based depth measurement, thus providing an efficient utilization of the device components.
- the first depth sensor and the second depth sensor may be calibrated with respect to one another.
- the calibration may be carried out upon deployment of the imaging device (e.g., after fabrication, for example at the factory) and/or may be repeated “in the field”, e.g. at regular time intervals or in correspondence with predefined events.
- as an example, for an imaging device mounted in a vehicle, calibration may be repeated when the vehicle undergoes maintenance (e.g., in a garage), or before a trip.
- the sensor-to-sensor calibration allows gathering an understanding of how the results should match, so as to enable identifying differences in the result when a transparent object is present in the scene.
- an imaging device may include: a first depth sensor configured to carry out an optical path length measurement; a second depth sensor configured to carry out a disparity-based depth measurement; and a processor configured to: control the first depth sensor to carry out the optical path length measurement in a field of view including one or more predefined objects; control the second depth sensor to carry out the disparity-based depth measurement in the field of view including the one or more predefined objects; and generate calibration data representative of one or more calibration parameters for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
- a calibration method may include: carrying out a first depth measurement via an optical path length measurement in a field of view including one or more predefined objects; carrying out a second depth measurement via a disparity-based depth measurement in the field of view including the one or more predefined objects; and generating calibration data representative of one or more calibration parameters for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
- the one or more predefined objects may be disposed at known distances from the first and second depth sensors so as to allow calibrating for any inaccuracy that may be introduced by the emitter optics and/or receiver optics of the sensors.
- the one or more predefined objects may not include any transparent object, so as to allow calibrating the sensors in an “error free” scenario in which the two sensors should ideally provide the same depth results.
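- The following is a minimal sketch of how such calibration data might be generated, assuming a simple global scale-and-offset model relating the two depth maps over a refractive-object-free calibration scene (the model, the function names, and the use of a least-squares fit are illustrative assumptions, not the patent's prescribed procedure):

```python
import numpy as np

def generate_calibration_data(depth_optical_path, depth_disparity):
    """Fit parameters (scale, offset) such that
    scale * depth_optical_path + offset ~= depth_disparity
    over a calibration scene containing only non-refractive,
    predefined objects at known distances."""
    x = np.asarray(depth_optical_path, dtype=float).ravel()
    y = np.asarray(depth_disparity, dtype=float).ravel()
    valid = np.isfinite(x) & np.isfinite(y)        # ignore missing depth values
    A = np.stack([x[valid], np.ones(valid.sum())], axis=1)
    (scale, offset), *_ = np.linalg.lstsq(A, y[valid], rcond=None)
    return {"scale": scale, "offset": offset}

def apply_calibration(depth_optical_path, calib):
    """Map optical-path depth values onto the disparity-based depth scale."""
    return calib["scale"] * np.asarray(depth_optical_path) + calib["offset"]
```

After calibration, the two depth maps of a refractive-object-free scene should ideally coincide, so that any residual mismatch observed later can be attributed to refractive objects.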
- FIG.1A and FIG.1B illustrate depth sensing in a scenario 100a without a refractive surface (FIG.1A), and in a scenario 100b with a refractive surface (FIG.1B), in a schematic representation, according to various aspects.
- disparity-based methods, such as structured light or stereo vision, identify depth by triangulation of illuminator rays with camera rays, or of rays from two different cameras, respectively, using the pinhole camera model.
- the position of a point on the illuminator (e.g., an emitter pixel emitting a light ray) with respect to the illuminator center is denoted as di and with the numeral reference 102.
- di may be intended as a projection point representative of the propagation angle from the illuminator to the point P.
- di may be representative of the ray angle from the illuminator to P, though the specific factor relating the angle of the ray to di would be given, for example, by the camera properties.
- the position of a point on the camera (e.g., a receiver pixel receiving the emitted light ray) with respect to the camera center is denoted as de and with the numeral reference 104.
- the position of an object (at a distance Zw, 106) on which the emitted light is reflected is denoted as P and with the numeral reference 108.
- the baseline illustratively the (center-to-center) distance between the illuminator and the camera, is denoted as BL and with the numeral reference 110.
- the focal length f is denoted with the numeral reference 112.
- a depth value for the object may be derived according to equation (1) as,
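- The expression of equation (1) is not reproduced in the text above; a standard pinhole-model triangulation relation consistent with the quantities defined here (baseline BL, focal length f, and the offsets d_i and d_e measured from the respective illuminator and camera centers, whose sum gives the disparity) would read:

$$ Z_w = \frac{BL \cdot f}{d_i + d_e} \qquad (1) $$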
- the quantity di actually used for the depth triangulation is typically acquired by the imaging camera, so that the effective focal length appearing in equation (1) is the one of the camera, regardless of the illuminator angular field of view.
- any rectification of the raw data based on camera calibration (especially in stereo vision) prior to depth acquisition, as well as a calibrated projector-view image (far-field acquisition) in structured-light methods, may be considered.
- An optical-path-based method may instead be based on a readout of the signal phase or of time-of-flight information to extract the round-trip optical path to the object P and obtain the object range (vector distance).
- the round-trip optical path may be S1+S2, where S1 is the optical path towards the object P, and S2 is the optical path back towards the sensor.
- the signal phase delay for a given wave vector k carrying the modulation may be expressed via equation (3) as Δφ = k·(S1 + S2), and may be used to extract information on the range of the point P, according to calculations known in the art.
- the depth Z w can be extracted from the optical path, and leads in principle to the same depth as the disparity-based method.
- FIG.1B illustrates a scenario in which a lens-like transparent object is present and embeds the same target point P, 108, shown in FIG.1A.
- the lens-like transparent object may have a refractive surface 114, which causes a distortion in the behavior of the light, both in the path towards the target point P, and in the path back towards the sensor.
- the illuminator ray path, as well as the location of the image of P on the camera sensor will be distorted in a way that is dependent on the refractive surface geometry and the refractive index value, according to Snell’s law at the interface. This will impact the disparity value, and correspondingly the depth value.
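- For reference, Snell's law at the interface between two media of refractive indices n_1 and n_2 relates the incidence and refraction angles as:

$$ n_1 \sin\theta_1 = n_2 \sin\theta_2 $$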
- a variation in the position of the point on the illuminator (denoted as di’, 102b) and of the point on the camera (denoted as de’, 104b) may be observed.
- in the case of structured light, the position di of a specific pattern feature (e.g., a dot center) is known, so that the presence of a refractive surface would result in a variation of only the position of the point projection on the camera (de’), still impacting the disparity value.
- the optical path length will also be impacted by propagation in the medium.
- the speed of light and the wave vector in the medium are altered by a factor equal to the refractive index, according to the following equations (4) and (5),
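- Equations (4) and (5) are not reproduced in the text above; based on the statement that the speed of light and the wave vector in the medium are altered by a factor equal to the refractive index n, they presumably correspond to:

$$ v_{med} = \frac{c}{n} \qquad (4) \qquad\qquad k_{med} = n \cdot k \qquad (5) $$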
- the effective total optical path length, Λ, may thus be given by the optical path outside the transparent object and the optical path inside the transparent object, according to equation (6).
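- Equation (6) is likewise not reproduced; denoting by L_in the geometric length of the round-trip path lying inside the refractive object (L_in is an assumed symbol, not taken from the original text), the statement above corresponds to:

$$ \Lambda = \left(S_1 + S_2 - L_{in}\right) + n \cdot L_{in} \qquad (6) $$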
- the propagation in the transparent object thus causes a delay in the arrival time of the pulse in the case of direct time-of-flight, or a corresponding increase of the signal phase delay.
- the refractive medium as well as the interface geometry will introduce an error in the depth estimation that is different for the two techniques (except for specific geometries).
- an object including at least one refractive surface 114 may introduce an error for the optical path length measurement due to the different optical path when light propagates in the object (illustratively, when light enters the transparent surface and is reflected back passing again through the transparent surface).
- the object including at least one refractive surface 114 may also introduce an error for the disparity-based measurement due to a tilting of the light rays with respect to the scenario in FIG.1A.
- the present disclosure may be based on the realization that such errors may be exploited, rather than being eliminated with post-processing techniques, to gather information about such usually problematic objects, as discussed in further detail below.
- FIG.2A shows an imaging device 200 in a schematic representation, according to various aspects.
- the imaging device 200 may be configured according to the differential approach described herein.
- the imaging device 200 may include a processor 202 and storage 204 (e.g., one or more memories) coupled to the processor 202.
- the storage 204 may be configured to store instructions (e.g., software instructions) executed by the processor 202.
- the instructions may cause the processor 202 to perform a method 210 of detecting the presence of transparent objects in a scene, described in further detail below. Aspects described with respect to a configuration of the processor 202 may also apply to the method 210, and vice versa.
- An “imaging device” may also be referred to herein as “detection device”, or “3D-sensor”.
- the imaging device 200 may be implemented for any suitable three-dimensional sensing application.
- the imaging device 200 may be used for eye tracking.
- an eye tracker may include one or more imaging devices 200.
- the differential approach proposed herein has been found particularly suitable to track the movement of eyes, in view of its capability of providing correct detection even in the presence of the transparent portion of the corneal lens.
- the applications of the imaging device 200 and, in general, of the strategy described herein, are not limited to eye tracking.
- Other exemplary application scenarios may include the use of the imaging device in a vehicle, in an indoor monitoring system, in a smart farming system, in an industrial robot, and the like.
- the processor 202 may be configured to compare a first result 212 of a first depth measurement with a second result 214 of a second depth measurement.
- the first depth measurement and the second depth measurement may be of two different types, each influenced in a different manner by the presence of transparent objects in the imaged scene.
- one of the first depth measurement or the second depth measurement may be an optical path length measurement, and the other one of the first depth measurement or the second depth measurement may be a disparity-based depth measurement.
- in the following, reference is made to the first depth measurement being an optical path length measurement, and to the second depth measurement being a disparity-based depth measurement.
- a “depth measurement” may be a measurement configured to deliver three-dimensional information (illustratively, depth information) about a scene, e.g. a measurement capable of providing three-dimensional information about the objects present in the scene.
- a “depth measurement” may thus allow determining (e.g., measuring, or calculating) three-dimensional coordinates of an object present in the scene, illustratively a horizontal-coordinate (x), vertical-coordinate (y), and depth-coordinate (z).
- a “depth measurement” may be illustratively understood as a distance measurement, e.g. a measurement configured to determine a distance at which an object is located with respect to a reference point (e.g., with respect to the imaging device 200).
- the distance at which an object is located with respect to a reference point (e.g., with respect to the imaging device 200) considering the three-dimensional coordinates may in general be referred to as “range”.
- Another way of expressing such distance may be as a “depth”, which may be or include the distance at which the object is located with respect to a reference plane (e.g., the plane orthogonal to the optical axis of the imaging device 200, e.g., the plane passing through the optical aperture of the imaging device 200).
- depth may be or include the projection of the range along the device optical axis (Z).
- references herein to a “range” may apply in a corresponding manner to a “depth” (of the object), and vice versa.
- a “depth” may be converted to a corresponding “range” and a “range” may be converted to a corresponding “depth” according to calculations known in the art, so that the aspects described in relation to a “range” may be correspondingly valid for a “depth”, and vice versa.
- a disparity-based method may directly deliver, as a result, a depth
- an optical path-based method may deliver, as a result, a “range” (assuming that the baseline is negligible, or approximately the range if the baseline is considered).
- An “optical path length measurement” may be a measurement configured to derive depth-information by measuring the optical path travelled by light.
- an “optical path length measurement” may be a measurement configured to determine a depth position of an object (illustratively, a distance at which the object is located), by measuring the length of the optical path that emitted light travels to reach the object and then back to reach the sensor (e.g., a camera, a photo diode, etc.).
- the strategy described herein may be applied to any depth measurement method based on deriving a depth value of an object from the length of the optical path travelled by the emitted/reflected light.
- the optical path length measurement may be or include a direct time-of-flight measurement.
- the optical path length measurement may be or include an indirect time-of-flight measurement.
- the optical path length measurement may be or include an interferometry measurement, e.g. a self-mixing interferometry measurement.
- aspects described herein may in general be applied or generalized to any signal processing technique sensitive to the optical path length. Further examples to which the aspects described herein may be applied may include amplitude-modulated continuous wave (AMCW)-based measurements, and/or frequency- modulated continuous wave (FMCW)-based measurements.
- a “disparity-based” measurement may be a measurement configured to derive depth-information based on a triangulation of the emitted/reflected light.
- a “disparity-based” measurement may be a measurement configured to determine a depth position of an object based on a difference in distance between corresponding image points and the center of projector/camera.
- a “disparity-based” measurement may also be referred to herein as “triangulation-based” depth measurement.
- the strategy described herein may be applied to any depth measurement based on deriving a depth value of an object from triangulation of the emitted/reflected light.
- the disparity-based depth measurement may include a depth measurement based on structured light.
- the disparity-based depth measurement may include a stereo vision measurement.
- the first depth measurement and the second depth measurement may be carried out in a field of view common to the first depth measurement and the second depth measurement.
- as an example, a first field of view of the first depth measurement (e.g., as defined by a corresponding first depth sensor, see FIG.2B) and a second field of view of the second depth measurement (e.g., as defined by a corresponding second depth sensor, see FIG.2B) may coincide with one another. This may be a simple configuration which allows a simpler processing of the results (e.g., a simpler comparison).
- the first field of view of the first depth measurement and the second field of view of the second depth measurement may have only an overlap with one another.
- the overlapping region may be the common field of view of the depth measurements.
- the first field of view and the second field of view may be shifted with respect to one another, e.g. in the horizontal direction and/or vertical direction.
- the first field of view may be greater than the second field of view and may (completely) contain the second field of view.
- the second field of view may be the common field of view of the depth measurements.
- conversely, in the case that the second field of view is greater than the first field of view and contains it, the first field of view may be the common field of view of the depth measurements.
- the first result 212 of the first depth measurement and the second result 214 of the second depth measurement may refer to the respective first field of view and second field of view.
- the common part of the fields of view may be considered, which may correspond to the entire first and second field of view (e.g., to an overall field of view of the imaging device 200), or to a portion of the first and/or second field of view, as discussed above.
- the first depth measurement and the second depth measurement may be carried out simultaneously (in other words, concurrently) with one another. This may ensure a faster processing of the information and a faster completion of the process.
- the first depth measurement and the second depth measurement may be carried out one after the other, e.g. in a short temporal sequence, for example within less than 5 seconds, for example within less than 1 second, for example within less than 100 milliseconds.
- This other configuration may allow sharing one or more components and re-utilizing them for the two measurements.
- the processor 202 may be configured to determine whether the common field of view includes at least one object having a refractive portion.
- the processor 202 may be configured to determine (e.g., identify, or calculate) one or more differences between the first result 212 and the second result 214, which differences may be indicative of the presence of one or more objects having a refractive portion in the field of view common to the first and second depth measurement.
- the first result may include first depth values, each corresponding to respective (x-y) coordinates of the common field of view (e.g., each first depth value may be indicative of a depth value at the x-y coordinates).
- the second result may include second depth values, each corresponding to respective (x-y) coordinates of the common field of view.
- the processor 202 may be configured to determine one or more differences between first depth values and the second depth values (illustratively, at the same coordinates of the common field of view).
- the processor 202 may be configured to determine whether the common field of view includes at least one object having a refractive portion based on the one or more differences. As an example, the processor 202 may be configured to determine that at least one object with a refractive portion is present in the common field of view in the case that a difference between a first depth value obtained via the first depth measurement and a second depth value obtained via the second depth measurement is in a predefined range (e.g., is greater than a predefined threshold). The predefined range may be selected such that a difference in the predefined range is indicative of a difference in the depth values caused by a refractive surface.
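- A minimal sketch of this per-coordinate check, assuming the two results have already been calibrated and remapped onto the same pixel grid (the array names and the threshold value are illustrative assumptions):

```python
import numpy as np

def detect_refractive_regions(depth_path, depth_disparity, threshold=0.01):
    """Return a boolean map that is True where the depth mismatch between
    the optical-path-length measurement and the disparity-based measurement
    exceeds the predefined threshold, i.e. where an object having a
    refractive portion may be present."""
    diff = np.abs(np.asarray(depth_path, float) - np.asarray(depth_disparity, float))
    refractive_mask = diff > threshold   # predefined range: (threshold, infinity)
    return refractive_mask

# The common field of view contains at least one refractive object if any
# coordinate exceeds the threshold:
# has_refractive_object = detect_refractive_regions(z1, z2).any()
```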
- the way the results 212, 214 are compared may be adapted depending on the specific layout.
- the world-to-image projection may be different for the two depth measurements, except when the exact same camera is used for both acquisitions, in which case the mapping is an identity and can be bypassed.
- the mapping between the results 212, 214 may be established in different ways depending on the convenience.
- the processor 202 may be configured to remap the coordinates of one acquisition to the other acquisition (either from the path-based depth to the correspondence-based depth, or the opposite).
- the processor 202 may be configured to project the depth images into the same world coordinates creating a 3D point cloud map.
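- A sketch of such a projection for a single depth map, under a simple pinhole-camera assumption (the intrinsic parameters fx, fy, cx, cy are assumed to be known from calibration; the function name is illustrative):

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Project a depth map (Z value per pixel) into 3D camera/world
    coordinates, producing an (H*W, 3) point cloud that can be compared
    with the point cloud obtained from the other depth measurement."""
    depth = np.asarray(depth, dtype=float)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```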
- a differential depth calibration may also be used to correct for residual errors and ensure that the same depth map is generated when there are no refractive objects (see also FIG.7A to FIG.7C).
- a depth map representation may be used to provide information on the common field of view in a format that is compact and simple to process.
- the first result 212 may be or include a first depth map of the common field of view
- the second result may be or include a second depth map of the common field of view.
- a “depth map” may include a plurality of data points, each corresponding to a respective horizontal coordinate, vertical coordinate, and depth value (illustratively a depth-coordinate) of the common field of view.
- the processor 202 may be configured to determine whether the common field of view includes an object having a refractive portion based on a difference between the first depth map and the second depth map. For example, the processor 202 may be configured to generate the output signal 216 based on a difference between the first depth map and the second depth map. In this scenario, the output signal 216 may be or represent a differential depth map.
- the differential depth map may include a plurality of data points, each corresponding to a respective horizontal coordinate, vertical coordinate, and depth value of the common field of view, and the depth value may represent a difference between a first depth value of the first depth map and a second depth value of the second depth map at these coordinates.
- the differential depth map may be representative, for each coordinate in the common field of view, of whether the common field of view includes at that coordinate at least one object having a refractive portion.
- a depth value of the differential depth map at that coordinate is representative of the common field of view including at least one object having a refractive portion at that coordinate in the case that the depth value is within a predefined range, e.g. in the case that the difference between the first depth value and the second depth value at that coordinate is greater than a predefined threshold.
- the differential output may be a map returning zero (or a positive match) where there are no refractive lens-like objects and the two depth methods return the same value, and returning a non-zero value (or a depth mismatch) where some lensing effect occurred.
- the processor 202 may be configured to apply a differential calibration map to either (or both) of the two acquired depths based on the calibration method outlined in FIG.7A to FIG.7C.
- the analysis of the depth maps may be understood as an algorithm that aims to establish differences between the two depth acquisitions, depending on the type of data that has been acquired.
- the processor 202 may be configured to determine (e.g., calculate) a straight difference of the two depth maps.
- more refined methods may be provided, e.g. the processor 202 may be configured to determine (e.g., derive) a 3D point cloud map and/or to carry out a mesh reconstruction prior to comparing the depth maps.
- the processor 202 may be configured to account for the resolution and precision limits of both used techniques to provide a more accurate estimation.
- the object having a refractive portion may be or include any type of object which is at least partially transparent.
- the object may be or include any type of object having a transparent surface through which light may propagate.
- the at least one object having a refractive portion may be or include a lens-like object.
- Exemplary objects that may be detected according to the strategy proposed herein may include: an eye (having a corneal lens as refractive portion), a headlight, a pair of glasses, a bottle, etc.
- the processor 202 may be configured to generate, based on the result of the comparison, an output signal 216 representative of whether the common field of view includes at least one object having a refractive portion.
- the output signal 216 may be representative of various information on the common field of view, such as a number of objects having a refractive portion present in the common field of view, a position (e.g., x-y coordinates) of objects having a refractive portion present in the common field of view, a shape of objects having a refractive portion present in the common field of view, and the like.
- the output signal 216 may be a differential depth map representative of a difference between the first depth map and the second depth map.
- the output signal 216 may be provided for further processing, e.g. at the processor 202 or at other processors or circuits external to (and separate from) the processor 202, such as a processing unit of a smartphone, a central control unit of a vehicle, and the like.
- the output signal 216 may thus provide additional information that may not be obtained in a simple manner with individual depth measurements, and may thus be used to enable or enhance a mapping of the scene.
- the processor 202 may be configured to use the output signal 216 for a depth correction of the first result of the first depth measurement and/or for a depth correction of the second result of the second depth measurement.
- the information on the presence (and location, shape, etc.) of transparent objects may allow correcting the results of the depth measurements to take such “error-inducing” objects into account.
- the processor 202 may be configured to apply any suitable data conditioning to provide the output signal, e.g. according to a desired application.
- the processor 202 may be configured to apply an application-specific algorithm to process the output signal 216.
- the processor 202 may be configured to generate a logical map to highlight the depth mismatch locations in the image.
- the processor 202 may be configured to use a mathematical or learning-based model to reconstruct the original shape and orientation.
- the processor 202 may be configured to apply Boolean masks as input to high resolution (e.g., RGB) images to highlight features, or to define contours.
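- A small sketch of such a masking step, assuming a boolean depth-mismatch mask aligned with a high-resolution RGB image (the simple gradient-based contour used here is an illustrative choice, not the patent's prescribed edge-detection method):

```python
import numpy as np

def highlight_mismatch(rgb_image, mismatch_mask, color=(255, 0, 0)):
    """Overlay the contour of the depth-mismatch mask onto an RGB image."""
    mask = np.asarray(mismatch_mask, dtype=bool).astype(int)
    # Contour pixels: locations where the mask value changes along x or y.
    grad_y = np.abs(np.diff(mask, axis=0, prepend=0))
    grad_x = np.abs(np.diff(mask, axis=1, prepend=0))
    contour = (grad_x | grad_y).astype(bool)
    out = np.asarray(rgb_image).copy()
    out[contour] = color                 # paint the contour in the given color
    return out
```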
- the processor 202 may be configured to use the generated data as an input to a depth correction algorithm, enabling correction of either of the acquired depth maps.
- other types of data may be used to enhance the processing of the results 212, 214 of the depth-measurements.
- the processor 202 or, alternatively, any other external processor, may be configured to further process the results 212, 214 including additional information taken from sensors data (e.g., other types of information originating from the depth-sensors).
- the processor 202 may be configured to measure the distortion induced to some encoded feature of the projection pattern from the intensity images acquired by one of the cameras (e.g., relative distance and shape of the pattern features) to return additional information on the refractive surface geometry, position and orientation, or to improve the measurement accuracy.
- In FIG.2A and FIG.2B, reference is made to a scenario with two depth measurement methods (and, accordingly, two depth sensors) that are differently influenced by transparent objects. This may be the most relevant use case, as it allows implementing the strategy proposed herein in a simple setup. It is however understood that, in principle, the results of more than two implementations of different types of depth measurement methods may be considered. For example, more than one optical path length measurement and/or more than one disparity-based depth measurement may be considered, providing a more accurate determination of the presence of transparent objects in the scene.
- the processor 202 may thus be configured to compare the first result 212 and the second result 214 with a third result of a third depth measurement carried out in a field of view common to the three depth measurements, and to determine whether the common field of view includes at least one object having a refractive portion based on a result of the comparison.
- the third depth measurement may be a further optical path length measurement or a further disparity-based depth measurement. The same may apply for a fourth depth measurement, etc.
- FIG.2B shows the imaging device 200 in a schematic representation, according to various aspects.
- the imaging device 200 may include (at least) a first depth sensor 206 configured to carry out the first depth measurement, and a second depth sensor 208 configured to carry out the second depth measurement.
- the imaging device 200 is illustrated as a single “spatially contained” component.
- the depth sensors 206, 208 and the processor 202 may be formed (e.g., integrated) on a single substrate (e.g., a single printed circuit board), e.g. the first depth sensor 206 and the second depth sensor 208 may be integrated onto a same substrate.
- a depth sensor 206, 208 may also be referred to herein as three-dimensional sensor, or 3D-sensor.
- a general configuration of depth sensors to carry out depth measurements according to optical length path or disparity-information is known in the art. A more detailed description will be provided in relation to FIG.4A to FIG.6B to discuss relevant configurations for the present disclosure.
- the depth sensors 206, 208 and/or the processor 202 may also be disposed on different substrates or, more in general, may be spatially separated from one another.
- the processor 202 may be located at a remote location, illustratively in the “cloud”, and may provide the processing of the results 212, 214 of the depth measurements in remote.
- the depth sensors 206, 208 may be disposed not in close proximity with one another, e.g. may be disposed at the right-hand side and left-hand side of a computer monitor, or at the right headlight and left headlight of a vehicle, or the like. Also in this “scattered” configuration, however, the depth sensors 206, 208 and the processor 202 may still be understood to be part of the imaging device 200.
- the imaging device 200 may include more than two depth sensors 206, 208, as discussed in relation to FIG.2A.
- the imaging device 200 may include a third depth sensor configured to carry out a third depth measurement (e.g., a further optical path length measurement or disparity-based measurement).
- the imaging device 200 may further include a fourth depth sensor, etc.
- It is understood that the representation in FIG.2A and FIG.2B is simplified for the purpose of illustration, and that the imaging device 200 may include additional components with respect to those shown, e.g. one or more amplifiers to amplify a signal representing the received light, one or more noise filters, and the like.
- the first depth sensor 206 may be configured to carry out the optical path length measurement and deliver a first output signal representative of a result 218 of the optical path length measurement to the processor 202.
- the second depth sensor 208 may be configured to carry out the disparity-based depth measurement and deliver a second output signal representative of a result 220 of the disparity-based depth measurement to the processor 202.
- a depth sensor 206, 208 may be configured to transmit the respective output signal to the processor in any suitable manner, e.g. via wired- or wireless-communications.
- the output signal of a depth sensor 206, 208 may be in any suitable form to allow processing by the processor 202.
- as an example, the output signal of a depth sensor 206, 208 may be an analog signal, and an analog-to-digital conversion may be carried out at the imaging device 200 (e.g., as part of the processor 202).
- the processor 202 may be configured to provide digital signal processing of the output signals of the depth sensors 206, 208.
- the processor 202 may be configured to provide analog processing of the (analog) output signals of the depth sensors 206, 208.
- a depth sensor 206, 208 may include an analog-to-digital converter to deliver (directly) a digital output signal to the processor 202.
- the first depth sensor 206 may be configured to implement any suitable depth measurement based on optical path length.
- the first depth sensor 206 may be configured as a direct time-of-flight sensor.
- the first depth sensor 206 may be configured as an indirect time-of-flight sensor.
- the first depth sensor 206 may be configured as a self-mixing interferometer.
- Other examples may include the first depth sensor 206 being configured to carry out an amplitude-modulated continuous wave (AMCW)-based measurement, and/or a frequency-modulated continuous wave (FMCW)-based measurement.
- the first depth sensor 206 may thus be configured to acquire information on the length of the optical round trip “illuminator-to-sensor camera” by using a technique that is either sensitive to the time required for a short light pulse to propagate along the full path, or to the accumulated phase of the light itself, or of some modulation of the same.
- the second depth sensor 208 may be configured to implement any suitable depth measurement based on disparity-calculations.
- the second depth sensor 208 may be configured as a structured-light depth sensor. In this configuration, the intensity projected by a structured light illuminator that reflects on the scene is imaged on a sensor through the optical layers of a sensor camera.
- the second depth sensor 208 may be configured based on an approach involving more than one camera, e.g. the second depth sensor 208 may be configured as a stereo vision sensor.
- the output of an optical path length measurement and/or of a disparity-based depth measurement may be further processed to derive depth values from the measurement.
- Each depth sensor 206, 208 may be configured to apply any signal processing and/or statistics (for example, electrical delay to distance, peak recognition, fringe counting, etc.) suitable to obtain distance information, as well as use calibration information.
- the implementation of such signal processing and/or statistics may be known in the art for the various methods.
- the output of an optical path length measurement and/or of a disparity-based depth measurement may be further corrected to account for possible distortions introduced by the emitter optics, receiver optics, cameras, illuminators, and the like. Such corrections may be carried out by the depth sensors 206, 208 prior to delivering the output to the processor 202.
- the result 218 of the optical path length measurement may (directly) be or include the first result 212 of the first depth measurement
- the result 220 of the disparity-based depth measurement may (directly) be or include the second result 214 of the second depth measurement.
- the processor 202 may be configured to carry out a correction of the result 218 of the optical path length measurement and/or of the result 220 of the disparity-based depth measurement.
- This configuration may provide a simpler setup for the depth sensors, transferring the processing load at the processor 202.
- the processor 202 may be configured to determine (e.g., derive) the first result 212 of the first depth measurement from the result 218 of the optical path length measurement, e.g. by applying a predefined (first) correction.
- the processor 202 may be configured to determine (e.g., derive) the second result 214 of the second depth measurement from the result 220 of the disparity-based depth measurement, e.g. by applying a predefined (second) correction.
- the (first) correction of the optical path length measurement may include one or more corrections.
- as an example, the correction may include deriving the target range (the vector distance from the sensor): the target range may be half of the total round-trip distance from illuminator to detector. This may be corrected in scenarios where the baseline is relevant and the target is close enough that parallax contributions may be significant.
- the predefined correction may include, additionally or alternatively, a range-to-depth correction (or geometrical distortion correction), where the Z component of the range is extracted, resulting in a depth map.
- the range-to-depth correction may thus avoid an over-estimate of the depth for depth information acquired at a large field of view.
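- As a reference for this correction (a standard pinhole-model relation, not reproduced from the original text), the depth Z may be obtained from the range R, the focal length f, and the pixel offsets (x, y) of the imaged point from the optical center as:

$$ Z = R \cdot \frac{f}{\sqrt{f^2 + x^2 + y^2}} $$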
- the (second) correction of the disparity-based depth measurement may include one or more corrections.
- images may be rectified and undistorted by means of calibration data in order to identify correspondences between the projector pattern and the pattern projected on the scene and imaged by the camera.
- a depth map may then be extracted from the projector-camera pattern disparity.
- stereo vision correspondence may be equivalently used, e.g. the pattern projected on the scene may be acquired by two cameras at a known distance from one another, and correspondence is established between such images. Methods to identify and quantify correspondence may vary depending on the specific pattern, as known in the art.
- the processor 202 may be different from the processors (e.g., processing units) of the individual components (illustratively, the individual depth sensors 206, 208).
- the processor 202 may be configured to receive raw data as an input from the component sensing units, and/or pre-processed depth data when such components embed independent processing units.
- the processor 202 may receive both raw data and depth data, e.g. in case a first depth processing is done by (dedicated) individual units, and a second processing re-using the raw data is done by the processor 202.
- the processor 202 may be configured to process data to generate depth information, when not yet provided by component embedded independent processing units, using information from the different component units.
- the processor 202 may be configured to map coordinates from the depth maps generated by one sensor input or one processing step, to the depth map generated by another sensor input or another processing step of the system. Additionally or alternatively, the processor 202 may be configured to map depth coordinates to real 3D world coordinates. The processor 202 may be configured to process depth information from two different sensing methods and generate an output that depends on the difference of the two depths generated by the two sensing methods. The processor 202 may be configured to further process the differential data to generate an application-specific output.
- FIG.3A and FIG.3B each shows a schematic flow diagram of a method 300a, 300b of detecting transparent objects with depth measurements.
- the method 300a, 300b may be an exemplary implementation of the method 210 carried out by the processor 202.
- the method 300a may include, in 310 comparing a first result (e.g., the result 212) of a first depth measurement with a second result (e.g., the result 214) of a second depth measurement.
- the first depth measurement may be carried out via an optical path length measurement
- the second depth measurement may be carried out via a disparity -based depth measurement.
- the first depth measurement and the second depth measurement may be carried out in a common field of view.
- the method 300a may further include acquiring the first result and the second result.
- the method 300a may further include carrying out an optical path length measurement to determine (e.g., derive) the first result, and carrying out a disparity-based depth measurement to determine the second result.
- the method 300a may include, for example, carrying out the optical path length measurement and the disparity-based depth measurement in parallel with one another, e.g. simultaneously with one another.
- the method 300a may include carrying out one of the optical path length measurement or the disparity-based depth measurement, and, after having carried out the one of the optical path length measurement or the disparity-based depth measurement, carrying out the other one of the optical path length measurement or the disparity-based depth measurement.
- the method 300a may further include, in 320, determining, based on a result of the comparison, whether the common field of view includes at least one object having a refractive portion.
- the method 300a may include determining whether a transparent object is present in the common field of view based on differences in the depth values obtained via the two depth measurements.
- the method 300a may further include determining one or more properties of the at least one object (if present) such as its location, its shape, its orientation, and the like.
- the method 300a may include generating a differential output from the first result and the second result.
- the first result may be or include a first depth map of the common field of view
- the second result may be or include a second depth map of the common field of view.
- the method 300a may include generating a differential depth map from the first depth map and the second depth map (e.g., by subtracting the first depth map from the second depth map, or vice versa, or via a more elaborate approach).
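A minimal sketch of such a differential output is given below, assuming that the two depth maps have already been mapped onto a common pixel grid; the disagreement threshold is an arbitrary illustrative value, not a value taught by the present disclosure.

```python
import numpy as np

def differential_depth_map(depth_opl: np.ndarray, depth_disp: np.ndarray,
                           threshold_m: float = 0.002):
    """Subtract the two depth maps and flag pixels where the two methods disagree."""
    diff = depth_disp - depth_opl
    valid = np.isfinite(diff)
    # Pixels with a disagreement above the threshold hint at a refractive portion.
    refractive_mask = valid & (np.abs(diff) > threshold_m)
    return diff, refractive_mask

# Hypothetical usage:
# diff_map, mask = differential_depth_map(depth_tof, depth_structured_light)
# object_with_refractive_portion_present = bool(mask.any())
```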
- FIG.3B shows method 300b which may be an exemplary implementation of the method 300a, e.g. including exemplary steps that may be present in the method 300a.
- the method 300b may include, in 330, acquiring an optical path length, e.g. the method 300b may include carrying out an optical path length measurement, for example based on time-of-flight or phase of the emitted light.
- the method 300b may further include, in 340, extracting depth information from the optical path length.
- the method 300b may include applying one or more corrections to the optical path length to obtain depth value(s) from the measured optical path length(s).
- the method 300b may include, in 350, acquiring structured light images, e.g. the method 300b may include carrying out a disparity-based depth measurement.
- the method 300b may further include, in 360, extracting depth information from the structured light images, e.g. using a correspondence-based algorithm.
- the method 300b may include applying one or more corrections to the structured light images to obtain depth values from the acquired structured light images.
- the acquisition of the optical path length, 330, and the acquisition of the structured light images, 350, and the corresponding derivation of depth information may be carried out in parallel with one another, or in sequence, as discussed above.
- the method 300b may further include, in 370, establishing a mapping between the depth-information obtained via the optical path length measurement and the depth-information obtained via the disparity-based depth measurement.
- the method 300b may include, in 370, establishing a mapping between depth map coordinates generated via the optical path length measurement and via the disparity-based depth measurement
- the method 300b may further include, in 380, comparing the results obtained via the optical path length measurement and via the disparity-based depth measurement. At 380, the method 300b may further include carrying out depth-calibration and correction, if desired. As a result of the comparison, the method 300b may include generating a differential output, e.g. a differential depth map.
- the method 300b may further include, in 390, carrying out further processing of the differential output, e.g. of the differential depth map or error map. This may provide generating an output (e.g., an output signal), which may be used for further applications, as discussed in relation to FIG.2A.
- the additional processing may also optionally include using some additional features of data acquired by the sensors 206 and 208, such as spatial features of the measured pattern intensity, or velocity information from phase shifts caused by the Doppler effect.
- FIG.4A and FIG.4B each shows a (first) depth sensor 400a, 400b configured to carry out an optical path length measurement.
- the depth sensor 400a, 400b may be an exemplary realization of the first depth sensor 206 described in relation to FIG.2B.
- FIG.4C and FIG.4D each shows a (second) depth sensor 450a, 450b configured to carry out a disparity-based depth measurement.
- the depth sensor 450a, 450b may be an exemplary realization of the second depth sensor 208 described in relation to FIG.2B.
- FIG.4A to FIG.4D may be simplified for the purpose of illustration, and the depth sensor 400a, 400b, 450a, 450b may include additional components with respect to those shown (e.g., a processor, a time-to-digital converter, an amplifier, a filter, and the like).
- the (first) depth sensor 400a in FIG.4A may be configured to carry out a time-of-flight measurement, e.g. the depth sensor 400a may be a direct time-of-flight sensor or an indirect time-of-flight sensor.
- the (first) depth sensor 400b in FIG.4B may be configured to carry out a self-mixing interferometry measurement, e.g. the depth sensor 400b may be a self-mixing interferometer.
- the (second) depth sensor 450a in FIG.4C may be configured to carry out a depth measurement based on structured light.
- the (second) depth sensor 450b in FIG.4D may be configured to carry out a depth measurement based on stereo vision, e.g. the depth sensor 450b may be a stereo vision sensor (e.g., a sensor configured to carry out active stereo vision measurements).
- a depth sensor 400a, 400b, 450a, 450b may include, at the emitter side, an illuminator 402, 412, 452, 462, and emitter optics 406, 416, 456, 466.
- An illuminator 402, 412, 452, 462 may be or include a light source configured to emit light, and the emitter optics 406, 416, 456, 466 may be configured to direct the emitted light in a field of view of the depth sensor 400a, 400b, 450a, 450b.
- An illuminator 402, 412, 452, 462 may be configured to emit light having a predefined wavelength, for example in the visible range (e.g., from about 380 nm to about 700 nm), infrared and/or near-infrared range (e.g., in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm, or for example at 905 nm or 1550 nm), or ultraviolet range (e.g., from about 100 nm to about 400 nm).
- an illuminator 402, 412, 452, 462 may be or may include an optoelectronic light source (e.g., a laser source).
- an illuminator 402, 412, 452, 462 may include one or more light emitting diodes.
- an illuminator 402, 412, 452, 462 may include one or more laser diodes, e.g. one or more edge emitting laser diodes or one or more vertical cavity surface emitting laser diodes.
- an illuminator 402, 412, 452, 462 may include a plurality of emitter pixels, e.g. an emitter array having a plurality of emitter pixels.
- the plurality of emitter pixels may be or may include a plurality of laser diodes.
- an illuminator 402, 412, 452, 462 may include an array of sources of coherent light.
- an illuminator 402, 412, 452, 462 may include an array of electronic devices that monitor the output intensity or junction voltage of each source of coherent light.
- an illuminator 402, 412, 452, 462 may be a projector.
- the illuminator 402 may be configured to emit individual light pulses (e.g., individual laser pulses) for a direct time-of-flight measurement, or may be configured to emit continuous modulated light, e.g. continuous light having an amplitude modulation or frequency modulation, for an indirect time-of-flight measurement.
- the illuminator 402 may be configured to emit light pulses at regular intervals.
- the illuminator 402 may be configured to emit light pulses grouped in bursts or in a more complex temporal pattern.
- the illuminator 412 may be or include a laser source (e.g., a laser diode), and the depth sensor 400b may include optics 416 configured to direct the light collected from the field of view of the depth sensor 400b into the illuminator 412.
- the illuminator 412 may be configured to emit continuous modulated light, e.g. continuous light having a frequency modulation. In this configuration, the light reflected back into the cavity induces a modulation of the laser light properties (e.g., an amplitude modulation and a frequency modulation), or of other electrical characteristics of the source (e.g., a modulation of the laser diode junction voltage).
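For illustration only, and assuming a triangular (linear-ramp) frequency modulation with excursion Δf, the number N of self-mixing fringes counted during one ramp relates to the target distance approximately as L ≈ N·c/(2·Δf); the sketch below implements this simple estimate and deliberately ignores Doppler shifts and modulation non-linearity.

```python
C = 299_792_458.0  # speed of light in vacuum [m/s]

def smi_distance_from_fringes(n_fringes: float, delta_f_hz: float) -> float:
    """Rough self-mixing distance estimate from the fringe count per frequency ramp."""
    return n_fringes * C / (2.0 * delta_f_hz)

# Hypothetical example: 10 fringes over a 1 GHz sweep -> roughly 1.5 m
# print(smi_distance_from_fringes(10, 1e9))
```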
- the illuminator 452, 462 (and/or emitter optics 456, 466 of the depth sensor 450a, 450b) may be configured to emit (e.g., project) a predefined light pattern, for example a dot pattern.
- the projected pattern may include predefined features (e.g., dots) whose displacement at the receiver side may be used to determine the depth at which the feature has been reflected.
- the illuminator 452, 462 may be configured to emit a light pattern including pattern features that are larger along the orthogonal direction than the expected orthogonal shift (e.g., vertical stripes or elliptical/elongated dots).
- the illuminator 452, 462 may be configured to emit a complex pattern that allows reconstructing the original vertical position using some features (for instance a dot pattern with a square grid superimposed).
- the illuminator 452, 462 (and/or the emitter optics 456, 466) may be configured to emit pattern elements including encoded information (such as a specific modulation, or a symbol) allowing the computer vision algorithm to reconstruct the correspondence, or more refined methods can be designed based on the application itself to establish correspondence in the presence of orthogonal shift.
- the orthogonal shift, when measured, may include additional information on the refractive objects. In various aspects, such information may be stored and used in the following steps in addition to the differential depth map.
- the illuminator 402, 412 of the first depth sensor 400a, 400b may be configured to generate pulsed signals with precise timing or, alternatively, a modulated signal.
- the illuminator 412 (e.g., the corresponding optics 416) may be configured to collect the light reflected from the scene and inject it into the illuminator (e.g., into the source(s) of coherent light).
- the illuminator 452, 462 of the second depth sensor 450a, 450b may be configured to generate some pattern at infinite distance or, alternatively, at some finite distance. The pattern may encode information to further simplify feature matching in the presence of distortion.
- a (second) depth sensor 450a, 450b configured for disparity-based measurements may include more than one illuminator 452, 462, e.g. more than one projector.
- a further (second) projector may allow generating more complex light patterns.
- the second illuminator may be configured to generate a pattern different from the first illuminator 452, 462 or alternatively, a smooth pattern similar to a homogeneous irradiance.
- the optics associated with the first illuminator 452, 462 and its array of sources may be tiled to enable the functionalities of a second illuminator within the first illuminator 452, 462.
- a depth sensor 400a, 400b, 450a, 450b may include, at the receiver side a light sensor 404, 414, 454, 464 configured to generate a sensing signal representative of the light impinging onto the sensor 404, 414, 454, 464.
- a depth sensor 400a, 450a, 450b may include receiver optics 408, 458, 468 configured to collect the light reflected from the field of view and direct the collected light onto the sensor 404, 454, 464, e.g. in case of disparity-based measurements the receiver optics 458, 468 may be configured to form an image of the projected pattern onto the sensor.
- a light sensor 404, 414, 454, 464 may be configured to be sensitive for the emitted light, e.g. may be configured to be sensitive in a predefined wavelength range, for example in the visible range (e.g., from about 380 nm to about 700 nm), infrared and/or near infrared range (e.g., in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm, or for example at 905 nm or 1550 nm), or ultraviolet range (e.g., from about 100 nm to about 400 nm).
- a light sensor 404, 414, 454, 464 may include one or more light sensing areas, for example a light sensor 404, 414, 454, 464 may include one or more photo diodes.
- a light sensor 404, 414, 454, 464 may include at least one of a PIN photo diode, an avalanche photo diode (APD), a single-photon avalanche photo diode (SPAD), or a silicon photomultiplier (SiPM).
- the light sensor 404 may include one or more single-photon avalanche photo diodes.
- the single-photon avalanche photo diodes allow generating a strong (avalanche) signal upon reception of single photons impinging on the photo diodes, thus providing a high responsivity and a fast optical response.
- the light sensor 404 may be configured to store time-resolved detection information or to detect the phase of a modulated signal.
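As a hedged illustration of how such time-resolved detection information may be turned into a range, the sketch below simply picks the peak bin of a SPAD arrival-time histogram and applies r = c·t/2; ambient subtraction, pile-up correction, and sub-bin interpolation are intentionally omitted.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def dtof_range_from_histogram(histogram: np.ndarray, bin_width_s: float) -> float:
    """Estimate the range from a direct time-of-flight arrival-time histogram."""
    peak_bin = int(np.argmax(histogram))           # bin holding the most photon events
    t_round_trip = (peak_bin + 0.5) * bin_width_s  # bin center taken as round-trip time
    return C * t_round_trip / 2.0
```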
- the light sensor 414 may be or include a photo diode that receives light from the laser cavity, rather than from the field of view of the depth sensor 400b.
- the signal of the photo diode is representative of the modulation of the laser light, and thus indicative of the distance at which an object reflecting the light may be located.
- a depth sensor 450b may include a further light sensor, e.g. a second imaging camera, with corresponding optics.
- the further light sensor may have, in some aspects, a resolution greater than that of the first imaging camera, and may return a respective intensity map.
- a depth sensor 400a, 400b, 450a, 450b may include a processor, e.g. a processing unit, configured to extract depth information from the light sensing data delivered by the light sensor 404, 414, 454, 464, e.g. by means of a suitable algorithm.
- a compact arrangement of the depth sensors of an imaging device may be provided.
- one or more components at the emitter side and/or receiver side may be shared between the depth sensors (e.g., between a first depth sensor 206, 400a, 400b and a second depth sensor 208, 450a, 450b), or among more than two depth sensors.
- This configuration may provide an efficient utilization of the system resources, and thus a space- and cost-efficient arrangement of an imaging device.
- At least one light sensor may be shared between/among the depth sensors, e.g. between the first depth sensor and the second depth sensor.
- at least one illuminator may be shared between/among the depth sensors, e.g. between the first depth sensor and the second depth sensor. Exemplary configurations of this shared arrangement are illustrated in FIG.5A to FIG.5C.
- the light sensor used for the optical path measurement (e.g., a SPAD camera) may also be used for the intensity acquisition (e.g., by summing up the events), or a dedicated high-resolution camera may be used instead.
- FIG.5A, FIG.5B, and FIG.5C each shows a respective imaging device 500a, 500b, 500c in which a first depth sensor and a second depth sensor share at least one common component.
- the imaging device 500a, 500b, 500c may be an exemplary realization of the imaging device 200, e.g. may represent an exemplary configuration of the first and second depth sensors 206, 208, 400a, 400b, 450a, 450b.
- the imaging device 500a, 500b may include a single illuminator 502 (with corresponding emitter optics) shared between a time-of-flight-based measurement and a structured-light-based measurement.
- the imaging device 500a, 500b may include a first light sensor 504 for the time-of-flight measurement, illustratively a time-of-flight camera module.
- the imaging device 500a, 500b may include a second light sensor 506 for the structured-light imaging, illustratively an imaging camera module.
- the common illuminator 502 may be configured to emit light both for the time-of-flight measurement (e.g., single light pulses, or continuous modulated light) and for the structured-light imaging (e.g., a predefined light pattern, such as a dot pattern).
- the common illuminator 502 and the light sensors 504, 506 may be disposed aligned along a same direction (as shown in FIG.5A), for example aligned along the horizontal direction.
- the illuminator 502 and the light sensors 504, 506 may be disposed at an angle, e.g. with an orthogonal arrangement, as shown in FIG.5B.
- the arrangement of the illuminator 502 and the light sensors 504, 506 may be selected according to a desired configuration of the imaging device, e.g. to take into account fabrication constraints or application constraints.
- a configuration with two light sensors 504, 506, e.g. with two imaging cameras, may in general be used to implement stereo vision 3D sensing methods.
- this arrangement may provide a first baseline (Baseline ToF) between the illuminator 502 and the light sensor 504 used for the time-of-flight measurement, and a second baseline (Baseline SL) between the illuminator 502 and the light sensor 506 used for the structured-light imaging.
- the baselines may be adapted according to the desired configuration of the imaging device 500a, 500b, for example the baselines may be equal to one another (illustratively, may have the same length), or may be different from one another.
- the imaging device 500c may include a single illuminator 512 (with corresponding emitter optics) shared between a self-mixing interferometric measurement and a structured-light-based measurement.
- the imaging device 500c may include a (single) light sensor 514 for the structured-light imaging, illustratively an imaging camera module.
- the self-mixing interferometric detection may be provided by the illuminator 512 itself, by the reflected light being injected therein.
- the optics may be thus configured to direct the light collected from the field of view into the illuminator 512 during the self-mixing interferometric measurement, and may be configured to direct the light collected from the field of view onto the light sensor 514 during the structured-light imaging measurement.
- the light sensor 514 may also be configured to perform a time-of-flight measurement (e.g., including timing circuitry), in addition to the structured light measurement.
- the illuminator 512 may have a configuration different from a self-mixing interferometer.
- FIG.6A shows an imaging device 600a including a self-mixing interferometry based sensor
- FIG.6B shows an imaging device 600b including a time-of-flight-based sensor, in a schematic representation, according to various aspects.
- FIG.6A and FIG.6B show an exemplary configuration of the imaging device 200, 500a, 500b, 500c and corresponding depth sensors.
- an object 650a, 650b having a refractive portion may be located at a distance Ztarget, 652a, 652b from the imaging device 600a, 600b.
- the imaging device 600a in FIG.6A may include a camera optical layer 602a including a stack of optical components (e.g., camera, IR filters, pupils etc.) and configured to focus on the sensor 604a an image of the regular pattern projected from the illuminator module 606a (e.g., from a VCSEL source 608a) onto the target 650a.
- the illuminator to camera distance defines a baseline value for structured light depth extraction.
- the imaging device 600a may further include an illuminator optical layer 610a.
- the illuminator optical layer 610a may be or include a stack of optical components that defines, for each VCSEL, an image point on or in close proximity of the target surface (e.g., the illuminator optical layer 610a may be or include an array of micro-lenses, e.g. an MLA, on top of the VCSELs), or at infinity.
- the illuminator optical layer 610a may be configured to define, for each VCSEL, a propagation direction (e.g., the illuminator optical layer 610a may be or include a prism array tilting each ray).
- the illuminator optical layer 610a may be tiled to allow part of the VCSELs to create a different type of illumination, including far field patterns and/or flood illumination.
- the combination of optical components of the illuminator optical layer 610a may be configured to define a planar or curved surface on which the VCSEL emitting facets are imaged. The light scattered from such points is imaged back into the VCSEL facet.
- the VCSEL source 608a may include a frequency modulation, either direct or obtained by means of other parameters (e.g., by means of a current modulation).
- the illuminator module 606a may additionally include an array of electronic devices that monitor the output intensity or junction voltage of each VCSEL.
- the imaging device 600b in FIG.6B may include a camera optical layer 602b including a stack of optical components (e.g., camera, IR filters, pupils etc.) configured to focus on the sensor 604b an image of the regular pattern projected from the illuminator module 606b onto the target 650b.
- the camera sensor 604b may include light-sensitive pixels and underlying combining electronics, and may be configured to enable time-resolved event detection and time binning statistics, and/or discrimination of the phase of a modulation (e.g., an amplitude modulation) of the light emitted by the illuminator 606b.
- the camera sensor 604b may further be configured to enable the acquisition of the irradiance of the regular pattern image that is projected onto the time-of-flight sensor, and that can be used for disparity-based depth evaluation.
- the illuminator to camera distance defines a baseline value for structured light depth extraction.
- the imaging device 600b may further include an illuminator optical layer 610b.
- the illuminator optical layer 610b may be or include a stack of optical components that are configured and positioned with respect to the VCSEL source 608b to project a structured light pattern onto the scene.
- the VCSEL source 608b may include a driver to generate time-resolved pulses and for time synchronization with the camera.
- the driver may be configured to generate a time-modulation of a quantity such as the amplitude of the laser signal.
- the imaging device 600b may further include a second illuminator in addition to the first structured light illuminator.
- the second illuminator may be configured to emit a flood pattern.
- the second illuminator may include a driver with the same time and modulation properties of the driver of the first illuminator.
- a calibration of the depth sensors of an imaging device may be provided.
- the calibration may ensure an accurate mapping of the results obtained with the different depth measurement methods, to allow for a more accurate identification of the different errors induced by refractive portions of objects in the field of view.
- the first depth sensor of an imaging device (e.g., the first depth sensor 206) may be calibrated with respect to the second depth sensor of the imaging device (e.g., the second depth sensor 208).
- FIG.7A shows a calibration device 700 in a schematic representation, according to various aspects.
- the calibration device 700 may include a processor 702 and storage 704 (e.g., one or more memories) coupled to the processor 702.
- the storage 704 may be configured to store instructions (e.g., software instructions) executed by the processor 702.
- the instructions may cause the processor 702 to perform a method 710 of calibrating depth sensors, described in further detail below. Aspects described with respect to a configuration of the processor 702 may also apply to the method 710, and vice versa.
- the calibration device 700 may be a dedicated device for imaging applications.
- the calibration device 700 may be part of an imaging device (e.g., of the imaging device 200).
- an imaging device may be configured to carry out the calibration of its depth sensors.
- the processor 702 may be, in various aspects, a processor of an imaging device, e.g. the processor 202 of the imaging device 200 may additionally be configured to carry out the calibration described in the following in relation to the processor 702 (for the depth sensors 206, 208).
- the processor 702 may be configured to control a calibration of a first depth sensor 706 with respect to a second depth sensor 708, e.g. a sensor-to-sensor calibration.
- the first depth sensor 706 may be configured as the first depth sensor 206 described in FIG.2B, e.g. the first depth sensor 706 may be configured to carry out an optical path length measurement.
- the second depth sensor 708 may be configured as the second depth sensor 208 described in FIG.2B, e.g. the second depth sensor 708 may be configured to carry out a disparity-based depth measurement.
- the processor 702 may be configured to derive calibration information based on the output of the depth sensors 706, 708 in a known scenario.
- the processor 702 may be configured to control the depth sensors 706, 708 to carry out the respective depth-measurement in a (common) field of view having known properties, e.g. a field of view including objects with predefined (e.g., known) properties.
- the processor 702 may be configured to control the first depth sensor 706 to carry out the optical path length measurement in a predefined field of view, e.g. in a field of view including one or more predefined objects.
- the processor 702 may be configured to control the second depth sensor 708 to carry out the disparity-based depth measurement in the predefined field of view (illustratively, in the field of view including the one or more predefined objects).
- the one or more predefined objects may illustratively be or include one or more objects having predefined properties.
- the one or more predefined objects may have a predefined shape, a predefined orientation, and/or predefined location within the field of view.
- the predefined properties may be known to the processor 702 (e.g., may be stored in the storage 704).
- the one or more predefined objects may be free of refractive portions.
- the one or more predefined objects may be completely non-transparent.
- the one or more predefined objects may have refractive portions disposed facing away from the depth sensors 706, 708, e.g. disposed in such a manner that the refractive portions are not illuminated by the light emitted by the depth sensors 706, 708.
- the processor 702 may be configured to calibrate the depth sensors 706, 708 with respect to one another based on respective results 712, 714 of the corresponding depth measurements, e.g. based on respective depth maps generated by the depth sensors 706, 708.
- the processor 702 may be configured to generate calibration data 716 based on the results 712, 714 of the depth measurements.
- the calibration data 716 may include or represent one or more calibration parameters.
- the one or more calibration parameters may include or define adjustment values for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
- the one or more calibration parameters may represent how to adjust the first result 712 of the first depth measurement to match the second result 714 of the second depth measurement, or vice versa.
- the one or more calibration parameters may represent adjustment values to adjust the first result 712 so as to obtain the same depth values as for the second result 714, or vice versa.
- the processor 702 may be configured to store the calibration data 716, e.g. in the storage 704 (e.g., in the storage 204 of the imaging device 200).
- the calibration data may be or include a calibration map for matching a first depth map obtained via the optical path length measurement to a second depth map obtained via the disparity-based depth measurement.
- the calibration map may include a plurality of adjustment values (also referred to as calibration values) at respective (x-y) coordinates of the predefined field of view.
- the calibration map may thus represent, for each coordinate, a calibration (illustratively, a correction) to be applied to the first result 712 for matching the second result 714, or vice versa.
- the calibration map may be representative, for each coordinate of the field of view, of a calibration parameter for modifying a depth value of the first depth measurement at that coordinate and/or for modifying a depth value of the second depth measurement at that coordinate.
- the results of a depth measurement carried out by a depth sensor may be corrected using the calibration data for the depth sensors prior to carrying out the comparison and related analysis to identify transparent objects.
- the processor of an imaging device may be configured to modify the first result of a first depth measurement and/or the second result of a second depth measurement based on calibration data representative of a calibration of the first depth measurement with respect to the second depth measurement. This may apply for example to the processor 202 of the imaging device 200, configured to calibrate the first result 212 with respect to the second result 214, or vice versa, prior to comparing the results 212, 214 with one another.
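A minimal sketch of this correction step is shown below, assuming the simplest case of an additive per-pixel calibration map and two depth maps already expressed on the same pixel grid; the additive model is an assumption made for illustration only.

```python
import numpy as np

def calibrated_difference(depth_opl: np.ndarray, depth_disp: np.ndarray,
                          calibration_map: np.ndarray) -> np.ndarray:
    """Apply a per-pixel additive calibration to the optical-path-length depth map
    and return its difference with respect to the disparity-based depth map."""
    return depth_disp - (depth_opl + calibration_map)
```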
- FIG.7B and FIG.7C each shows a schematic flow diagram 700b, 700c of a calibration method, e.g. of a method for calibrating depth sensors.
- the method 700b, 700c may be an exemplary implementation of the method 710 carried out by the processor 702.
- the method 700b may include, in 710, carrying out a first depth measurement via an optical path length measurement in a predefined field of view, e.g. in a field of view including one or more predefined objects.
- the method 700b may include, in 720, carrying out a second depth measurement via a disparity-based depth measurement in the predefined field of view, e.g. in the field of view including the one or more predefined objects.
- the optical path length measurement and the disparity-based depth measurement may be carried out in parallel (e.g., simultaneously) with one another, or in a sequence.
- the method 700b may further include, in 730, generating calibration data based on the results of the first depth measurement and second depth measurement.
- the method 700b may include generating calibration data representative of one or more calibration parameters for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
- FIG.7C shows method 700c which may be an exemplary implementation of the method 700b, e.g. including exemplary steps that may be present in the method 700b.
- a calibration method 700b, 700c may be performed in the factory and/or at runtime, at regular intervals or when some other condition triggers it.
- the calibration method 700c may include blocks analogous to an ordinary acquisition, e.g. analogous to the method 300a, 300b, with the difference that the depth-measurement is carried out in a predefined field of view. For example, two depth maps may be acquired on a scene with no refractive objects and optimally with full coverage of the field of view. It may be expected that the differential depth acquired on such scenes would be zero on a fully calibrated system.
- any differential depth that is still observed may be stored as a differential calibration map, and used to correct subsequent acquisitions, as mentioned above.
- the differential output may be further processed to produce a more complex calibration.
- the differential output may be used as an input to a function depending on other system parameters (such as the sensor operating temperature), or using the measured depth data itself to compute the correction for each point of the sensor.
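For illustration, and assuming the simplest purely additive, temperature-independent model, the residual differential depth observed on the refractive-free reference scene may be stored directly as the calibration map, as in the sketch below.

```python
import numpy as np

def build_calibration_map(depth_opl_ref: np.ndarray,
                          depth_disp_ref: np.ndarray) -> np.ndarray:
    """Differential calibration map from a reference scene without refractive objects.

    On a fully calibrated system this map would be (close to) zero everywhere;
    any residual is stored and subtracted from subsequent differential outputs.
    """
    return depth_disp_ref - depth_opl_ref
```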
- the method 700c may include, in 735, acquiring an optical path length on a predefined field of view, e.g. the method 700c may include carrying out an optical path length measurement, for example based on time-of-flight or phase of the emitted light.
- the method 700c may include acquiring an optical path length on a scene with no refractive objects.
- the method 700c may further include, in 745, extracting depth information from the optical path length on the predefined field of view, e.g. on the scene with no refractive objects.
- the method 700c may include applying one or more corrections to the optical path length to obtain depth value(s) from the measured optical path length(s).
- the method 700c may further include, in 775, establishing a matching between the results generated via the two methods, e.g. the method 700c may include establishing a mapping between depth map coordinates generated via the two methods.
- the method 700c may further include, in 785, analyzing the results generated via the two methods to generate calibration data, e.g. the method 700c may include comparing the two depth acquisitions and generate a differential output.
- the method 700c may further include, in 795, generating calibration data, e.g. generating a calibration map, for the depth sensors.
- FIG.8A illustrates a modeled object 800 having a refractive portion 802.
- the modeled object 800 may be an eye having an eyeball surface 804, a corneal lens 802 defining a refractive surface (e.g., with refractive index n of about 1.336), and an iris surface 806.
- the model thus considers a two-dimensional geometrical ray tracing of an eyeball sphere with a corneal lens surface, and allows evaluating the distortion of the iris plane 806 caused by the corneal lens 802 when using a structured light approach.
- FIG.8A further shows the rays 808 propagating from the projector 810, and the rays 812 propagating back into the camera 814 (e.g., a pinhole camera).
- the example shows the case of an eyeball whose closest edge to the camera is about 25 mm.
- the eyeball is rotated by 60 degrees with respect to the optical axis. Depth may be estimated from the expected disparity on the camera as well as from the total optical path for different gaze angle rotations.
- FIG.8B and FIG.8C show plots 820b, 820c illustrating the results of the simulation.
- the plot 820b in FIG.8B shows the simulated results of disparity-based depth measurement, half optical path length (ToF) measurement, and depth-to-range corrected path length.
- the correction was applied using an analogous modeled object without corneal lens, and deriving a correction factor for each point between the ToF and the disparity-based depth. Such correction factor was then applied to the modeled object with corneal lens.
- the plot 820b shows that outside of the corneal region 802 the corrected path-length is consistent with the structured-light approach (the lines are perfectly overlapping), while in the corneal region 802 the two methods return different results.
- the plot 820c in FIG.8C shows the difference in depth information returned from each of the two methods, for an eyeball with corneal lens or an eyeball without corneal lens.
- the numerical difference between the two is zero at any position where there is no lens, while a non-zero value highlights the position of the corneal lens and contains some information on the shape and geometrical layout.
- Such output may be further post-processed by additional software to extract meaningful information on the object or perform a selection, or it may be used as a selection mask to apply some correction to the same depth maps, or to configure device operation for subsequent acquisitions.
- a computer program may be provided, including instructions which, when the program is executed by a computer, cause the computer to carry out any one of the methods described herein, e.g. any one of the methods 210, 300a, 300b, 710, 700b, 700c.
- processor as used herein may be understood as any kind of technological entity that allows handling of data. The data may be handled according to one or more specific functions that the processor may execute. Further, a processor as used herein may be understood as any kind of circuit, e.g., any kind of analog or digital circuit. A processor may thus be or include an analog circuit, digital circuit, mixed-signal circuit, logic circuit (e.g., a hard-wired logic circuit or a programmable logic circuit), microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), integrated circuit, Application Specific Integrated Circuit (ASIC), etc., or any combination thereof. It is understood that any two (or more) of the processors detailed herein may be realized as a single entity with equivalent functionality or the like, and conversely that any single processor detailed herein may be realized as two (or more) separate entities with equivalent functionality or the like.
- the phrase “at least one” and “one or more” may be understood to include a numerical quantity greater than or equal to one (e.g., one, two, three, four, etc.).
- the phrase “at least one of” with regard to a group of elements may be used herein to mean at least one element from the group consisting of the elements.
- the phrase “at least one of” with regard to a group of elements may be used herein to mean a selection of: one of the listed elements, a plurality of one of the listed elements, a plurality of individual listed elements, or a plurality of a multiple of individual listed elements.
Abstract
The present disclosure relates to an imaging device (200) including: a processor (202) configured to: compare a first result (212) of a first depth measurement with a second result (214) of a second depth measurement, wherein the first depth measurement is carried out via an optical path length measurement, wherein the second depth measurement is carried out via a disparity-based depth measurement, wherein the first depth measurement and the second depth measurement are carried out in a field of view common to the first depth measurement and the second depth measurement; and generate, based on a result of the comparison, an output signal (216) representative of whether the common field of view comprises at least one object having a refractive portion.
Description
DEVICE AND METHOD TO DETECT REFRACTIVE OBJECTS
Technical Field
[0001] The present disclosure relates generally to an imaging device adapted to carry out a differential measurement to detect the presence of transparent objects in a scene, and methods thereof (e.g., a method of detecting the presence of transparent objects in a scene).
Background
[0002] In general, devices capable of capturing three-dimensional (3D) information within a scene are of great importance for a variety of application scenarios, both in industrial as well as in home settings. Application examples of 3D-sensors may include facial recognition and authentication in modern smartphones, factory automation for Industry 5.0, systems for electronic payments, augmented reality (AR), virtual reality (VR), internet-of-things (IoT) environments, and the like. Various technologies have been developed to gather three-dimensional information of a scene, for example based on time-of-flight of emitted light, based on structured light patterns, based on stereo vision, etc. Improvements in 3D-sensors may thus be of particular relevance for the further advancement of several technologies.
Brief Description of the Drawings
[0003] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various aspects of the invention are described with reference to the following drawings, in which:
FIG.1A shows depth sensing in a scenario without a refractive surface in a schematic representation, according to various aspects;
FIG.1B shows depth sensing in a scenario with a refractive surface in a schematic representation, according to various aspects;
FIG.2A shows an imaging device in a schematic representation, according to various aspects;
FIG.2B shows an imaging device in a schematic representation, according to various aspects;
FIG.3A and FIG.3B each shows a schematic flow diagram of a method of detecting transparent objects with depth measurements;
FIG.4A and FIG.4B each shows a depth sensor configured to carry out an optical path length measurement, in a schematic representation, according to various aspects;
FIG.4C and FIG.4D each shows a depth sensor configured to carry out a disparity-based depth measurement, in a schematic representation, according to various aspects;
FIG.5A, FIG.5B, and FIG.5C each shows a respective imaging device in which a first depth sensor and a second depth sensor share at least one common component in a schematic representation, according to various aspects;
FIG.6A and FIG.6B each shows a respective imaging device in which a first depth sensor and a second depth sensor share at least one common component in a schematic representation, according to various aspects;
FIG.7A shows a calibration device in a schematic representation, according to various aspects;
FIG.7B and FIG.7C each shows a schematic flow diagram of a method of calibrating depth sensors, according to various aspects; and
FIG.8A, FIG.8B, and FIG.8C illustrate the results of a simulation showing the principle of the differential measurement to detect objects having at least a refractive portion.
Description
[0004] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and aspects in which the invention may be practiced. These aspects are described in sufficient detail to enable those skilled in the art to practice the invention. Other aspects may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the invention. The various aspects are not necessarily mutually exclusive, as some aspects may be combined with one or more other aspects to form new aspects.
[0005] Various strategies exist to implement active 3D-sensing, such as via structured light, active stereo vision, or time-of-flight systems. In general, each of these techniques allows generating or reconstructing three-dimensional information about a scene, e.g., as a three-dimensional image, a depth map, or a three-dimensional point cloud. For example, 3D-sensing allows determining information about objects present in the scene, such as their position in the three-dimensional space, their shape, their orientation, and the like. Exemplary applications of active 3D-sensing include their use in automotive, e.g., to assist autonomous driving, and in portable devices (e.g., smartphones, tablets, and the like) to implement various functionalities such as face or object recognition, autofocusing, gaming activities, etc.
[0006] An issue common to the various strategies for implementing three-dimensional sensing is the detection of transparent objects, which cause a variation in the behavior of the light that may lead to an inaccurate detection (see FIG.1B). Transparent objects are, in general, intrinsically difficult for standard 3D sensors, which struggle to produce accurate depth estimates for them. Some state-of-the-art approaches tackle the issue with machine learning, while others exploit edge detection in RGB images. These techniques are, however, resource-intensive and require extensive post-processing of the acquired data.
[0007] The present disclosure may be based on the realization that the distortion introduced by a transparent object, usually considered a detrimental effect, may be advantageously exploited to implement a simple, yet accurate detection strategy. Illustratively, the present disclosure may be based on the realization that a transparent object may introduce different types of distortions for different detection methods. A transparent object may thus influence in a different manner the results of different detection methods, in particular of detection based on measuring the optical path length and detection based on disparity-calculations. The strategy described herein may thus be based on analyzing the differences in the detection results of different detection methods to determine the presence (and accordingly other properties, such as position, shape, etc.) of transparent objects in the scene.
[0008] The strategy described herein may be based on exploiting the different inaccuracy introduced by transparent objects on two different sensing technologies (e.g., disparity-map based and optical-path length based), to highlight them by means of a differential depth measurement in a sensor integrating both technologies. This approach enables addressing such objects with an embedded solution, which may thus provide a compact and robust device with applications both in industry (e.g., robotic manipulation of objects) and in consumer market (e.g., for eye-tracking).
[0009] According to various aspects, an imaging device may include a processor configured to: compare a first result of a first depth measurement with a second result of a second depth measurement, wherein the first depth measurement is carried out via an optical path length measurement, wherein the second depth measurement is carried out via a disparity-based depth measurement, wherein the first depth measurement and the second depth measurement are carried out in a field of view common to the first depth measurement and the second depth measurement; and determine, based on a result of the comparison, whether the common field of view includes at least one object having a refractive portion.
[0010] According to various aspects, a method may include: comparing a first result of a first depth measurement with a second result of a second depth measurement, wherein the first depth measurement is carried out via an optical path length measurement, wherein the second depth measurement is carried out via a disparity-based depth measurement, wherein the first depth measurement and the second depth measurement are carried out in a field of view common to the first depth measurement and the second depth measurement; and determining, based on a result of the comparison, whether the common field of view includes at least one object having a refractive portion.
[0011] In the absence of transparent objects, the first depth measurement and the second depth measurement may provide the same result (e.g., after calibration). The presence of a transparent object may instead introduce an error in the measurements that is different for the first depth measurement and the second depth measurement. Determining in which region(s) of the field of view the two depth measurements provide different results may thus allow determining that in such region(s) at least one transparent object may be present. Without limitation, the approach described herein may work optimally in the specific case of a bulk, filled object (such as a lens or a water-filled container) in close proximity to a background Lambertian target.
[0012] According to various aspects, the first depth measurement and the second depth measurement may generate, as a result, a respective depth map of the common field of view. The comparison of the first result with the second result may provide a differential depth map representing, for each coordinate of the common field of view, a difference between the depth values of the depth maps. The differential depth map may thus provide a compact and easy-to-process representation of the comparison of the depth measurements. In general, any comparison between two quantities that may be expressed by the numerical difference between the two quantities may also be expressed by a different quantity, such as the ratio. It is therefore understood that reference herein to a result “representing a difference” or references to a “differential quantity” may include different types of representation, e.g. a direct numerical
difference, a ratio, or other ways of representing a result of a comparison between two quantities. Illustratively, there may be various ways of representing a result that defines univocally an equality condition (if present), and potentially allows quantitative comparisons. The specific representation may be adapted according to the desired processing or for any other reason related to the specific application. As an example, the representation more convenient for numerical processing may be selected.
[0013] According to various aspects, the imaging device may include a first depth sensor configured to carry out the optical path length measurement, and a second depth sensor configured to carry out the disparity-based depth measurement. In an exemplary configuration, the first depth sensor and the second depth sensor may share one or more components, thus allowing for a space- and cost-efficient arrangement of the imaging device. As an example, which may provide a particularly compact arrangement, the first depth sensor may be a self-mixing interferometer including a light source (e.g., a projector) into which light from the field of view is reflected to cause a modulation of the emitted light (in amplitude and frequency). The light source of the self-mixing interferometer may additionally be used as a light source for the disparity-based depth measurement, thus providing an efficient utilization of the device components.
[0014] According to various aspects, the first depth sensor and the second depth sensor may be calibrated with respect to one another. The calibration may be carried out upon deployment of the imaging device (e.g., after fabrication, for example at the factory) and/or may be repeated “in the field”, e.g. at regular time intervals or in correspondence with predefined events. As an example, considering the use of the imaging device for automotive, calibration may be repeated when the vehicle undergoes maintenance (e.g., in a garage), or before a trip. The sensor-to-sensor calibration allows gathering an understanding of how the results should match, so as to enable identifying differences in the result when a transparent object is present in the scene.
[0015] According to various aspects, an imaging device may include: a first depth sensor configured to carry out an optical path length measurement; a second depth sensor configured to carry out a disparity-based depth measurement; and a processor configured to: control the first depth sensor to carry out the optical path length measurement in a field of view including one or more predefined objects; control the second depth sensor to carry out the disparity-based depth measurement in the field of view including the one or more predefined objects; and generate calibration data representative of one or more calibration parameters for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
[0016] According to various aspects, a calibration method may include: carrying out a first depth measurement via an optical path length measurement in a field of view including one or more predefined objects; carrying out a second depth measurement via a disparity-based depth measurement in the field of view including the one or more predefined objects; and generating calibration data representative of one or more calibration parameters for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
[0017] For example, the one or more predefined objects may be disposed at known distances from the first and second depth sensors so to allow calibrating for any inaccuracy that may be introduced by the emitter optics and/or receiver optics of the sensors. As another example, the one or more predefined objects may not include any transparent object, so as to allow calibrating the sensors in an “error free” scenario in which the two sensors should ideally provide the same depth results.
[0018] FIG.1A and FIG.1B illustrate depth sensing in a scenario 100a without a refractive surface (FIG.1A), and in a scenario 100b with a refractive surface (FIG.1B), in a schematic representation, according to various aspects. In general, the principles of disparity-based
methods and optical path length methods are well known in the art. A brief description is provided herein to introduce aspects relevant for the present disclosure.
[0019] In general, assuming a rectified system, disparity-based methods, such as structured light or stereo vision, identify depth by triangulation of illuminator rays with camera rays or rays from two different cameras, respectively, using the pinhole camera model. In FIG.1A, the position of a point on the illuminator (e.g., an emitter pixel emitting a light ray) with respect to the illuminator center is denoted as di and with the numeral reference 102. Illustratively, di may be intended as a projection point representative of the propagation angle from the illuminator to the point P. In general, di may be representative of the ray angle from the illuminator to P, though the specific factor relating the angle of the ray to di would be given, for example, by the camera properties.
[0020] The position of a point on the camera (e.g., a receiver pixel receiving the emitted light ray) with respect to the camera center is denoted as de and with the numeral reference 104. The position of an object (at a distance Zw, 106) on which the emitted light is reflected is denoted as P and with the numeral reference 108. The baseline, illustratively the (center-to-center) distance between the illuminator and the camera, is denoted as BL and with the numeral reference 110. The focal length f is denoted with the numeral reference 112.
[0021] In this configuration, and assuming with no loss of generality that the effective focal lengths defining the projection points di and de match, a depth value for the object may be derived according to equation (1) as,
(1) Zw = f · BL / (di + de)
[0022] In structured light, the quantity di actually used for the depth triangulation is typically acquired by the imaging camera, so that the effective focal length appearing in equation (1) is the one of the camera, regardless of the illuminator angular field of view. In general, any rectification of raw data based on camera calibration (especially in stereo vision) prior to depth
acquisition, and calibrated projector view image (far field acquisition) in structured light methods may be considered.
[0023] An optical path based method may instead be based on a readout of the signal phase or time-of-flight information to extract the round-trip optical path to the object P to get the object range (vector distance). Illustratively, the round-trip optical path may be S1 + S2, where S1 is the optical path towards the object P, and S2 is the optical path back towards the sensor. For instance, for time-of-flight the distance r to the object may be derived according to equation (2) as,
(2) r = c · t / 2, where c is the speed of light in a given medium (e.g., in air), and t is the round-trip time of the light emitted and then reflected back by the object into the sensor.
[0024] Considering an optical path measurement based on the signal phase, the signal phase delay for a given wave vector k carrying the modulation may be expressed via equation (3) as,
(3) Δφ = k · (S1 + S2), and may be used to extract information on the range of the point P, according to calculations known in the art.
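As a hedged numerical illustration, for an amplitude modulation at frequency f_mod the wave vector carrying the modulation is k = 2π·f_mod/c, so the range follows from the measured phase delay as r = c·Δφ/(4π·f_mod); the sketch below assumes a single modulation frequency and operation within the unambiguous range c/(2·f_mod).

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def range_from_phase(delta_phi_rad: float, f_mod_hz: float) -> float:
    """Range from the phase delay of an amplitude-modulated signal (indirect ToF)."""
    return C * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

# Hypothetical example: a phase delay of pi/2 at 20 MHz corresponds to about 1.87 m.
# print(range_from_phase(math.pi / 2, 20e6))
```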
[0025] After applying range to depth correction, the depth Zw can be extracted from the optical path, and leads in principle to the same depth as the disparity-based method.
[0026] FIG. 1B illustrates a scenario in which a lens-like transparent object is present and embeds the same target point P, 108, shown in FIG.1A. Illustratively, the lens-like transparent object may have a refractive surface 114, which causes a distortion in the behavior of the light, both in the path towards the target point P, and in the path back towards the sensor.
[0027] In this scenario 100b, the illuminator ray path, as well as the location of the image of P on the camera sensor, will be distorted in a way that is dependent on the refractive surface geometry and the refractive index value, according to Snell’s law at the interface. This will impact the disparity value, and correspondingly the depth value. Illustratively, a variation in the
position of the point on the illuminator (denoted as di’, 102b) and of the point on the camera (denoted as de’, 104b) may be observed. In some instances, the position di of a specific pattern feature (e.g., a dot center) may be stored in the device memory and not be acquired during operation. In such cases, the presence of a refractive surface would result in a variation of only the position of the point projection on the camera (de’), still impacting the disparity value.
[0028] The optical path length will also be impacted by propagation in the medium. The speed of light and the wave vector in the medium are altered by a factor equal to the refractive index, according to the following equations (4) and (5),
(4) v' = c / n
(5) k' = nk
[0029] When electromagnetic radiation propagates in a transparent material with an index of refraction > 1, the speed of light in the material is reduced by that factor, causing a delay in the propagation and an over-estimate of the geometrical path length. Additionally, the wave vector is increased by the same factor, increasing the phase accumulation for phase-sensitive methods compared to propagation in air.
[0030] The effective total optical path length, A, may thus be given by the optical path outside the transparent object, and the optical path inside the transparent object, according to equation (6) as,
(6) A = S1 + S2 + n (S1' + S2'), where S1' and S2' are the portions of the forward and return paths that lie inside the transparent object.
[0031] The propagation in the transparent object thus causes a delay in the arrival time of the pulse in the case of direct time-of-flight, or a corresponding increase of the signal phase delay. Generally speaking, the refractive medium as well as the interface geometry will introduce an error in the depth estimation that is different for the two techniques (except for specific geometries).
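As a minimal numerical sketch of equation (6), the portion of the path travelled inside the refractive object contributes n times its geometric length to the effective optical path, so that a path-length method over-estimates the geometric round trip by (n - 1) times that portion; the refractive index and path length used below are arbitrary example values.

```python
def optical_path_excess(n: float, geometric_path_in_object_m: float) -> float:
    """Equation (6): the segments S1' + S2' inside the object contribute
    n * (S1' + S2') to the effective optical path A, i.e. an excess of
    (n - 1) * (S1' + S2') over the purely geometric path length."""
    return (n - 1.0) * geometric_path_in_object_m


# Arbitrary example: a 4 mm double pass through a medium with n = 1.4
# inflates the measured optical path by about 1.6 mm.
excess_m = optical_path_excess(n=1.4, geometric_path_in_object_m=4e-3)
```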
[0032] By way of illustration, an object including at least one refractive surface 114 (illuminated by the sensors) may introduce an error for the optical path length measurement due
to the different optical path when light propagates in the object (illustratively, when light enters the transparent surface and is reflected back passing again through the transparent surface). The object including at least one refractive surface 114 may also introduce an error for the disparity-based measurement due to a tilting of the light rays with respect to the scenario in FIG.1A.
[0033] The present disclosure may be based on the realization that such errors may be exploited, rather than being eliminated with post-processing techniques, to gather information about such usually problematic objects, as discussed in further detail below.
[0034] FIG.2A shows an imaging device 200 in a schematic representation, according to various aspects. The imaging device 200 may be configured according to the differential approach described herein. The imaging device 200 may include a processor 202 and storage 204 (e.g., one or more memories) coupled to the processor 202. The storage 204 may be configured to store instructions (e.g., software instructions) executed by the processor 202. The instructions may cause the processor 202 to perform a method 210 of detecting the presence of transparent objects in a scene, described in further detail below. Aspects described with respect to a configuration of the processor 202 may also apply to the method 210, and vice versa. An “imaging device” may also be referred to herein as “detection device”, or “3D-sensor”.
[0035] In general, the imaging device 200 may be implemented for any suitable three-dimensional sensing application. In an exemplary configuration, which may represent the most relevant use case of the strategy described herein, the imaging device 200 may be used for eye tracking. According to various aspects, an eye tracker may include one or more imaging devices 200. The differential approach proposed herein has been found particularly suitable to track the movement of eyes in view of the capability of providing correct detection even in the presence of the transparent portion of the corneal lens.
[0036] It is however understood that the applications of the imaging device 200 and, in general, of the strategy described herein, are not limited to eye tracking. Other exemplary application
scenarios may include the use of the imaging device in a vehicle, in an indoor monitoring system, in a smart farming system, in an industrial robot, and the like.
[0037] In accordance with the method 210, the processor 202 may be configured to compare a first result 212 of a first depth measurement with a second result 214 of a second depth measurement. The first depth measurement and the second depth measurement may be of two different types, each influenced in a different manner by the presence of transparent objects in the imaged scene. In particular, one of the first depth measurement or the second depth measurement may be an optical path length measurement, and the other one of the first depth measurement or the second depth measurement may be a disparity-based depth measurement.
[0038] In the present disclosure, reference is made to the first depth measurement being an optical path length measurement and to the second depth measurement being a disparity-based depth measurement. It is however understood that the terms “first” and “second” are used merely to indicate the different methods, without implying any order or ranking between the methods (e.g., any temporal order, priority order, etc.).
[0039] A “depth measurement” may be a measurement configured to deliver three-dimensional information (illustratively, depth information) about a scene, e.g. a measurement capable of providing three-dimensional information about the objects present in the scene. A “depth measurement” may thus allow determining (e.g., measuring, or calculating) three-dimensional coordinates of an object present in the scene, illustratively a horizontal-coordinate (x), vertical-coordinate (y), and depth-coordinate (z). A “depth measurement” may be illustratively understood as a distance measurement configured to provide a distance measurement of the objects present in the scene, e.g. configured to determine a distance at which an object is located with respect to a reference point (e.g., with respect to the imaging device 200).
[0040] The distance at which an object is located with respect to a reference point (e.g., with respect to the imaging device 200) considering the three-dimensional coordinates may in general be referred to as “range”. Another way of expressing such distance may be as a “depth”,
which may be or include the distance at which the object is located with respect to a reference plane (e.g., the plane orthogonal to the optical axis of the imaging device 200, e.g., the plane passing through the optical aperture of the imaging device 200). Illustratively, the
“depth” may be or include the projection of the range along the device optical axis (Z).
[0041] It is understood that references herein to a “range” (of an object) may apply in a corresponding manner to a “depth” (of the object), and vice versa. Illustratively, a “depth” may be converted to a corresponding “range” and a “range” may be converted to a corresponding “depth” according to calculations known in the art, so that the aspects described in relation to a “range” may be correspondingly valid for a “depth”, and vice versa. For example, a disparity-based method may directly deliver, as a result, a depth, whereas an optical path-based method may deliver, as a result, a “range” (assuming that the baseline is negligible, or approximately the range if the baseline is considered).
[0042] An “optical path length measurement” may be a measurement configured to derive depth-information by measuring the optical path travelled by light. Illustratively, an “optical path length measurement” may be a measurement configured to determine a depth position of an object (illustratively, a distance at which the object is located), by measuring the length of the optical path that emitted light travels to reach the object and then back to reach the sensor (e.g., a camera, a photo diode, etc.).
[0043] In general, the strategy described herein may be applied to any depth measurement method based on deriving a depth value of an object from the length of the optical path travelled by the emitted/reflected light. For example, the optical path length measurement may be or include a direct time-of-flight measurement. As another example, the optical path length measurement may be or include an indirect time-of-flight measurement. As a further example, the optical path length measurement may be or include an interferometry measurement, e.g. a self-mixing interferometry measurement. These three examples have been found to provide for an efficient implementation of the differential approach described herein, e.g. for an efficient
integration in compact devices (e.g., together with suitable sensors for the second depth measurement method). It is however understood that the aspects described herein may in general be applied or generalized to any signal processing technique sensitive to the optical path length. Further examples to which the aspects described herein may be applied may include amplitude-modulated continuous wave (AMCW)-based measurements, and/or frequency-modulated continuous wave (FMCW)-based measurements.
[0044] A “disparity-based” measurement may be a measurement configured to derive depth-information based on a triangulation of the emitted/reflected light. Illustratively, a “disparity-based” measurement may be a measurement configured to determine a depth position of an object based on a difference in distance between corresponding image points and the center of projector/camera. A “disparity-based” measurement may also be referred to herein as “triangulation-based” depth measurement.
[0045] In general, the strategy described herein may be applied to any depth measurement based on deriving a depth value of an object from triangulation of the emitted/reflected light. For example, the disparity-based depth measurement may include a depth measurement based on structured light. As another example, the disparity-based depth measurement may include a stereo vision measurement. These two examples have been found to provide for an efficient implementation of the differential approach described herein, e.g. for an efficient integration in compact devices (e.g., together with suitable sensors for the first depth measurement method).
[0046] In general, the first depth measurement and the second depth measurement may be carried out in the same scene, e.g. in the same field of view. Illustratively, the first depth measurement and the second depth measurement may be carried out in a field of view common to the first depth measurement and the second depth measurement. For example, a first field of view of the first depth measurement (e.g., as defined by a corresponding first depth sensor, see FIG.2B) and a second field of view of the second depth measurement (e.g., as defined by a corresponding second depth sensor, see FIG.2B) may coincide with one another. This may be
a simple configuration which allows a simpler processing of the results (e.g., a simpler comparison).
[0047] As another example, the first field of view of the first depth measurement and the second field of view of the second depth measurement may have only an overlap with one another. In this scenario, the overlapping region may be the common field of view of the depth measurements. For example, the first field of view and the second field of view may be shifted with respect to one another, e.g. in the horizontal direction and/or vertical direction.
[0048] As a further example, the first field of view may be greater than the second field of view and may (completely) contain the second field of view. In this scenario, the second field of view may be the common field of view of the depth measurements. In a corresponding manner, in case the second field of view is greater than the first field of view and (completely) contains the first field of view, the first field of view may be the common field of view of the depth measurements.
[0049] In general, the first result 212 of the first depth measurement and the second result 214 of the second depth measurement may refer to the respective first field of view and second field of view. For the processing described herein, the common part of the fields of view may be considered, which may correspond to the entire first and second field of view (e.g., to an overall field of view of the imaging device 200), or to a portion of the first and/or second field of view, as discussed above.
[0050] In an exemplary configuration, the first depth measurement and the second depth measurement may be carried out simultaneously (in other words, concurrently) with one another. This may ensure a faster processing of the information and a faster completion of the process. In other aspects, the first depth measurement and the second depth measurement may be carried out one after the other, e.g. in a short temporal sequence, for example within less than 5 seconds, for example within less than 1 second, for example within less than
100 milliseconds. This other configuration may allow sharing one or more components and re-using them for the two methods.
[0051] Based on the comparison of the first result 212 with the second result 214, the processor 202 may be configured to determine whether the common field of view includes at least one object having a refractive portion. The processor 202 may be configured to determine (e.g., identify, or calculate) one or more differences between the first result 212 and the second result 214, which differences may be indicative of the presence of one or more objects having a refractive portion in the field of view common to the first and second depth measurement. Illustratively, the first result may include first depth values, each corresponding to respective (x-y) coordinates of the common field of view (e.g., each first depth value may be indicative of a depth value at the x-y coordinates). In a corresponding manner, the second result may include second depth values, each corresponding to respective (x-y) coordinates of the common field of view. The processor 202 may be configured to determine one or more differences between first depth values and the second depth values (illustratively, at the same coordinates of the common field of view).
[0052] The processor 202 may be configured to determine whether the common field of view includes at least one object having a refractive portion based on the one or more differences. As an example, the processor 202 may be configured to determine that at least one object with a refractive portion is present in the common field of view in the case that a difference between a first depth value obtained via the first depth measurement and a second depth value obtained via the second depth measurement is in a predefined range (e.g., is greater than a predefined threshold). The predefined range may be selected such that a difference in the predefined range is indicative of a difference in the depth values caused by a refractive surface.
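A minimal sketch of the comparison described above is given below, assuming that the two results have already been remapped to the same pixel grid of the common field of view and are expressed in the same units; the threshold value and the NaN convention for invalid pixels are assumptions made only for the example.

```python
import numpy as np


def refractive_object_present(depth_path_based: np.ndarray,
                              depth_disparity_based: np.ndarray,
                              threshold_m: float = 5e-3) -> bool:
    """Return True if any pixel of the common field of view shows a
    difference between the two depth values that exceeds the threshold,
    taken here as indicative of an object having a refractive portion.

    Both inputs are depth maps in meters over the common field of view;
    NaN marks invalid pixels and is ignored in the comparison.
    """
    diff = np.abs(depth_path_based - depth_disparity_based)
    return bool(np.nanmax(diff) > threshold_m)
```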
[0053] The way the results 212, 214 are compared may be adapted depending on the specific layout. In general, the world-to-image projection may be different for the two depth measurements, except when the exact same camera is used for both acquisitions, in which case the mapping is an identity and can be bypassed. The mapping between the results 212, 214 may be established in different ways, as convenient. In some aspects, the processor 202 may be configured to remap the coordinates of one acquisition to the other acquisition (either path-based depth to correspondence-based depth, or the opposite). In other aspects, the processor 202 may be configured to project the depth images into the same world coordinates creating a 3D point cloud map. A differential depth calibration, either performed once and stored in the device (e.g., in the storage 204), or at runtime, may also be used to correct for residual errors and ensure that the same depth map is generated when there are no refractive objects (see also FIG.7A to FIG.7C).
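A minimal sketch of the point-cloud projection mentioned above is given below, assuming a pinhole model with calibrated intrinsics (fx, fy, cx, cy); the function and parameter names are illustrative only, and the relative pose between the two sensors would still have to be applied to bring both clouds into the same world frame.

```python
import numpy as np


def depth_map_to_point_cloud(depth_m: np.ndarray,
                             fx: float, fy: float,
                             cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (meters, shape H x W) into an (H*W, 3) array
    of 3D points using the pinhole model, so that the results of the two
    depth measurements can be compared in common world coordinates."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
```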
[0054] According to various aspects, a depth map representation may be used to provide information on the common field of view in a format that is compact and simple to process. In this scenario, the first result 212 may be or include a first depth map of the common field of view, and the second result may be or include a second depth map of the common field of view. A “depth map” may include a plurality of data points, each corresponding to a respective horizontal coordinate, vertical coordinate, and depth value (illustratively a depth-coordinate) of the common field of view.
[0055] In this configuration the processor 202 may be configured to determine whether the common field of view includes an object having a refractive portion based on a difference between the first depth map and the second depth map. For example, the processor 202 may be configured to generate the output signal 216 based on a difference between the first depth map and the second depth map. In this scenario, the output signal 216 may be or represent a differential depth map. The differential depth map may include a plurality of data points, each corresponding to a respective horizontal coordinate, vertical coordinate, and depth value of the common field of view, and the depth value may represent a difference between a first depth value of the first depth map and a second depth value of the second depth map at these coordinates.
[0056] Illustratively, the differential depth map may be representative, for each coordinate in the common field of view, of whether the common field of view includes at that coordinate at least one object having a refractive portion. In general, for each coordinate in the common field of view, a depth value of the differential depth map at that coordinate is representative of the common field of view including at least one object having a refractive portion at that coordinate in the case that the depth value is within a predefined range, e.g. in the case that the difference between the first depth value and the second depth value at that coordinate is greater than a predefined threshold. For example, the differential output may be a map returning zero, or a positive match, when there are no refractive lens-like objects, and the two depth methods return the same value, or it can return a non-zero value (or depth mismatch) where some lensing effect occurred. In some aspects, the processor 202 may be configured to apply a differential calibration map to either (or both) of the two acquired depths based on the calibration method outlined in FIG.7A to FIG.7C.
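A minimal sketch of such a differential depth map is given below, including the optional differential calibration map mentioned above; the threshold, the sign convention and the zeroing of sub-threshold values are assumptions made only for the example.

```python
from typing import Optional

import numpy as np


def differential_depth_map(depth_path_based: np.ndarray,
                           depth_disparity_based: np.ndarray,
                           calibration_map: Optional[np.ndarray] = None,
                           threshold_m: float = 5e-3) -> np.ndarray:
    """Signed per-pixel difference of the two depth maps over the common
    field of view. Values whose magnitude stays below the threshold are set
    to zero (no lensing detected); an optional calibration map removes
    residual systematic offsets measured on a scene without refractive
    objects."""
    diff = depth_path_based - depth_disparity_based
    if calibration_map is not None:
        diff = diff - calibration_map
    return np.where(np.abs(diff) < threshold_m, 0.0, diff)
```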
[0057] By way of illustration, the analysis of the depth maps may be understood as an algorithm that aims to establish differences between the two depth acquisitions, depending on the type of data that has been acquired. According to various aspects, the processor 202 may be configured to determine (e.g., calculate) a straight difference of the two depth maps. In other aspects, more refined methods may be provided, e.g. the processor 202 may be configured to determine (e.g., derive) a 3D point cloud map and/or to carry out a mesh reconstruction prior to comparing the depth maps. In general, the processor 202 may be configured to account for the resolution and precision limits of both used techniques to provide a more accurate estimation.
[0058] The object having a refractive portion (e.g., a refractive surface) may be or include any type of object which is at least partially transparent. Illustratively, the object may be or include any type of object having a transparent surface through which light may propagate. For example, the at least one object having a refractive portion may be or include a lens-like object.
Exemplary objects that may be detected according to the strategy proposed herein may include: an eye (having a corneal lens as refractive portion), a headlight, a pair of glasses, a bottle, etc.
[0059] According to various aspects, the processor 202 may be configured to generate, based on the result of the comparison, an output signal 216 representative of whether the common field of view includes at least one object having a refractive portion. The output signal 216 may be representative of various information on the common field of view, such as a number of objects having a refractive portion present in the common field of view, a position (e.g., x-y coordinates) of objects having a refractive portion present in the common field of view, a shape of objects having a refractive portion present in the common field of view, and the like. In some aspects, the output signal 216 may be a differential depth map representative of a difference between the first depth map and the second depth map.
[0060] The output signal 216 may be provided for further processing, e.g. at the processor 202 or at other processors or circuits external to (and separate from) the processor 202, such as a processing unit of a smartphone, a central control unit of a vehicle, and the like. The output signal 216 may thus provide additional information that may not be obtained in a simple manner with individual depth measurements, and may thus be used to enable or enhance a mapping of the scene. As an example, the processor 202 may be configured to use the output signal 216 for a depth correction of the first result of the first depth measurement and/or for a depth correction of the second result of the second depth measurement. Illustratively, the information on the presence (and location, shape, etc.) of transparent objects may allow correcting the results of the depth measurements to take such “error-inducing” objects into account.
[0061] According to various aspects, the processor 202 may be configured to apply any suitable data conditioning to provide the output signal, e.g. according to a desired application. For example, the processor 202 may be configured to apply an application-specific algorithm to process the output signal 216.
[0062] As an example, the processor 202 may be configured to generate a logical map to highlight the depth mismatch locations in the image. As another example, the processor 202 may be configured to use a mathematical or learning-based model to reconstruct the original shape and orientation. As a further example, the processor 202 may be configured to apply Boolean masks as input to high resolution (e.g., RGB) images to highlight features, or to define contours. As a further example, the processor 202 may be configured to use the generated data as an input to a depth correction algorithm enabling correction of either of the acquired depth maps.
[0063] In various aspects, other types of data may be used to enhance the processing of the results 212, 214 of the depth-measurements. The processor 202 or, alternatively, any other external processor, may be configured to further process the results 212, 214 including additional information taken from sensor data (e.g., other types of information originating from the depth-sensors). For example, the processor 202 may be configured to measure the distortion induced to some encoded feature of the projection pattern from the intensity images acquired by one of the cameras (e.g., relative distance and shape of the pattern features) to return additional information on the refractive surface geometry, position and orientation, or to improve the measurement accuracy.
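As a minimal sketch of one of the application-specific processing steps mentioned above (applying a Boolean mask to a high-resolution image to highlight features), the differential depth map may be turned into a mask and blended into an RGB image; the color, the blending factor and the assumption that the map has already been resampled to the RGB grid are illustrative only.

```python
import numpy as np


def highlight_depth_mismatch(rgb_image: np.ndarray,
                             differential_map: np.ndarray,
                             color=(255, 0, 0),
                             alpha: float = 0.5) -> np.ndarray:
    """Blend a highlight color into an RGB image (H x W x 3, uint8) wherever
    the differential depth map (H x W) is non-zero, i.e. where a depth
    mismatch indicative of a refractive portion was detected."""
    mask = differential_map != 0.0
    out = rgb_image.astype(np.float32).copy()
    out[mask] = (1.0 - alpha) * out[mask] + alpha * np.asarray(color, dtype=np.float32)
    return out.astype(np.uint8)
```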
[0064] In the context of the present disclosure, e.g. in FIG.2A and FIG.2B, reference is made to a scenario with two depth measurement methods (and, accordingly, two depth sensors) that are differently influenced by transparent objects. This may be the most relevant use case, as it allows implementing the strategy proposed herein in a simple setup. It is however understood that, in principle, the results of more than two implementations of different types of depth measurement methods may be considered. For example, more than one optical path length measurement and/or more than one disparity-based depth measurement may be considered, providing a more accurate determination of the presence of transparent objects in the scene.
[0065] In various aspects, the processor 202 may thus be configured to compare the first result 212 and the second result 214 with a third result of a third depth measurement carried out in a
field of view common to the three depth measurements, and determine whether the common field of view includes at least one object having a refractive portion based on a result of the comparison. The third depth measurement may be a further optical path length measurement or a further disparity-based depth measurement. The same may apply for a fourth depth measurement, etc.
[0066] FIG.2B shows the imaging device 200 in a schematic representation, according to various aspects. As shown in FIG.2B, the imaging device 200 may include (at least) a first depth sensor 206 configured to carry out the first depth measurement, and a second depth sensor 208 configured to carry out the second depth measurement. In the schematic representation in FIG.2B, the imaging device 200 is illustrated as a single “spatially contained” component. For example, the depth sensors 206, 208 and the processor 202 may be formed (e.g., integrated) on a single substrate (e.g., a single printed circuit board), e.g. the first depth sensor 206 and the second depth sensor 208 may be integrated onto a same substrate. This may be a preferred configuration to provide a robust and compact arrangement for the imaging device. A depth sensor 206, 208 may also be referred to herein as three-dimensional sensor, or 3D-sensor. A general configuration of depth sensors to carry out depth measurements according to optical path length or disparity information is known in the art. A more detailed description will be provided in relation to FIG.4A to FIG.6B to discuss relevant configurations for the present disclosure.
[0067] It is however understood that, in principle, the depth sensors 206, 208 and/or the processor 202 may also be disposed on different substrates or, more generally, may be spatially separated from one another. For example, the processor 202 may be located at a remote location, illustratively in the “cloud”, and may provide the processing of the results 212, 214 of the depth measurements remotely. As another example, the depth sensors 206, 208 may not be disposed in close proximity to one another, e.g. may be disposed at the right-hand side and left-hand side of a computer monitor, or at the right headlight and left headlight of a vehicle, or the like.
Also in this “scattered” configuration, however, the depth sensors 206, 208 and the processor 202 may still be understood to be part of the imaging device 200.
[0068] It is also understood that, in principle, the imaging device 200 may include more than two depth sensors 206, 208, as discussed in relation to FIG.2A. For example, the imaging device 200 may include a third depth sensor configured to carry out a third depth measurement (e.g., a further optical path length measurement or disparity-based measurement). As another example, the imaging device 200 may further include a fourth depth sensor, etc.
[0069] It is also understood that the representation in FIG.2A and FIG.2B is simplified for the purpose of illustration, and that the imaging device 200 may include additional components with respect to those shown, e.g. one or more amplifiers to amplify a signal representing the received light, one or more noise filters, and the like.
[0070] The first depth sensor 206 may be configured to carry out the optical path length measurement and deliver a first output signal representative of a result 218 of the optical path length measurement to the processor 202. In a corresponding manner, the second depth sensor 208 may be configured to carry out the disparity-based depth measurement and deliver a second output signal representative of a result 220 of the disparity-based depth measurement to the processor 202.
[0071] A depth sensor 206, 208 may be configured to transmit the respective output signal to the processor in any suitable manner, e.g. via wired- or wireless-communications. The output signal of a depth sensor 206, 208 may be in any suitable form to allow processing by the processor 202. For example, the output signal of a depth sensor 206, 208 may be an analog signal, and the imaging device 200 (e.g., as part of the processor 202) may include an analog-to-digital converter to deliver a digital signal to the processor 202, and the processor 202 may be configured to provide digital signal processing of the output signals of the depth sensors 206, 208. In another exemplary configuration, the processor 202 may be configured to provide analog processing of the (analog) output signals of the depth sensors 206, 208. As a
further exemplary configuration, a depth sensor 206, 208 may include an analog-to-digital converter to deliver (directly) a digital output signal to the processor 202.
[0072] As discussed in relation to FIG.2A, the first depth sensor 206 may be configured to implement any suitable depth measurement based on optical path length. For example, the first depth sensor 206 may be configured as a direct time-of-flight sensor. As another example, the first depth sensor 206 may be configured as an indirect time-of-flight sensor. As a further example, the first depth sensor 206 may be configured as a self-mixing interferometer. Other examples may include the first depth sensor 206 being configured to carry out an amplitude-modulated continuous wave (AMCW)-based measurement, and/or a frequency-modulated continuous wave (FMCW)-based measurement. The first depth sensor 206 may thus be configured to acquire information on the length of the optical round trip “illuminator-to-sensor camera” by using a technique that is either sensitive to the time required for a short light pulse to propagate along the full path, or to the accumulated phase of the light itself, or of some modulation of the same.
[0073] The second depth sensor 208 may be configured to implement any suitable depth measurement based on disparity-calculations. For example, the second depth sensor 208 may be configured as a structured-light depth sensor. In this configuration, the intensity projected by a structured light illuminator that reflects on the scene is imaged on a sensor through the optical layers of a sensor camera. As another example, the second depth sensor 208 may be configured based on an approach involving more than one camera, e.g. the second depth sensor 208 may be configured as a stereo vision sensor.
[0074] In general, the output of an optical path length measurement and/or of a disparity-based depth measurement may be further processed to derive depth values from the measurement. Each depth sensor 206, 208 may be configured to apply any signal processing and/or statistics (for example, electrical delay to distance, peak recognition, fringe counting, etc.) suitable to
obtain distance information, as well as use calibration information. The implementation of such signal processing and/or statistics may be known in the art for the various methods.
[0075] According to various aspects, the output of an optical path length measurement and/or of a disparity-based depth measurement may be further corrected to take into account possible distortions introduced by the emitter optics, receiver optics, cameras, illuminators, and the like. Such corrections may be carried out by the depth sensors 206, 208 prior to delivering the output to the processor 202. In this scenario, the result 218 of the optical path length measurement may (directly) be or include the first result 212 of the first depth measurement, and the result 220 of the disparity-based depth measurement may (directly) be or include the second result 214 of the second depth measurement.
[0076] In another configuration the processor 202 may be configured to carry out a correction of the result 218 of the optical path length measurement and/or of the result 220 of the disparity-based depth measurement. This configuration may provide a simpler setup for the depth sensors, transferring the processing load at the processor 202. Illustratively, in various aspects, the processor 202 may be configured to determine (e.g., derive) the first result 212 of the first depth measurement from the result 218 of the optical path length measurement, e.g. by applying a predefined (first) correction. In a corresponding manner, in various aspects, the processor 202 may be configured to determine (e.g., derive) the second result 214 of the second depth measurement from the result 220 of the disparity-based depth measurement, e.g. by applying a predefined (second) correction.
[0077] The (first) correction of the optical path length measurement, e.g. applied by the sensor 206 (e.g., a processor of the sensor 206) or by the processor 202, may include one or more corrections. For a far-away object, where the illuminator-camera baseline is negligible, and assuming propagation in air, the target range (vector distance from the sensor) may be half of the total round trip distance from illuminator to detector. This may be corrected in scenarios where the baseline is relevant and the target is close enough that parallax contributions may be
significant. As a further example, the predefined correction may include, additionally or alternatively, a range-to-depth correction (or geometrical distortion correction), where the Z component of the range is extracted, resulting in a depth map. The range-to-depth correction may avoid an over-estimate of the depth for depth information acquired at a large field of view.
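A minimal sketch of the range-to-depth (geometrical distortion) correction is given below, assuming a pinhole model with calibrated intrinsics: the depth is the Z component of the range vector, obtained by scaling the range with the cosine of the ray angle at each pixel; the function and parameter names are illustrative only.

```python
import numpy as np


def range_to_depth(range_map_m: np.ndarray,
                   fx: float, fy: float,
                   cx: float, cy: float) -> np.ndarray:
    """Extract the depth (Z component) from a per-pixel range map under the
    pinhole model: Z = r / sqrt(1 + xn^2 + yn^2), where (xn, yn) are the
    normalized image-plane coordinates of the pixel."""
    h, w = range_map_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    xn = (u - cx) / fx  # normalized image-plane x
    yn = (v - cy) / fy  # normalized image-plane y
    return range_map_m / np.sqrt(1.0 + xn**2 + yn**2)
```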
[0078] The (second) correction of the disparity-based depth measurement, e.g. applied by the sensor 208 (e.g., by a processor of the sensor 208) or by the processor 202, may include one or more corrections. For example, for structured light illumination, images may be rectified and undistorted by means of calibration data in order to identify correspondences between the projector pattern and the pattern projected on the scene and imaged by the camera. Using the triangulation approach, a depth map may then be extracted from the projector-camera pattern disparity. In case two or more cameras are used, stereo vision correspondence may be equivalently used, e.g. the pattern projected on the scene may be acquired by two cameras separated by a known distance, and correspondence is established between such images. Methods to identify and quantify correspondence can vary depending on the specific pattern, as known in the art.
[0079] By way of illustration, the processor 202 may be different from the processors (e.g., processing units) of the individual components (illustratively, the individual depth sensors 206, 208). The processor 202 may be configured to receive raw data as an input from the component sensing units, and/or pre-processed depth data when such components embed independent processing units. For example, the processor 202 may receive both raw data and depth data, e.g. in case a first depth processing is done by (dedicated) individual units, and a second processing re-using the raw data is done by the processor 202. The processor 202 may be configured to process data to generate depth information, when not yet provided by component embedded independent processing units, using information from the different component units. The processor 202 may be configured to map coordinates from the depth maps generated by one sensor input or one processing step, to the depth map generated by another sensor input or
another processing step of the system. Additionally or alternatively, the processor 202 may be configured to map depth coordinates to real 3D world coordinates. The processor 202 may be configured to process depth information from two different sensing methods and generate an output that depends on the difference of the two depths generated by the two sensing methods. The processor 202 may be configured to further process the differential data to generate an application-specific output.
[0080] FIG.3A and FIG.3B each shows a schematic flow diagram of a method 300a, 300b of detecting transparent objects with depth measurements. Illustratively, the method 300a, 300b may be an exemplary implementation of the method 210 carried out by the processor 202.
[0081] As shown in FIG.3A, the method 300a may include, in 310, comparing a first result (e.g., the result 212) of a first depth measurement with a second result (e.g., the result 214) of a second depth measurement. The first depth measurement may be carried out via an optical path length measurement, and the second depth measurement may be carried out via a disparity-based depth measurement. The first depth measurement and the second depth measurement may be carried out in a common field of view. In some aspects (see also FIG.3B), the method 300a may further include acquiring the first result and the second result. Illustratively, the method 300a may further include carrying out an optical path length measurement to determine (e.g., derive) the first result, and carrying out a disparity-based depth measurement to determine the second result.
[0082] The method 300a may include, for example, carrying out the optical path length measurement and the disparity-based depth measurement in parallel with one another, e.g. simultaneously with one another. As another example, the method 300a may include carrying out one of the optical path length measurement or the disparity-based depth measurement, and, after having carried out the one of the optical path length measurement or the disparity-based depth measurement, carrying out the other one of the optical path length measurement or the disparity-based depth measurement.
[0083] The method 300a may further include, in 320, determining, based on a result of the comparison, whether the common field of view includes at least one object having a refractive portion. Illustratively, the method 300a may include determining whether a transparent object is present in the common field of view based on differences in the depth values obtained via the two depth measurements. In various aspects, the method 300a may further include determining one or more properties of the at least one object (if present) such as its location, its shape, its orientation, and the like.
[0084] As an example (see also FIG.3B), the method 300a may include generating a differential output from the first result and the second result. In various aspects, the first result may be or include a first depth map of the common field of view, and the second result may be or include a second depth map of the common field of view. The method 300a may include generating a differential depth map from the first depth map and the second depth map (e.g., by subtracting the first depth map from the second depth map, or vice versa, or via a more elaborate approach).
[0085] FIG.3B shows method 300b which may be an exemplary implementation of the method 300a, e.g. including exemplary steps that may be present in the method 300a.
[0086] The method 300b may include, in 330, acquiring an optical path length, e.g. the method 300b may include carrying out an optical path length measurement, for example based on time-of-flight or phase of the emitted light. The method 300b may further include, in 340, extracting depth information from the optical path length. Illustratively, the method 300b may include applying one or more corrections to the optical path length to obtain depth value(s) from the measured optical path length(s).
[0087] The method 300b may include, in 350, acquiring structured light images, e.g. the method 300b may include carrying out a disparity-based depth measurement. The method 300b may further include, in 360, extracting depth information from the structured light images, e.g. using a correspondence-based algorithm. Illustratively, the method 300b may include applying one
or more corrections to the structured light images to obtain depth values from the acquired structured light images.
[0088] The acquisition of the optical path length, 330, and the acquisition of the structured light images, 350, and the corresponding derivation of depth-information may be carried out in parallel with one another, or in sequence, as discussed above.
[0089] The method 300b may further include, in 370, establishing a mapping between the depth-information obtained via the optical path length measurement and the depth-information obtained via the disparity-based depth measurement. For example, the method 300b may include, in 370, establishing a mapping between depth map coordinates generated via the optical path length measurement and via the disparity-based depth measurement.
[0090] The method 300b may further include, in 380, comparing the results obtained via the optical path length measurement and via the disparity-based depth measurement. At 380 the method 300b may further include carrying out depth-calibration and correction, if desired. As a result of the comparison, the method 300b may include generating a differential output, e.g. a differential depth map.
[0091] The method 300b may further include, in 390, carrying out further processing of the differential output, e.g. of the differential depth map or error map. This may provide generating an output (e.g., an output signal), which may be used for further applications, as discussed in relation to FIG.2A. The additional processing may also optionally include using some additional features of data acquired by the sensors 206 and 208, such as spatial features of the measured pattern intensity, or velocity information from phase shifts caused by the Doppler effect.
[0092] In the following, with reference to FIG.4A to FIG.6B, possible configurations of depth sensors for implementing the strategy described herein (e.g., possible configurations of the first depth sensor 206 and second depth sensor 208) will be illustrated. In general, the components (e.g., optical, electrical, mechanical) to carry out an optical path length measurement and/or a
disparity-based measurement may be known in the art. A brief description is provided herein to introduce aspects and possible configurations relevant for the present disclosure.
[0093] FIG.4A and FIG.4B each shows a (first) depth sensor 400a, 400b configured to carry out an optical path length measurement. Illustratively, the depth sensor 400a, 400b may be an exemplary realization of the first depth sensor 206 described in relation to FIG.2B. FIG.4C and FIG.4D each shows a (second) depth sensor 450a, 450b configured to carry out a disparity-based depth measurement. Illustratively, the depth sensor 450a, 450b may be an exemplary realization of the second depth sensor 208 described in relation to FIG.2B. It is understood that the representation in FIG.4A to FIG.4D may be simplified for the purpose of illustration, and the depth sensor 400a, 400b, 450a, 450b may include additional components with respect to those shown (e.g., a processor, a time-to-digital converter, an amplifier, a filter, and the like).
[0094] The (first) depth sensor 400a in FIG.4A may be configured to carry out a time-of-flight measurement, e.g. the depth sensor 400a may be a direct time-of-flight sensor or an indirect time-of-flight sensor. The (first) depth sensor 400b in FIG.4B may be configured to carry out a self-mixing interferometry measurement, e.g. the depth sensor 400b may be a self-mixing interferometer.
[0095] The (second) depth sensor 450a in FIG.4C may be configured to carry out a depth measurement based on structured light. The (second) depth sensor 450b in FIG.4D may be configured to carry out a depth measurement based on stereo vision, e.g. the depth sensor 450b may be a stereo vision sensor (e.g., a sensor configured to carry out active stereo vision measurements).
[0096] In general, a depth sensor 400a, 400b, 450a, 450b may include, at the emitter side, an illuminator 402, 412, 452, 462, and emitter optics 406, 416, 456, 466. An illuminator 402, 412, 452, 462 may be or include a light source configured to emit light, and the emitter optics 406,
416, 456, 466 may be configured to direct the emitted light in a field of view of the depth sensor 400a, 400b, 450a, 450b.
[0097] An illuminator 402, 412, 452, 462 may be configured to emit light having a predefined wavelength, for example in the visible range (e.g., from about 380 nm to about 700 nm), infrared and/or near-infrared range (e.g., in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm, or for example at 905 nm or 1550 nm), or ultraviolet range (e.g., from about 100 nm to about 400 nm). In some aspects, an illuminator 402, 412, 452, 462 may be or may include an optoelectronic light source (e.g., a laser source). As an example, an illuminator 402, 412, 452, 462 may include one or more light emitting diodes. As another example an illuminator 402, 412, 452, 462 may include one or more laser diodes, e.g. one or more edge emitting laser diodes or one or more vertical cavity surface emitting laser diodes. In various aspects, an illuminator 402, 412, 452, 462 may include a plurality of emitter pixels, e.g. an illuminator 402, 412, 452, 462 may include an emitter array having a plurality of emitter pixels. For example, the plurality of emitter pixels may be or may include a plurality of laser diodes. For example, an illuminator 402, 412, 452, 462 may include an array of sources of coherent light. According to various aspects, an illuminator 402, 412, 452, 462 may include an array of electronic devices that monitor the output intensity or junction voltage of each source of coherent light. In various aspects, an illuminator 402, 412, 452, 462 may be a projector.
[0098] For time-of-flight measurements, the illuminator 402 may be configured to emit individual light pulses (e.g., individual laser pulses) for a direct time-of-flight measurement, or may be configured to emit continuous modulated light, e.g. continuous light having an amplitude modulation or frequency modulation, for an indirect time-of-flight measurement. In an exemplary configuration, the illuminator 402 may be configured to emit light pulses at regular intervals. As another example, the illuminator 402 may be configured to emit light pulses grouped in bursts or in a more complex temporal pattern.
[0099] For self-interferometric measurements, the illuminator 412 may be or include a laser source (e.g., a laser diode), and the depth sensor 400b may include optics 416 configured to direct the light collected from the field of view of the depth sensor 400b into the illuminator 412. The illuminator 412 may be configured to emit continuous modulated light, e.g. continuous light having a frequency modulation. In this configuration, the light reflected back into the cavity induces a modulation of the laser light properties (e.g., an amplitude modulation and a frequency modulation), or of other electrical characteristics of the source (e.g., a modulation of the laser diode junction voltage).
[00100] For structured light measurement or stereo vision measurement, the illuminator 452, 462 (and/or emitter optics 456, 466 of the depth sensor 450a, 450b) may be configured to emit (e.g., project) a predefined light pattern, for example a dot pattern. The projected pattern may include predefined features (e.g., dots) whose displacement at the receiver side may be used to determine the depth at which the feature has been reflected.
[00101] In an exemplary configuration, which may provide a more accurate disparity-based depth measurement, the illuminator 452, 462 (and/or the emitter optics 456, 466) may be configured to emit a light pattern including pattern features that, along the orthogonal direction, are larger than the expected orthogonal-direction shift (e.g., vertical stripes or elliptical/elongated dots). In another configuration, the illuminator 452, 462 (and/or the emitter optics 456, 466) may be configured to emit a complex pattern that allows reconstructing the original vertical position using some features (for instance a dot pattern with a square grid superimposed). In another configuration, the illuminator 452, 462 (and/or the emitter optics 456, 466) may be configured to emit pattern elements including encoded information (such as a specific modulation, or a symbol) allowing the computer vision algorithm to reconstruct the correspondence; alternatively, more refined methods may be designed based on the application itself to establish correspondence in the presence of an orthogonal shift. The orthogonal shift, when measured, may include additional information on the refractive objects. In various aspects, such
information may be stored and used in the following steps in addition to the differential depth map.
[00102] In general, the illuminator 402, 412 of the first depth sensor 400a, 400b may be configured to generate pulsed signals with a precise timing or, alternatively a modulated signal. For self-interferometry, the illuminator 412 (e.g., corresponding optics 416) may be configured to collect the light reflected from the scene and inject it into the illuminator (e.g., into the source(s) of coherent light). The illuminator 452, 462 of the second depth sensor 450a, 450b may be configured to generate some pattern at infinite distance or, alternatively, at some finite distance. The pattern may encode information to further simplify feature matching in the presence of distortion.
[00103] In various aspects, a (second) depth sensor 450a, 450b configured for disparity-based measurements may include more than one illuminator 452, 462, e.g. more than one projector. A further (second) projector may allow generating more complex light patterns. For example, the second illuminator may be configured to generate a pattern different from the first illuminator 452, 462 or alternatively, a smooth pattern similar to a homogeneous irradiance. As another possibility the optics associated with the first illuminator 452, 462 and array of sources may be tiled to enable the functionalities of a second illuminator in the first illuminator 452, 462.
[00104] In general, a depth sensor 400a, 400b, 450a, 450b may include, at the receiver side, a light sensor 404, 414, 454, 464 configured to generate a sensing signal representative of the light impinging onto the sensor 404, 414, 454, 464. A depth sensor 400a, 450a, 450b may include receiver optics 408, 458, 468 configured to collect the light reflected from the field of view and direct the collected light onto the sensor 404, 454, 464, e.g. in case of disparity-based measurements the receiver optics 458, 468 may be configured to form an image of the projected pattern onto the sensor.
[00105] A light sensor 404, 414, 454, 464 may be configured to be sensitive for the emitted light, e.g. may be configured to be sensitive in a predefined wavelength range, for example in the visible range (e.g., from about 380 nm to about 700 nm), infrared and/or near infrared range (e.g., in the range from about 700 nm to about 5000 nm, for example in the range from about 860 nm to about 1600 nm, or for example at 905 nm or 1550 nm), or ultraviolet range (e.g., from about 100 nm to about 400 nm). A light sensor 404, 414, 454, 464 may include one or more light sensing areas, for example a light sensor 404, 414, 454, 464 may include one or more photo diodes. As examples, a light sensor 404, 414, 454, 464 may include at least one of a PIN photo diode, an avalanche photo diode (APD), a single-photon avalanche photo diode (SPAD), or a silicon photomultiplier (SiPM).
[00106] For example, for time-of-flight measurements, the light sensor 404 may include one or more single-photon avalanche photo diodes. The single-photon avalanche photo diodes (SPADs) allow generating a strong (avalanche) signal upon reception of single photons impinging on the photo diodes, thus providing a high responsivity and a fast optical response. In general, for time-of-flight measurements, the light sensor 404 may be configured to store time-resolved detection information or to detect the phase of a modulated signal.
[00107] As another example, for self-interferometric measurement, the light sensor 414 may be or include a photo diode that receives light from the laser cavity, rather than from the field of view of the depth sensor 400b. In this configuration, the signal of the photo diode is representative of the modulation of the laser light, and thus indicative of the distance at which an object reflecting the light may be located.
[00108] As a further example, for disparity-based measurements, the light sensor 464 may be configured to return an intensity map. In case of stereo vision, a depth sensor 450b may include a further light sensor, e.g. a second imaging camera, with corresponding optics. The further light sensor may have, in some aspects, a resolution greater than the first imaging camera, and may return a respective intensity map.
[00109] Although not shown, in various aspects a depth sensor 400a, 400b, 450a, 450b may include a processor, e.g. a processing unit, configured to extract depth information from the light sensing data delivered by the light sensor 404, 414, 454, 464, e.g. by means of a suitable algorithm.
[00110] According to various aspects, a compact arrangement of the depth sensors of an imaging device (e.g., of the imaging device 200) may be provided. Illustratively, in various aspects one or more components at the emitter side and/or receiver side may be shared between the depth sensors (e.g., between a first depth sensor 206, 400a, 400b and a second depth sensor 208, 450a, 450b), or among more than two depth sensors. This configuration may provide an efficient utilization of the system resources, and thus a space- and cost-efficient arrangement of an imaging device.
[00111] As an exemplary configuration, at least one light sensor may be shared between/among the depth sensors, e.g. between the first depth sensor and the second depth sensor. As another exemplary configuration, additionally or alternatively, at least one illuminator may be shared between/among the depth sensors, e.g. between the first depth sensor and the second depth sensor. Exemplary configurations of this shared arrangement are illustrated in FIG.5A to FIG.5C. In an exemplary configuration, the light sensor used for the optical path measurement (e.g., a SPAD camera) may also be used for intensity acquisition (e.g., summing up the events), so that the same acquisition data may be used. In other aspects, a dedicated high-resolution camera may be used instead.
[00112] FIG.5A, FIG.5B, and FIG.5C each shows a respective imaging device 500a, 500b, 500c in which a first depth sensor and a second depth sensor share at least one common component. The imaging device 500a, 500b, 500c may be an exemplary realization of the imaging device 200, e.g. may represent an exemplary configuration of the first and second depth sensors 206, 208, 400a, 400b, 450a, 450b.
[00113] As shown in FIG.5A and FIG.5B, the imaging device 500a, 500b may include a single illuminator 502 (with corresponding emitter optics) shared between a time-of-flight-based measurement and a structured-light-based measurement. In this configuration, the imaging device 500a, 500b may include a first light sensor 504 for the time-of-flight measurement, illustratively a time-of-flight camera module. The imaging device 500a, 500b may include a second light sensor 506 for the structured-light imaging, illustratively an imaging camera module. The common illuminator 502 may be configured to emit light both for the time-of-flight measurement (e.g., single light pulses, or continuous modulated light) and for the structured-light imaging (e.g., a predefined light pattern, such as a dot pattern).
[00114] The common illuminator 502 and the light sensors 504, 506 may be disposed aligned along a same direction (as shown in FIG.5A), for example aligned along the horizontal direction. As another example, the illuminator 502 and the light sensors 504, 506 may be disposed at an angle, e.g. with an orthogonal arrangement, as shown in FIG.5B. The arrangement of the illuminator 502 and the light sensors 504, 506 may be selected according to a desired configuration of the imaging device, e.g. to take into account fabrication constraints or application constraints. A configuration with two light sensors 504, 506, e.g. with two imaging cameras, may in general be used to implement stereo vision 3D sensing methods.
[00115] As shown in FIG.5A and FIG.5B, this arrangement may provide a first baseline (Baseline ToF) between the illuminator 502 and the light sensor 504 used for the time-of-flight measurement, and a second baseline (Baseline SL) between the illuminator 502 and the light sensor 506 used for the structured-light imaging. The baselines may be adapted according to the desired configuration of the imaging device 500a, 500b, for example the baselines may be equal to one another (illustratively, may have the same length), or may be different from one another.
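For context, the standard pinhole relation between baseline, disparity and depth (a well-known relation, not specific to this disclosure) indicates how the choice of baseline affects depth resolution:

```latex
z = \frac{f\,B}{d}, \qquad
\left|\frac{\partial z}{\partial d}\right| = \frac{f\,B}{d^{2}} = \frac{z^{2}}{f\,B}
```

where $f$ is the focal length, $B$ the baseline, $d$ the disparity and $z$ the depth; a longer baseline therefore reduces the depth error caused by a given disparity error.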
[00116] As shown in FIG.5C, the imaging device 500c may include a single illuminator 512 (with corresponding emitter optics) shared between a self-mixing interferometric measurement
and a structured-light-based measurement. In this configuration, the imaging device 500c may include a (single) light sensor 514 for the structured-light imaging, illustratively an imaging camera module. The self-mixing interferometric detection may be provided by the illuminator 512 itself, by the reflected light being injected therein. In the imaging device 500c, the optics may thus be configured to direct the light collected from the field of view into the illuminator 512 during the self-mixing interferometric measurement, and may be configured to direct the light collected from the field of view onto the light sensor 514 during the structured-light imaging measurement. Alternatively, the light sensor 514 may also be configured to perform a time-of-flight measurement (e.g., including timing circuitry), in addition to the structured light measurement. In such a configuration, the illuminator 512 may have a configuration different from a self-mixing interferometer.
[00117] FIG.6A shows an imaging device 600a including a self-mixing interferometry based sensor, and FIG.6B shows an imaging device 600b including a time-of-flight-based sensor, in a schematic representation, according to various aspects. Illustratively, FIG.6A and FIG.6B show an exemplary configuration of the imaging device 200, 500a, 500b, 500c and corresponding depth sensors. In the exemplary scenario in FIG.6A and FIG.6B an object 650a, 650b having a refractive portion (illustratively, a refractive interface) may be located at a distance Ztarget, 652a, 652b from the imaging device 600a, 600b.
[00118] The imaging device 600a in FIG.6A may include a camera optical layer 602a including a stack of optical components (e.g., camera, IR filters, pupils etc.) and configured to focus on the sensor 604a an image of the regular pattern projected from the illuminator module 606a (e.g., from a VCSEL source 608a) onto the target 650a. The illuminator to camera distance defines a baseline value for structured light depth extraction.
[00119] The imaging device 600a may further include an illuminator optical layer 610a. The illuminator optical layer 610a may be or include a stack of optical components that defines, for each VCSEL, an image point on or in close proximity of the target surface (e.g., the illuminator optical layer 610a may be or include an array of micro-lenses, e.g. an MLA, on top of the VCSELs), or at infinity. The illuminator optical layer 610a may be configured to define, for each VCSEL, a propagation direction (e.g., the illuminator optical layer 610a may be or include a prism array tilting each ray). In an exemplary configuration, the illuminator optical layer 610a may be tiled to allow part of the VCSELs to create a different type of illumination, including far field patterns and/or flood illumination. The combination of optical components of the illuminator optical layer 610a may be configured to define a planar or curved surface on which the VCSEL emitting facets are imaged. The light scattered from such points is imaged back into the VCSEL facet.
[00120] The VCSEL source 608a may include frequency modulation, applied directly or by means of other parameters (e.g., by means of a current modulation). The illuminator module 606a may additionally include an array of electronic devices that monitor the output intensity or junction voltage of each VCSEL.
[00121] The imaging device 600b in FIG.6B may include a camera optical layer 602b including a stack of optical components (e.g., camera, IR filters, pupils etc.) configured to focus on the sensor 604b an image of the regular pattern projected from the illuminator module 606b onto the target 650b. The camera sensor 604b may include light-sensitive pixels and underlying combining electronics, and may be configured to enable time-resolved event detection and time binning statistics, and/or discrimination of the phase of a modulation (e.g., an amplitude modulation) in the light emitted by the illuminator 606b. The camera sensor 604b may further be configured to enable the acquisition of the irradiance of the regular pattern image that is projected onto the target, which can be used for disparity-based depth evaluation. The illuminator to camera distance defines a baseline value for structured light depth extraction.
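As an illustrative sketch of the phase discrimination mentioned above (the generic four-bucket formulation, with assumed variable names, rather than the actual pixel electronics of the sensor 604b):

```python
import numpy as np

# Illustrative four-bucket indirect time-of-flight demodulation: q0..q3 are
# per-pixel charge accumulations sampled at 0/90/180/270 degrees of the
# amplitude modulation at frequency f_mod.

C = 299_792_458.0

def itof_depth(q0, q1, q2, q3, f_mod: float) -> np.ndarray:
    phase = np.arctan2(q3 - q1, q0 - q2) % (2.0 * np.pi)
    return C * phase / (4.0 * np.pi * f_mod)

# e.g. f_mod = 100e6 Hz gives an unambiguous range of C / (2 * f_mod) ~ 1.5 m.
```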
[00122] The imaging device 600b may further include an illuminator optical layer 610b. The illuminator optical layer 610b may be or include a stack of optical components that are configured and positioned with respect to the VCSEL source 608b to project a structured light
pattern onto the scene. The VCSEL source 608b may include a driver to generate time-resolved pulses and for time synchronization with the camera. For example, the driver may be configured to generate a time-modulation of a quantity such as the amplitude of the laser signal.
[00123] In various aspects, although not shown in FIG.6B, the imaging device 600b may further include a second illuminator in addition to the first structured light illuminator. The second illuminator may be configured to emit a flood pattern. For example, the second illuminator may include a driver with the same timing and modulation properties as the driver of the first illuminator.
[00124] According to various aspects, a calibration of the depth sensors of an imaging device (e.g., of the first and second depth sensors 206, 208 of the imaging device 200) may be provided. The calibration may ensure an accurate mapping of the results obtained with the different depth measurement methods, to allow for a more accurate identification of the different errors induced by refractive portions of objects in the field of view. Illustratively, in some aspects, the first depth sensor of an imaging device (e.g., the first depth sensor 206) may be calibrated with respect to the second depth sensor of the imaging device (e.g., the second depth sensor 208).
[00125] Aspects related to the calibration will be described in further detail in relation to FIG.7A to FIG.7C. It is understood that the calibration may be carried out, in general, independently of the subsequent detection and comparison of the results of the different depth measurement methods, or may be combined with such detection and comparison. It is also understood that the calibration described in relation to two depth sensors may be extended in a corresponding manner to a scenario with more than two depth sensors.
[00126] FIG.7A shows a calibration device 700 in a schematic representation, according to various aspects. The calibration device 700 may include a processor 702 and storage 704 (e.g., one or more memories) coupled to the processor 702. The storage 704 may be configured to store instructions (e.g., software instructions) executed by the processor 702. The instructions may cause the processor 702 to perform a method 710 of calibrating depth sensors, described
in further detail below. Aspects described with respect to a configuration of the processor 702 may also apply to the method 710, and vice versa.
[00127] In general, the calibration device 700 may be a dedicated device for imaging applications. In other aspects, the calibration device 700 may be part of an imaging device (e.g., of the imaging device 200). Illustratively, in various aspects, an imaging device may be configured to carry out the calibration of its depth sensors. Further illustratively, the processor 702 may be, in various aspects, a processor of an imaging device, e.g. the processor 202 of the imaging device 200 may additionally be configured to carry out the calibration described in the following in relation to the processor 702 (for the depth sensors 206, 208).
[00128] In general, the processor 702 may be configured to control a calibration of a first depth sensor 706 with respect to a second depth sensor 708, e.g. a sensor-to-sensor calibration. The first depth sensor 706 may be configured as the first depth sensor 206 described in FIG.2B, e.g. the first depth sensor 706 may be configured to carry out an optical path length measurement. The second depth sensor 708 may be configured as the second depth sensor 208 described in FIG.2B, e.g. the second depth sensor 708 may be configured to carry out a disparity-based depth measurement.
[00129] In general, the processor 702 may be configured to derive calibration information based on the output of the depth sensors 706, 708 in a known scenario. The processor 702 may be configured to control the depth sensors 706, 708 to carry out the respective depth-measurement in a (common) field of view having known properties, e.g. a field of view including objects with predefined (e.g., known) properties.
[00130] For example, the processor 702 may be configured to control the first depth sensor 706 to carry out the optical path length measurement in a predefined field of view, e.g. in a field of view including one or more predefined objects. In a corresponding manner, the processor 702 may be configured to control the second depth sensor 708 to carry out the disparity-based depth
measurement in the predefined field of view (illustratively, in the field of view including the one or more predefined objects).
[00131] The one or more predefined objects may illustratively be or include one or more objects having predefined properties. For example, the one or more predefined objects may have a predefined shape, a predefined orientation, and/or predefined location within the field of view. The predefined properties may be known to the processor 702 (e.g., may be stored in the storage 704). In an exemplary configuration, which has been found to enable an efficient calibration process, the one or more predefined objects may be free of refractive portions. For example, the one or more predefined objects may be completely non-transparent. As another example, the one or more predefined objects may have refractive portions disposed facing away from the depth sensors 706, 708, e.g. disposed in such a manner that the refractive portions are not illuminated by the light emitted by the depth sensors 706, 708.
[00132] The processor 702 may be configured to calibrate the depth sensors 706, 708 with respect to one another based on respective results 712, 714 of the corresponding depth measurements, e.g. based on respective depth maps generated by the depth sensors 706, 708. For example, the processor 702 may be configured to generate calibration data 716 based on the results 712, 714 of the depth measurements. The calibration data 716 may include or represent one or more calibration parameters. The one or more calibration parameters may include or define adjustment values for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement. The one or more calibration parameters may represent how to adjust the first result 712 of the first depth measurement to match the second result 714 of the second depth measurement, or vice versa. Illustratively, the one or more calibration parameters may represent adjustment values to adjust the first result 712 so as to obtain the same depth values as for the second result 714, or vice versa. The processor 702 may be configured to store the calibration data 716, e.g. in the storage 704 (e.g., in the storage 204 of the imaging device 200).
[00133] As a compact and convenient representation, the calibration data may be or include a calibration map for matching a first depth map obtained via the optical path length measurement to a second depth map obtained via the disparity-based depth measurement. Illustratively, the calibration map may include a plurality of adjustment values (also referred to as calibration values) at respective (x-y) coordinates of the predefined field of view. The calibration map may thus represent, for each coordinate, a calibration (illustratively, a correction) to be applied to the first result 712 for matching the second result 714, or vice versa. Illustratively, the calibration map may be representative, for each coordinate of the field of view, of a calibration parameter for modifying a depth value of the first depth measurement at that coordinate and/or for modifying a depth value of the second depth measurement at that coordinate.
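A minimal sketch of such a calibration map, assuming both results are already resampled onto a common pixel grid (the array names are illustrative and not part of the disclosure):

```python
import numpy as np

# Illustrative only: per-coordinate adjustment that maps the optical-path-length
# result onto the disparity-based result, computed on the predefined scene.
def build_calibration_map(depth_optical_path: np.ndarray,
                          depth_disparity: np.ndarray) -> np.ndarray:
    # On a fully calibrated system this map would be close to zero everywhere.
    return depth_disparity - depth_optical_path
```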
[00134] According to various aspects, the results of a depth measurement carried out by a depth sensor may be corrected using the calibration data for the depth sensors prior to carrying out the comparison and related analysis to identify transparent objects. For example, the processor of an imaging device may be configured to modify the first result of a first depth measurement and/or the second result of a second depth measurement based on calibration data representative of a calibration of the first depth measurement with respect to the second depth measurement. This may apply for example to the processor 202 of the imaging device 200, configured to calibrate the first result 212 with respect to the second result 214, or vice versa, prior to comparing the results 212, 214 with one another.
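Continuing the illustrative sketch above, applying the stored calibration before the comparison could look as follows (again an assumption-laden sketch, not the disclosed processing):

```python
import numpy as np

# Illustrative only: correct the first result with the stored calibration map,
# then form the differential depth used for refractive-object detection.
def differential_depth(depth_optical_path: np.ndarray,
                       depth_disparity: np.ndarray,
                       calibration_map: np.ndarray) -> np.ndarray:
    corrected = depth_optical_path + calibration_map
    return depth_disparity - corrected  # non-zero regions hint at refraction
```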
[00135] FIG.7B and FIG.7C each shows a schematic flow diagram 700b, 700c of a calibration method, e.g. of a method for calibrating depth sensors. Illustratively, the method 700b, 700c may be an exemplary implementation of the method 710 carried out by the processor 702.
[00136] As shown in FIG.7B, the method 700b may include, in 710, carrying out a first depth measurement via an optical path length measurement in a predefined field of view, e.g. in a field of view including one or more predefined objects. The method 700b may include, in 720, carrying out a second depth measurement via a disparity-based depth measurement in the
predefined field of view, e.g. in the field of view including the one or more predefined objects. The optical path length measurement and the disparity-based depth measurement may be carried out in parallel (e.g., simultaneously) with one another, or in a sequence.
[00137] The method 700b may further include, in 730, generating calibration data based on the results of the first depth measurement and second depth measurement. For example, the method 700b may include generating calibration data representative of one or more calibration parameters for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
[00138] FIG.7C shows a method 700c which may be an exemplary implementation of the method 700b, e.g. including exemplary steps that may be carried out as part of the method 700b. In general, a calibration method 700b, 700c may be performed in the factory and/or at runtime, at regular intervals or when some other condition triggers it. The calibration method 700c may include blocks analogous to an ordinary acquisition, e.g. analogous to the method 300a, 300b, with the difference that the depth measurement is carried out in a predefined field of view. For example, two depth maps may be acquired on a scene with no refractive objects and optimally with full coverage of the field of view. It may be expected that the differential depth acquired on such scenes would be zero on a fully calibrated system. Any differential depth that is still observed is stored as a differential calibration map, and is used to correct subsequent acquisitions, as mentioned above. Alternatively, depending on the application, the differential output may be further processed to produce a more complex calibration. For instance, the differential output may be used as an input to a function depending on other system parameters (such as the sensor operating temperature), or the measured depth data itself may be used to compute the correction for each point of the sensor.
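As an illustrative sketch of such a more complex calibration (the linear temperature model and all names are assumptions), several differential maps acquired at different sensor temperatures could be fitted to a simple per-pixel model:

```python
import numpy as np

# Illustrative only: fit a per-pixel linear model of the differential depth as
# a function of sensor temperature, then evaluate it at the current temperature.
def fit_thermal_model(diff_maps: np.ndarray, temps: np.ndarray) -> np.ndarray:
    """diff_maps: (N, H, W) differential maps acquired at N temperatures."""
    design = np.stack([np.ones_like(temps), temps], axis=1)        # (N, 2)
    coeffs, *_ = np.linalg.lstsq(design, diff_maps.reshape(len(temps), -1),
                                 rcond=None)
    return coeffs.reshape(2, *diff_maps.shape[1:])                 # offset, slope

def correction_at(coeffs: np.ndarray, temperature: float) -> np.ndarray:
    offset, slope = coeffs
    return offset + slope * temperature
```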
[00139] The method 700c may include, in 735, acquiring an optical path length on a predefined field of view, e.g. the method 700c may include carrying out an optical path length measurement, for example based on time-of-flight or phase of the emitted light. For example,
the method 700c may include acquiring an optical path length on a scene with no refractive objects. The method 700c may further include, in 745, extracting depth information from the optical path length on the predefined field of view, e.g. on the scene with no refractive objects. Illustratively, the method 700c may include applying one or more corrections to the optical path length to obtain depth value(s) from the measured optical path length(s).
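One common correction of this kind, shown here only as an illustrative sketch with assumed pinhole intrinsics, converts the radial range measured along each pixel's ray (half the optical path) into a z-depth:

```python
import numpy as np

# Illustrative only: radial-range to z-depth conversion using assumed pinhole
# intrinsics (fx, fy in pixels; cx, cy principal point in pixels).
def range_to_depth(range_map: np.ndarray, fx: float, fy: float,
                   cx: float, cy: float) -> np.ndarray:
    h, w = range_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Length of the normalized ray through each pixel
    ray_norm = np.sqrt(((u - cx) / fx) ** 2 + ((v - cy) / fy) ** 2 + 1.0)
    return range_map / ray_norm
```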
[00140] The method 700c may include, in 755, acquiring structured light images on the predefined field of view, e.g. the method 700c may include carrying out a disparity-based depth measurement. For example, the method 700c may include acquiring structured light images on the scene with no refractive objects. The method 700c may further include, in 765, extracting depth information from the structured light images, e.g. using a correspondence-based algorithm. Illustratively, the method 700c may include applying one or more corrections to the structured light images to obtain depth values from the acquired structured light images.
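For illustration, once the correspondence algorithm has produced a per-dot (or per-pixel) disparity against a reference pattern, the depth follows from the usual triangulation relation (the focal length and baseline values are assumptions):

```python
import numpy as np

# Illustrative only: convert a disparity map (in pixels) from the
# correspondence step into depth via triangulation.
def depth_from_disparity(disparity_px: np.ndarray, focal_px: float,
                         baseline_m: float) -> np.ndarray:
    safe = np.where(np.abs(disparity_px) > 1e-6, disparity_px, np.nan)
    return focal_px * baseline_m / safe

# e.g. depth = depth_from_disparity(disparity, focal_px=1400.0, baseline_m=0.05)
```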
[00141] The method 700c may further include, in 775, establishing a matching between the results generated via the two methods, e.g. the method 700c may include establishing a mapping between depth map coordinates generated via the two methods. The method 700c may further include, in 785, analyzing the results generated via the two methods to generate calibration data, e.g. the method 700c may include comparing the two depth acquisitions and generating a differential output. The method 700c may further include, in 795, generating calibration data, e.g. generating a calibration map, for the depth sensors.
[00142] The approach described in the present disclosure has been tested by means of simulations, described in further detail in relation to FIG.8A to FIG.8C.
[00143] FIG.8A illustrates a modeled object 800 having a refractive portion 802. The modeled object 800 may be an eye having an eyeball surface 804, a corneal lens 802 defining a refractive surface (e.g., with a refractive index n of about 1.336), and an iris surface 806. The model thus considers a two-dimensional geometrical ray tracing of an eyeball sphere with a corneal lens surface and allows evaluating the distortion of the iris plane 806 caused by the corneal lens 802, when using a structured light approach. FIG.8A further shows the rays 808 propagating from the projector 810, and the rays 812 propagating back into the camera 814 (e.g., a pinhole camera). The example shows the case of an eyeball whose closest edge is at about 25 mm from the camera. The eyeball is rotated by 60 degrees with respect to the optical axis. Depth may be estimated from the expected disparity on the camera as well as from the total optical path, for different gaze angle rotations.
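The core geometric step of such a ray-tracing model is the refraction at the corneal interface; a minimal 2D sketch using the vector form of Snell's law is shown below (surface-intersection handling is omitted, and the geometry is an assumption for illustration):

```python
import numpy as np

# Illustrative only: refract a 2D ray at an interface with unit normal n_hat,
# going from refractive index n1 (air) into n2 (cornea, about 1.336).
def refract(direction: np.ndarray, n_hat: np.ndarray,
            n1: float, n2: float):
    d = direction / np.linalg.norm(direction)
    cos_i = -float(np.dot(n_hat, d))
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None  # total internal reflection, no transmitted ray
    return eta * d + (eta * cos_i - np.sqrt(k)) * n_hat

# Normal incidence passes straight through:
# refract(np.array([0.0, -1.0]), np.array([0.0, 1.0]), 1.0, 1.336) -> [0, -1]
```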
[00144] FIG.8B and FIG.8C show plots 820b, 820c illustrating the results of the simulation. The plot 820b in FIG.8B shows the simulated results of the disparity-based depth measurement, the half optical path length (ToF) measurement, and the depth-to-range corrected path length. For the illustrative purposes of the simulation, the correction was applied using an analogous modeled object without corneal lens, and deriving a correction factor for each point between the ToF and the disparity-based depth. Such correction factor was then applied to the modeled object with corneal lens. The plot 820b shows that outside of the corneal region 802, the corrected path length is consistent with the structured-light approach (the lines are perfectly overlapping), while in the corneal region 802 the two return different results.
[00145] The plot 820c in FIG.8C shows the difference in depth information returned from each of the two methods, for an eyeball with corneal lens or an eyeball without corneal lens. The numerical difference between the two is zero at any position where there is no lens, while a non-zero value highlights the position of the corneal lens and contains some information on its shape and geometrical layout. Such output may be further post-processed by additional software to extract meaningful information on the object or perform a selection, or it may be used as a selection mask to apply a correction to the same depth maps, or to configure device operation for subsequent acquisitions.
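A minimal sketch of the post-processing mentioned above, assuming a simple fixed threshold (the threshold value and names are illustrative):

```python
import numpy as np

# Illustrative only: turn the differential depth map into a selection mask
# marking likely refractive regions (e.g., the corneal lens).
def refractive_mask(differential_depth: np.ndarray,
                    threshold_m: float = 1e-3) -> np.ndarray:
    return np.abs(differential_depth) > threshold_m
```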
[00146] According to various aspects, a computer program may be provided, including instructions which, when the program is executed by a computer, cause the computer to carry
out any one of the methods described herein, e.g. any one of the methods 210, 300a, 300b, 710, 700b, 700c.
[00147] The term “processor” as used herein may be understood as any kind of technological entity that allows handling of data. The data may be handled according to one or more specific functions that the processor may execute. Further, a processor as used herein may be understood as any kind of circuit, e.g., any kind of analog or digital circuit. A processor may thus be or include an analog circuit, digital circuit, mixed-signal circuit, logic circuit (e.g., a hard-wired logic circuit or a programmable logic circuit), microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), integrated circuit, Application Specific Integrated Circuit (ASIC), etc., or any combination thereof. It is understood that any two (or more) of the processors detailed herein may be realized as a single entity with equivalent functionality or the like, and conversely that any single processor detailed herein may be realized as two (or more) separate entities with equivalent functionality or the like.
[00148] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
[00149] Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures, unless otherwise noted.
[00150] The phrase “at least one” and “one or more” may be understood to include a numerical quantity greater than or equal to one (e.g., one, two, three, four, [...], etc.). The phrase “at least one of” with regard to a group of elements may be used herein to mean at least one element from the group consisting of the elements. For example, the phrase “at least one of” with regard to a group of elements may be used herein to mean a selection of: one of the listed elements, a plurality of one of the listed elements, a plurality of individual listed elements, or a plurality of a multiple of individual listed elements.
[00151] While the above descriptions and connected figures may depict electronic device components as separate elements, skilled persons will appreciate the various possibilities to combine or integrate discrete elements into a single element. Such may include combining two or more circuits to form a single circuit, mounting two or more circuits onto a common chip or chassis to form an integrated element, executing discrete software components on a common processor core, etc. Conversely, skilled persons will recognize the possibility to separate a single element into two or more discrete elements, such as splitting a single circuit into two or more separate circuits, separating a chip or chassis into discrete elements originally provided thereon, separating a software component into two or more sections and executing each on a separate processor core, etc.
[00152] It is appreciated that implementations of methods detailed herein are demonstrative in nature, and are thus understood as capable of being implemented in a corresponding device. Likewise, it is appreciated that implementations of devices detailed herein are understood as capable of being implemented as a corresponding method. It is thus understood that a device corresponding to a method detailed herein may include one or more components configured to perform each aspect of the related method.
[00153] All acronyms defined in the above description additionally hold in all claims included herein.
[00154] While the invention has been particularly shown and described with reference to specific aspects, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes, which come within the meaning and range of equivalency of the claims, are therefore intended to be embraced.
List of reference signs
100a Detection scenario
100b Detection scenario
102 Point on the illuminator
104 Point on the camera
106 Distance to object
108 Position of the object
110 Baseline
112 Focal length
114 Refractive surface
200 Imaging device
202 Processor
204 Storage
206 First depth sensor
208 Second depth sensor
210 Method
212 First result of depth measurement
214 Second result of depth measurement
216 Output signal
218 Result of optical path length measurement
220 Result of disparity-based measurement
300a Method
300b Method
310 Method step
320 Method step
330 Method step
340 Method step
350 Method step
360 Method step
370 Method step
380 Method step
390 Method step
400a Depth sensor
400b Depth sensor
402 Illuminator
404 Light sensor
406 Emitter optics
408 Receiver optics
412 Illuminator
414 Light sensor
416 Emitter optics
450a Depth sensor
450b Depth sensor
452 Illuminator
454 Light sensor
456 Emitter optics
458 Receiver optics
462 Illuminator
464 Light sensor
466 Emitter optics
468 Receiver optics
500a Imaging device
500b Imaging device
500c Imaging device
502 Illuminator
504 Light sensor for time-of-flight
506 Light sensor for structured-light imaging
512 Illuminator
514 Light sensor
600a Imaging device
600b Imaging device
602a Camera optical layer
602b Camera optical layer
604a Sensor
604b Sensor
606a Illuminator module
606b Illuminator module
608a VCSEL source
608b VCSEL source
610a Illuminator optical layer
610b Illuminator optical layer
650a Object
650b Object
652a Distance to object
652b Distance to object
700 Calibration device
700b Calibration method
700c Calibration method
702 Processor
704 Storage
706 First depth sensor
708 Second depth sensor
710 Method
712 First result of depth measurement
714 Second result of depth measurement
716 Calibration data
720 Method step
730 Method step
735 Method step
745 Method step
755 Method step
765 Method step
775 Method step
785 Method step
795 Method step
800 Model object
802 Refractive portion
804 Eyeball surface
806 Iris surface
808 Emitted light rays
810 Projector
812 Reflected light rays
814 Camera
820b Graphs
820c Graphs
Claims
1. An imaging device (200) comprising: a processor (202) configured to: compare a first result (212) of a first depth measurement with a second result (214) of a second depth measurement, wherein the first depth measurement is carried out via an optical path length measurement, wherein the second depth measurement is carried out via a disparity-based depth measurement, wherein the first depth measurement and the second depth measurement are carried out in a field of view common to the first depth measurement and the second depth measurement; and generate, based on a result of the comparison, an output signal (216) representative of whether the common field of view comprises at least one object having a refractive portion.

2. The imaging device (200) according to claim 1, wherein the first result (212) comprises a first depth map of the common field of view, wherein the second result (214) comprises a second depth map of the common field of view, and wherein the processor (202) is configured to generate the output signal (216) based on a difference between the first depth map and the second depth map.
3. The imaging device (200) according to claim 2, wherein the processor (202) is configured to generate the output signal (216) by generating a differential depth map based on the first depth map and the second depth map, wherein the differential depth map is representative, for each coordinate in the common field of view, of whether the common field of view comprises at that coordinate at least one object having a refractive portion.
4. The imaging device (200) according to any one of claims 1 to 3, wherein the processor (202) is further configured to modify the first result (212) of the first depth measurement and/or the second result (214) of the second depth measurement based on calibration data representative of a calibration of the first depth measurement with respect to the second depth measurement.
5. The imaging device (200) according to any one of claims 1 to 4, wherein the optical path length measurement comprises a direct time of flight measurement, an indirect time of flight measurement, or a self-mixing interferometry measurement.
6. The imaging device (200) according to any one of claims 1 to 5, wherein the disparity-based depth measurement comprises a depth measurement based on structured light, or a stereo vision measurement.
7. The imaging device (200) according to any one of claims 1 to 6, further comprising: a first depth sensor (206) configured to: carry out the optical path length measurement; and deliver a first output signal (218) representative of a result of the optical path length measurement to the processor (202).
8. The imaging device (200) according to claim 7, wherein the first depth sensor (206) is configured as a direct time of flight sensor, or as an indirect time of flight sensor, or as a self-mixing interferometer.
9. The imaging device (200) according to any one of claims 1 to 8, further comprising: a second depth sensor (208) configured to: carry out the disparity-based depth measurement; and deliver a second output signal (220) representative of a result of the disparity-based depth measurement to the processor (202).
10. The imaging device (200) according to claim 9,
wherein the second depth sensor (208) is configured as a structured-light depth sensor, or as a stereo vision sensor.
11. The imaging device (200) according to claims 7 and 9, wherein at least one light sensor is shared between the first depth sensor (206) and the second depth sensor (208).
12. The imaging device (200) according to claims 7 and 9, wherein at least one illuminator is shared between the first depth sensor (206) and the second depth sensor (208).
13. The imaging device (200) according to claims 7 and 9, wherein the first depth sensor (206) is calibrated with respect to the second depth sensor (208).
14. The imaging device (200) according to claims 7 and 9, wherein the processor (202, 702) is configured to control a calibration of the first depth sensor (206, 706) with respect to the second depth sensor (208, 708).
15. The imaging device (200) according to claim 14,
wherein, to control the calibration of the first depth sensor (206, 706) with respect to the second depth sensor (208, 708), the processor (202, 702) is configured to: control the first depth sensor (206, 706) to carry out the optical path length measurement in a field of view including one or more predefined objects; control the second depth sensor (208, 708) to carry out the disparity-based depth measurement in the field of view including the one or more predefined objects; and generate calibration data (716) representative of one or more calibration parameters for matching first depth values obtained via the optical path length measurement to corresponding second depth values obtained via the disparity-based depth measurement.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| DE102022134569 | 2022-12-22 | | |
| DE102022134569.6 | 2022-12-22 | | |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| WO2024132463A1 (en) | 2024-06-27 |
Family
ID=89072986
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| PCT/EP2023/083922 WO2024132463A1 (en) | | 2022-12-22 | 2023-12-01 |
Country Status (1)

| Country | Link |
| --- | --- |
| WO (1) | WO2024132463A1 (en) |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| US20210405165A1 | 2020-06-18 | 2021-12-30 | Shenzhen GOODIX Technology Co., Ltd. | Time-of-flight distance measuring method and related system |
| US20220043148A1 | 2019-03-11 | 2022-02-10 | Mitsubishi Electric Corporation | Setting value adjustment device for displacement meter |
| CN114200480A | 2020-09-01 | 2022-03-18 | 珊口(深圳)智能科技有限公司 | Sensor error measurement method and system applied to mobile robot |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23817411; Country of ref document: EP; Kind code of ref document: A1 |