WO2008152647A2 - Method and apparatus for three-dimensional imaging - Google Patents
Method and apparatus for three-dimensional imaging
- Publication number
- WO2008152647A2 (PCT/IL2008/000812)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- distance
- pixel
- equals
- laser pulse
- output level
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4865—Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
- G01S17/18—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves wherein range gates are used
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4861—Circuits for detection, sampling, integration or read-out
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
Definitions
- the present invention in some embodiments thereof, relates to time-of-flight three-dimensional imaging and, more particularly, but not exclusively, to a two-stage time-of-flight imaging technique.
- Three-dimensional (3D) imaging is concerned with extracting visual information from the geometry of visible surfaces and analyzing the 3D coordinate data thus obtained.
- the 3D data may be used to detect, track the position, and reconstruct the profile of an object, often in real time.
- 3D data analysis is utilized in a variety of industrial applications, including ground surveys, automated process control, target recognition, autonomous machinery guidance and collision avoidance.
- TOF time-of-flight
- LADAR Laser Detection and Ranging
- In general, since real-time TOF applications are concerned with very fast events (i.e., occurring at the speed of light), two main conditions are necessary for appropriate sensor operation. Firstly, very fast and low-noise operation of the readout electronics (i.e., very high operational frequency) is required. Secondly, the sensor (or the sensing element) should be able to detect and distinguish (e.g., separate from the background and handle unknown object reflectivity) a light signal which might be very weak (e.g., light pulses reflected from objects which are relatively close to one another), and/or should be able to integrate the detected signal in order to achieve a reasonable output in time.
- Fig. 1 shows a simplified block diagram of a pulsed-based TOF system.
- a short laser pulse is generated by laser pulser 100 and transmitted towards an optically-visible target 110.
- a signal is taken from the transmitter to serve as a start pulse for time interval measurement circuitry 120 (e.g. a time-to-digital converter).
- the back-reflected pulse is detected by photodetector 130, and amplified by amplifier 140.
- a stop pulse for time interval measurement circuitry 120 is generated from the amplified signal by timing detection element 150.
- the time interval between the start and stop pulses (i.e., the time of flight) is then measured.
- the distance to the target is calculated by multiplying the TOF by the velocity of the signal in the application medium, as shown in Eqn. 1:
- d = (c / 2) · TOF (1)
- c is the speed of light (3×10⁸ m/s).
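The relation in Eqn. 1 can be illustrated with a short sketch (the function name is illustrative, not from the patent):

```python
# Eqn. 1: one-way distance from a measured round-trip time of flight.
C = 3.0e8  # speed of light, m/s

def distance_from_tof(tof_seconds):
    """Round-trip TOF -> one-way distance to the target (Eqn. 1)."""
    return C * tof_seconds / 2.0

# A 100 ns round trip corresponds to a target 15 m away.
print(distance_from_tof(100e-9))  # 15.0
```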
- One approach for determining the time of arrival of the reflected light pulse uses comparators to sense the moment in which the photodetector output signal exceeds a certain threshold.
- the time intervals to be measured are typically very short and the required timing accuracy and stability are very high (e.g., tens of picoseconds correspond to about 1 cm in distance).
- fast and complex readout electronics are required.
- Some complex non-CMOS technologies may meet the required performance (e.g., BiCMOS) [4], but the need for very high bandwidth pixels makes it difficult to perform two-dimensional array integration.
- Fig. 2 is a simplified timing diagram for CMOS pulsed-based TOF, as performed in [5]-[10].
- In the first measurement cycle, S1, a short laser pulse with duration Tp is emitted.
- the shutter is triggered when the laser pulse is transmitted, and remains open for the duration of the laser pulse Tp.
- In the second cycle, S2, the measurement is repeated, but now with a shutter window greatly exceeding Tp.
- V_S1 denotes the signal that results when the shutter is open for substantially the time duration Tp (i.e., measurement cycle S1)
- V_S2 denotes the signal that results when the shutter window exceeds Tp (i.e., measurement cycle S2).
- the round-trip TOF is calculated as: TOF = Tp · (1 − V_S1 / V_S2) (2)
- the distance d is calculated as: d = (c · Tp / 2) · (1 − V_S1 / V_S2) (3), where c is the speed of light.
- the maximum TOF which may be measured is Tp. Therefore, for typical Tp values in the 30 ns-200 ns range, the maximum distance which may be measured is about 4.5 m-30 m.
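A minimal sketch of the two-cycle calculation in Eqns. 2 and 3, under the assumption that V_S1 integrates only the portion of the reflected pulse falling inside the Tp shutter window while V_S2 integrates the full pulse (names are illustrative):

```python
# Two-cycle shuttered TOF (Eqns. 2-3): v_s1 is the partial integration
# within the Tp window, v_s2 the full-pulse integration.
C = 3.0e8  # speed of light, m/s

def tof_two_cycle(v_s1, v_s2, tp):
    """Eqn. 2: TOF = Tp * (1 - V_S1 / V_S2)."""
    return tp * (1.0 - v_s1 / v_s2)

def distance_two_cycle(v_s1, v_s2, tp):
    """Eqn. 3: d = (c * Tp / 2) * (1 - V_S1 / V_S2)."""
    return C * tof_two_cycle(v_s1, v_s2, tp) / 2.0

# With Tp = 100 ns and V_S1/V_S2 = 0.7, the target is at about 4.5 m.
print(distance_two_cycle(0.7, 1.0, 100e-9))
```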
- Fig. 3 is a simplified timing diagram illustrating an alternate approach to TOF determination, denoted pulsed indirect TOF [11]. This approach increases the distance information gathered by generating successive laser pulses which are integrated with associated delayed shutters S1-S_K. The distance d_k is calculated from the signals integrated under the k-th delayed shutter.
- The accuracy of pulsed indirect TOF depends on the precise measurement of the voltage difference (V_S2 − V_S1), which may be on the order of about 5 μV.
- ADC Analog-to-Digital Converter
- SNR signal-to-noise ratio
- a distance image sensor determines the signals of two charge storage nodes which depend on the delay time of the modulated light.
- a signal due to the background light is received from the third charge storage node and is subtracted from the signals of the two charge storage nodes which depend on the delay time, so as to remove the influence of the background.
- TOF imaging determines the distance to objects in the field of view by exposing an optical sensor to back-reflections of laser pulses. The time from the transmission of a laser pulse to its return to the optical sensor is used to calculate the distance to the object from which the pulse was reflected. A 3D image of the scene may then be constructed.
- Some TOF imaging embodiments presented herein are based on a two-stage data collection and processing approach.
- First a coarse ranging phase is performed by dividing the distance range of interest into sub-ranges. Each sub-range is checked to determine whether an object is or is not present for each sensor array pixel. Since the actual distance is not calculated at this stage, there is no need to gather data with a high SNR. There is also no need to perform a complex calculation of the object distance, merely to make a Yes/No decision whether an object is present in the sub-range currently being checked.
- the coarse ranging phase it is known which pixels have objects in their field of view and in which sub-range.
- the fine imaging phase data which permits a more accurate determination of the distance to the objects in the range of interest is gathered and the distances calculated.
- this stage may require collecting a larger amount of data per pixel, the data is collected only for those pixels for which an object has been detected, and only in the sub-range in which the object was detected.
- the dual-stage approach permits efficient 3D imaging by focusing the data collection process and distance calculations on the significant regions for the relevant pixels, and does not require collecting complete data over the entire distance range for all of the pixels.
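The coarse ranging phase described above can be sketched as a simple per-pixel threshold test, one frame per sub-range; the data layout and function name below are assumptions for illustration, not taken from the patent:

```python
# Coarse ranging sketch: for each pixel, record the first sub-range
# whose back-reflection signal exceeds a threshold (a Yes/No decision,
# no distance calculation at this stage).
def coarse_map(frames, threshold):
    """frames[k][i][j] is the output of pixel (i, j) for sub-range k.
    Returns per pixel the first sub-range index with an object, or None."""
    rows, cols = len(frames[0]), len(frames[0][0])
    result = [[None] * cols for _ in range(rows)]
    for k, frame in enumerate(frames):
        for i in range(rows):
            for j in range(cols):
                if result[i][j] is None and frame[i][j] > threshold:
                    result[i][j] = k
    return result
```

Only the pixels (and sub-ranges) flagged here would then enter the fine imaging phase.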
- TOF data is collected using a pixel which includes two optical sensing elements, where the two sensing elements may be exposed with different timing.
- the two sensing elements are exposed in successive time intervals, and the TOF of the laser pulse is calculated from the output levels of both sensing elements.
- a method for determining a distance to an object is performed as follows. First a first optical sensing element of an optical pixel is exposed to a back-reflected laser pulse for an initial time interval to obtain a first output level. Then a second optical sensing element of the optical pixel is exposed to the back-reflected laser pulse at a successive time interval to obtain a second output level. Finally, the distance to the object is calculated from the first and second output levels.
- the calculating is in accordance with a ratio of the first and second output levels.
- the method includes the further step of determining a background noise level and subtracting the background noise level from the first and second output levels prior to the calculating.
- the calculating is performed as: d = (c / 2) · [Ti + Tp · V2 / (V1 + V2)]
- V1 the first output level minus a background noise level
- V2 the second output level minus a background noise level
- the calculating is performed as: d = (c / 2) · [Ti + Tp · V2 / (V1 + V2)]
- Ti an initial exposure time after transmission of the laser pulse
- Tp the duration of the laser pulse
- V1 the first output level
- V2 the second output level
- the initial and successive time intervals are of the duration of the laser pulse.
- the method includes the further step of comparing the first and second output levels to a threshold to determine if an object is present.
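The two-element calculation can be sketched as follows. The closed form is a reconstruction from the surrounding definitions, not verbatim from the patent: assuming the first element is exposed during [Ti, Ti + Tp] and the second during [Ti + Tp, Ti + 2·Tp], the reflected pulse energy splits between V1 and V2 in proportion to its arrival time, giving TOF = Ti + Tp · V2 / (V1 + V2):

```python
# Reconstructed two-element TOF (an assumption based on the claim
# definitions): the pulse arrival time splits its energy between the
# two successive exposure windows, yielding outputs V1 and V2.
C = 3.0e8  # speed of light, m/s

def two_element_distance(v1, v2, ti, tp):
    """d = (c / 2) * (Ti + Tp * V2 / (V1 + V2))."""
    tof = ti + tp * v2 / (v1 + v2)
    return C * tof / 2.0

# An equal split (V1 == V2) with Ti = 0 places the pulse arrival at
# Tp/2, i.e., a distance of about 7.5 m for Tp = 100 ns.
print(two_element_distance(1.0, 1.0, 0.0, 100e-9))
```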
- a method for performing three-dimensional imaging is performed as follows.
- each pixel of an optical sensor array is exposed to a back-reflected laser pulse.
- Each of the pixels is exposed with a shutter timing corresponding to a respective distance sub-range.
- the pixel's output level is then used to determine whether an object is present in the pixel's respective distance sub-range.
- the distance to the object is determined from the respective pixel output level.
- the determining if an object is present in the respective distance sub-range includes comparing the respective pixel output level to a threshold.
- the method includes the further step of outputting an array of the determined distances.
- the method includes the further step of outputting a three-dimensional image generated in accordance with the determined distances.
- the method includes the further step of selecting the duration of a pixel exposure time in accordance with a required length of the distance sub-range.
- the method includes the further step of transmitting a laser pulse with a specified pulse length.
- the duration of the pixel exposure equals the laser pulse length.
- the pixels of a row of the array are exposed with the same shutter timing for each frame.
- the pixels of successive rows of the array are exposed with a shutter timing corresponding to successive distance sub-ranges.
- the pixels of a row of the array are exposed with a shutter timing corresponding to successive distance sub-ranges.
- determining a distance from a pixel to the object includes: exposing the pixel to a plurality of back-reflected laser pulses, with a shutter timing corresponding to the respective distance sub-range; accumulating a pixel output level to laser pulses back-reflected from the respective distance sub-range; and calculating, from the accumulated pixel output level, a distance to an object within the respective distance sub-range.
- the accumulating is repeated to obtain a desired signal to noise ratio.
- the method includes beginning the exposure of a pixel in accordance with an initial distance of the respective distance sub-range.
- the method includes the further steps of: providing the pixel as a first and second optical sensing element; obtaining a first and second output level by exposing each of the first and second sensing elements to a back-reflected laser pulse for a respective time interval; and calculating a distance from the optical pixel to the object from the first and second output levels.
- the method includes the further step of determining a background noise level and subtracting the background noise level from the first and second output levels prior to the calculating.
- the calculating a distance from the optical pixel to the object from the first and second output levels is in accordance with a ratio of the first and second output levels.
- the calculating of a distance from the optical pixel to the object from the first and second output levels is performed as: d = (c / 2) · [Ti + Tp · V2 / (V1 + V2)]
- d the distance to the object
- c the speed of light
- Ti an initial exposure time after transmission of the laser pulse
- Tp the duration of the laser pulse
- V1 the first output level minus a background noise level
- V2 the second output level minus a background noise level
- an optical pixel which includes: a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse over a first exposure period; a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse over a successive exposure period; and a distance calculator configured for calculating a distance from the optical pixel to an object from the first and second output levels.
- the distance calculator is further configured for subtracting a background noise level from the first and second output levels prior to calculating the distance.
- the distance calculator is configured to calculate the distance in accordance with a ratio of the first and second output levels.
- the distance calculator is configured to calculate the distance as: d = (c / 2) · [Ti + Tp · V2 / (V1 + V2)]
- d equals the distance to the object
- c equals the speed of light
- Ti equals an initial exposure time after transmission of the laser pulse
- Tp equals the duration of the laser pulse
- V1 equals the first output level minus a background noise level
- V2 equals the second output level minus a background noise level
- the distance calculator is configured to calculate the distance as: d = (c / 2) · [Ti + Tp · V2 / (V1 + V2)]
- d the distance to the object
- c the speed of light
- Ti an initial exposure time after transmission of the laser pulse
- Tp the duration of the laser pulse
- V1 the first output level
- V2 the second output level
- a three-dimensional imaging apparatus which includes: a sensor array which includes a plurality of optical pixels configured for exposure to a back-reflected laser pulse; an exposure controller associated with the sensor array, configured for controlling a respective exposure time of each of the pixels so as to expose each of the pixels to back-reflection from a respective distance sub-range; and a distance calculator associated with the sensor array, configured for calculating from a pixel's respective output level a distance to an object within the respective distance sub-range.
- the distance calculator includes an object detector configured for determining if an object is present in a pixel's respective distance sub-range from a respective pixel output level.
- the object detector is configured to determine if the object is present by comparing the respective pixel output level to a threshold.
- the imaging apparatus further includes an image generator for outputting a three-dimensional image generated from the calculated distances.
- the imaging apparatus further includes a laser for generating laser pulses for back-reflection.
- the exposure controller is configured for selecting the initial time of the exposure in accordance with an initial distance of the respective distance sub-range and the duration of the exposure in accordance with a length of the respective distance sub-range.
- the exposure controller is configured for exposing successive rows of the array with a shutter timing corresponding to successive distance sub-ranges.
- the exposure controller is configured for exposing the pixels of a row of the array with a shutter timing corresponding to successive distance sub-ranges.
- the distance calculator includes an output accumulator configured for accumulating a pixel output level from a plurality of exposures, and wherein the distance calculator is configured for calculating the distance from the accumulated pixel output level.
- each of the optical pixels includes a first optical sensing element configured for providing a first output level in accordance with exposure to a back-reflected laser pulse for a first time interval, and a second optical sensing element configured for providing a second output level in accordance with exposure to a back-reflected laser pulse for a successive time interval, and wherein the distance calculator is configured for calculating the distance from the first and second output levels.
- the distance calculator is configured to calculate the distance in accordance with a ratio of the first and second output levels.
- Implementation of the method and/or system of embodiments of the invention can involve performing or completing selected tasks manually, automatically, or a combination thereof.
- several selected tasks could be implemented by hardware, by software or by firmware or by a combination thereof using an operating system.
- hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit.
- selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system.
- a data processor such as a computing platform for executing a plurality of instructions.
- the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data.
- a network connection is provided as well.
- a display and/or a user input device such as a keyboard or mouse are optionally provided as well.
- Fig. 1 is a block diagram of a prior art embodiment of a pulsed-based TOF system
- Fig. 2 is a simplified timing diagram for a prior art pulsed-based TOF technique
- Fig. 3 is a simplified timing diagram for a prior art pulsed indirect TOF technique
- Fig. 4 is a simplified diagram of a 3D imaging system, in accordance with an embodiment of the present invention.
- Fig. 5 is a simplified flowchart of a method for performing three-dimensional imaging, according to a preferred embodiment of the present invention
- Fig. 6a illustrates partitioning an overall distance into N coarse sub-ranges Δd
- Fig. 6b shows the distance sub-ranges captured by each row of the sensor array during the first frame t0-t1, in an exemplary embodiment of the present invention
- Fig. 7 shows sub-ranges captured by each row at the end of the coarse ranging phase, in an exemplary embodiment of the present invention
- Fig. 8 is an example of a distance map for an N×M pixel array
- Fig. 9 is a simplified flowchart of a method for determining a distance to an object, according to a first preferred embodiment of the present invention
- Fig. 10 is a simplified flowchart of a method for determining a distance to an object, according to a second preferred embodiment of the present invention.
- Fig. 11 is a simplified exposure timing diagram for TOF data acquisition by a pixel having two sensing elements, according to a preferred embodiment of the present invention.
- Fig. 12 is a simplified exposure timing diagram for pixels in a pixel array during the fine ranging phase, in an exemplary embodiment of the present invention
- Fig. 13 is an exemplary timing diagram for collecting data in the third distance sub-range over multiple frames during the fine ranging phase;
- Fig. 14 is a simplified block diagram of an imaging apparatus, according to a preferred embodiment of the present invention
- Fig. 15 is a simplified block diagram of an optical pixel for TOF imaging, according to a preferred embodiment of the present invention.
- Fig. 16 is a simplified block diagram of a 3D imager with column-parallel readout, according to an exemplary embodiment of the present invention.
- the present invention in some embodiments thereof, relates to time-of-flight three-dimensional imaging and, more particularly, but not exclusively, to a two-stage time-of-flight three-dimensional imaging technique.
- a sequence of short laser pulses is transmitted towards the field of view.
- the back-reflected light is focused on a two-dimensional array of optical sensors.
- the overall distance range of interest is partitioned into coarse sub-ranges d1 to dN.
- First a coarse ranging phase is performed to detect 3D objects in the various sub-ranges.
- a fine sub-ranging phase is triggered for high-resolution 3D imaging within the sub-range, by accumulating pixel output levels over multiple exposure intervals to improve the SNR.
- a sequence of rolling-shutter readouts is performed on the sensor array.
- each row of the sensor array images one of the distance sub-ranges.
- the sub-range imaged by each of the rows is shifted, so that after N readout cycles all the rows of the pixel array have imaged each of the d1 to dN sub-ranges.
- each pixel of the sensor array includes two sensing elements, as described in more detail below.
- the two sensing elements are exposed during successive intervals, and the TOF is calculated from the ratio of the signals detected by the sensing elements.
- the distance may be obtained directly from the TOF according to Eqn. 1.
- Controller 10 generates trigger pulses, which cause laser 20 to emit a sequence of laser pulses towards the field of view.
- the laser pulses are reflected towards sensor array 40 (in this case a two-dimensional CMOS camera).
- Controller 10 reads out sensor array 40, and processes the sensor array output signal in order to determine, for each pixel, whether an object is or is not present in the currently imaged sub-range di. During the fine ranging phase, a similar process is performed using the accumulated levels of both sensors in order to determine where a detected object is located within the sub-range. Controller 10 analyzes the collected data to obtain a 3D image 50 of the object(s) within the field of view.
- the optical sensing elements in sensor array 40 are compatible with Active Pixel Sensor (APS) CMOS fabrication.
- a CMOS sensor array may allow integration of some or all of the functions required for timing, exposure control, color processing, image enhancement, image compression and/or ADC on the same die.
- Other possible advantages of utilizing a CMOS 3D sensor array include low-power, low-voltage and monolithic integration.
- the laser is a near-infrared (NIR) laser which operates in the Si responsivity spectrum.
- a sensor array formed as an N×M pixel array with column-parallel readout is presented. It is to be understood that the invention is capable of other embodiments of sensor array configurations and/or readout techniques.
- the sensor array may be a single pixel.
- Embodiments presented below utilize a constant pulse duration of Tp for each laser pulse. Other embodiments may utilize varying laser pulse lengths, thereby enabling dividing the overall distance range of interest into unequal sub-ranges.
- Embodiments presented below utilize a pixel exposure time of Tp. Other embodiments may utilize varying pixel exposure times.
- FIG. 5 is a simplified flowchart of a method for performing three-dimensional imaging, according to a preferred embodiment of the present invention.
- the optical data is gathered by an optical sensor array, having an array of pixels.
- each of the array pixels is exposed to a back-reflected laser pulse.
- Each of the array pixels is exposed with a shutter timing corresponding to a respective distance sub-range.
- phrases relating to exposure of an optical sensor to back-reflection of a laser pulse mean that the optical sensor is exposed in the direction towards which the laser pulse was transmitted, so that reflections of the laser pulse back from objects will arrive at the optical sensor.
- the phrase does not imply that the shutter timing is set so that the optical sensor is necessarily exposed at the time that the reflections arrive at the optical sensor.
- In 510, the presence of an object is determined from the pixel output level, preferably by comparing the pixel output to a threshold. If the pixel output exceeds the threshold, the pixel is designated as having an object in the current distance sub-range.
- the threshold may differ for different sub-ranges and/or specific array pixels.
- the determination made in 510 is whether an object is or is not present for the given pixel, and does not necessarily indicate that the object is the same for multiple pixels.
- the distance from the pixel to the object is calculated for each pixel for which an object was detected in 510. The details of how the distance calculation is made are presented in more detail below.
- the method may further include outputting an array of the distances obtained in 520 and/or a three-dimensional image generated in accordance with the determined distances.
- the method may further include transmitting a laser pulse of duration Tp.
- the timing of the pixel exposure is preferably determined relative to the time of transmission and duration of the laser pulse whose back-reflection is being collected.
- the duration of the pixel exposure period equals the laser pulse duration for some or all of the sensor array pixels.
- each row in the array may be individually exposed to back-reflected laser pulses from a predefined sub-range.
- the beginning of the exposure period is preferably selected in accordance with the initial distance of the respective distance sub-range.
- the duration of the exposure period for a given pixel is selected in accordance with the required length of the respective distance sub-range.
- the overall distance may be partitioned into N coarse sub-ranges Δd, as shown in Fig. 6a.
- the length of the sub-ranges, Δd, corresponds to the round-trip travel of a laser pulse with duration Tp (i.e., Δd = c·Tp/2).
- the maximum TOF may be expressed as a function of Tp as follows:
- TOF_max = T0 + N · Tp (6)
- T0 denotes a fixed Time of Interest (TOI) from which data should be recorded.
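The partition above fixes both the sub-range boundaries and the matching shutter-open times; a sketch with illustrative names (T0 is the fixed time of interest from Eqn. 6, and the round-trip factor gives each sub-range a length of c·Tp/2):

```python
# Coarse partition sketch: sub-range k (k = 1..N) is gated by a shutter
# opening at T0 + (k-1)*Tp after pulse emission; the corresponding
# starting distance follows from the round-trip relation of Eqn. 1.
C = 3.0e8  # speed of light, m/s

def shutter_start(k, t0, tp):
    """Shutter opening time after pulse emission, for sub-range k."""
    return t0 + (k - 1) * tp

def subrange_start_distance(k, t0, tp):
    """Starting distance of sub-range k (one way, round trip halved)."""
    return C * shutter_start(k, t0, tp) / 2.0

# With T0 = 0 and Tp = 100 ns, sub-range 3 starts at 30 m and each
# sub-range spans c*Tp/2 = 15 m.
```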
- the pixels of each row i are preferably exposed with the same shutter timing.
- the pixels of successive rows of the array are exposed with a shutter timing corresponding to successive distance ranges.
- a pulse length of Tp corresponds to a sub-range of length Δd, where Δd is equal to the distance traveled by the laser pulse during the exposure time interval.
- Fig. 6b illustrates the distance sub-ranges captured by each row of the sensor array during the first frame t0-t1, in an exemplary embodiment of the present invention.
- the row shutter timing is set to capture reflections from successive distance sub-ranges in successive frames.
- the exposure process may be repeated N times, once for each distance sub-range.
- the shutter timing for each row may be shifted to the next sub-range in each subsequent frame
- Fig. 7 illustrates the sub-ranges captured by each row at the end of the coarse ranging phase, in an exemplary embodiment of the present invention.
- the first row of the sensor array collects data from the j-th sub-range
- the second row of the sensor array collects data from the subsequent sub-range, and so forth.
- in frame j, the i-th row of the sensor array collects data from sub-range [i + (j − 1)]N
- the quantity [i + (j − 1)]N is the result of a modulo operation, given by: [i + (j − 1)]N = ((i + j − 2) mod N) + 1
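The row-to-sub-range rotation can be sketched as below; note the modulo expression is reconstructed from garbled source text, so the exact indexing should be treated as an assumption:

```python
# Row-rotation sketch: which sub-range (1..n) row i images in frame j
# (both 1-based). After n frames every row has imaged all n sub-ranges.
def subrange_for_row(i, j, n):
    """Reconstructed mapping [i + (j - 1)]_N = ((i + j - 2) mod N) + 1."""
    return (i + j - 2) % n + 1

# Row 1 images sub-range j in frame j; higher rows wrap around modulo n.
```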
- the sub-range coordinates (or a sub-range identifier) are stored for the given pixel.
- the location of the 3D objects within the FOV is coarsely known.
- all of the array pixels are exposed with the same shutter timing for each frame. That is, all of the array pixels collect data for the same sub-range. Data for different sub-ranges is collected in subsequent frames.
Fine ranging phase
- a map of approximate distances to objects within the range of interest is known.
- An example of a distance map for an N×M pixel array is shown in Fig. 8.
- an additional fine ranging phase is performed to improve distance measurement accuracy within the sub-range (520 of Fig. 5).
- the fine-ranging is performed only for pixels (or rows) for which objects were found in the coarse ranging phase, and/or only for sub-ranges in which the object was found.
- the coarse range map may first be analyzed to identify regions of interest, for which fine-ranging is performed.
- the value of V1 or V2 may be very small (e.g., 5 μV).
- the measurement process is repeated using laser pulse re-emitting for signal accumulation.
- the SNR improvement is on the order of the square root of m, where m is the number of times the process is repeated. Improved SNR provides a longer distance measurement range and increases the maximum unambiguous range.
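The square-root-of-m gain can be illustrated numerically (a generic signal-averaging sketch, not code from the patent): accumulating m exposures grows the signal by a factor of m while uncorrelated noise grows only by the square root of m.

```python
import math

def accumulated_snr(single_shot_snr, m):
    """SNR after accumulating m exposures: signal scales with m,
    independent noise with sqrt(m), so SNR scales with sqrt(m)."""
    return single_shot_snr * math.sqrt(m)

# Accumulating 100 pulses lifts a single-shot SNR of 2 to 20.
print(accumulated_snr(2.0, 100))  # 20.0
```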
- Fig. 9 is a simplified flowchart of a method for determining a distance from a pixel to an object, according to a first preferred embodiment of the present invention.
- the pixel is exposed to a plurality of back-reflected laser pulses.
- the pixel shutter timing, relative to the respective laser pulse, corresponds to the distance sub-range in which the object was detected.
- the pixel level is accumulated over the multiple exposures. Due to the shutter timing, the pixel level is accumulated for pulses reflected from the distance sub-range of interest.
- the distance from the pixel to the object is calculated from the accumulated pixel output level. The number of exposures for which the pixel output is accumulated may be selected in order to obtain the desired signal to noise ratio.
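Since accumulation improves SNR roughly as the square root of the number of exposures, the exposure count for a desired SNR can be chosen as sketched below (the function and parameter names are illustrative, not from the patent):

```python
import math


def exposures_for_target_snr(single_pulse_snr, target_snr):
    """Smallest m such that single_pulse_snr * sqrt(m) >= target_snr."""
    return max(1, math.ceil((target_snr / single_pulse_snr) ** 2))


# A 10x SNR improvement costs roughly 100 accumulated exposures.
print(exposures_for_target_snr(2.0, 20.0))  # 100
```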
- the distance information from the coarse and/or fine ranging phases may be stored for post-processing and 3D object reconstruction.
TOF Acquisition
- the TOF (and consequently the distance) is determined using data collected by a pixel consisting of two optical sensing elements (e.g. pinned photodiodes) which may be exposed independently.
- the sensing elements are of a type suitable for CMOS fabrication.
- Fig. 10 is a simplified flowchart of a method for determining a distance to an object, according to a second preferred embodiment of the present invention.
- the optical pixel being exposed to the back-reflected laser pulse includes two sensing elements.
- the first sensing element is exposed for an initial time interval to obtain a first output level.
- the second sensing element is exposed for a successive time interval to obtain a second output level.
- the distance from the optical pixel to the object is calculated from the first and second output levels.
- the output levels may first be compared to a threshold to determine if an object is present, and the distance calculated only if there is an object.
- the length of the two exposure durations may be the laser pulse length, in order to obtain the best sub-range distance resolution. As shown below, the distance may be calculated from the ratio of the two output levels.
- Fig. 11 is a simplified exposure timing diagram for TOF data acquisition by a pixel having two sensing elements, according to a preferred embodiment of the present invention.
- the exposure of the pixel consists of two successive intervals, each of duration Tp.
- a laser pulse of duration Tp is emitted at time t_init.
- the back-reflected light arrives at the pixel at t_init + TOF.
- Sensing element S1 is exposed from time T_i to T_i + Tp, while sensing element S2 is exposed from time T_i + Tp to T_i + 2·Tp.
- the signals v_i1 and v_i2, obtained respectively from sensing elements S1 and S2, are proportional to the magnitude of the reflected pulse, Φ_L, and the background illumination, Φ_B, as follows:
- the TOF is calculated as:
- the background illumination Φ_B is measured beforehand, without sending a laser pulse, under known exposure conditions, and stored.
- the TOF is calculated after subtracting the background illumination Φ_B from the signals v_i1 and v_i2.
- the calculated TOF does not depend on the responsivity of the sensor or on the reflectivity of the target object.
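A minimal sketch of the two-window ratio calculation, assuming the reflected pulse straddles the boundary between the two exposure windows and that the stored background level is per-window; all names and the calling convention are illustrative:

```python
def two_bucket_tof(v1, v2, v_bg, t_pulse, t_window_start, t_emit):
    """Estimate TOF from the outputs of the two sensing elements.

    v1, v2 are the first/second window outputs and v_bg the previously
    stored background contribution per window.  After background
    subtraction, the fraction of pulse energy falling in the second
    window locates the arrival time within the first window.
    """
    s1 = v1 - v_bg                      # laser-only signal, window 1
    s2 = v2 - v_bg                      # laser-only signal, window 2
    arrival = t_window_start + t_pulse * s2 / (s1 + s2)
    return arrival - t_emit


tof = two_bucket_tof(v1=3.0, v2=1.0, v_bg=0.5, t_pulse=30e-9,
                     t_window_start=20e-9, t_emit=0.0)
distance = 3.0e8 * tof / 2.0            # one-way distance in metres
print(tof, distance)                    # roughly 25 ns, i.e. about 3.75 m
```

Because sensor responsivity and target reflectivity scale s1 and s2 equally, they cancel in the ratio, consistent with the statement above.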
- Fig. 12 is a simplified exposure timing diagram for pixels in a pixel array during the fine ranging phase, for an exemplary embodiment of the present invention.
- Each pixel includes two sensing elements with independent shutter timing.
- S_i1 and S_i2 denote the shutter timing applied to the first and second sensing elements, respectively, for a pixel within the i-th row of the sensor array.
- the shaded regions represent time intervals during which the pixel must not be exposed, in order to eliminate interference from near and far objects outside the current sub-range.
- the output levels of both sensing elements are preferably accumulated for m iterations prior to calculating the object distance. For example, consider the distance map acquired for an N×M pixel array at the end of the coarse ranging phase shown in Fig. 8. An object in the third sub-range has been detected by certain pixels in rows i and i+1. For the subsequent m iterations, laser pulses continue to be emitted, and the pixels which received reflections from the object are exposed at the third sub-range and read, as exemplarily shown in Fig. 13. The readouts from these m iterations are accumulated to determine the location of the object within the third sub-range, and improve the distance measurement precision.
- the SNR of some embodiments is dependent on the noise sources affecting the detected sensor output signal.
- the total noise on the sensing element (e.g., photodiode) may be expressed as the sum of five independent components:
- CDS: Correlated Double Sampling
- the charge (including noise) on the sensing element associated with v_i1 and v_i2 may thus be expressed as:
- Q_1 = (1/q)·[I_L·(Tp - Δt) + I_B·Tp + Q_shot + Q_readout] electrons
- Q_2 = (1/q)·[I_L·Δt + I_B·Tp + Q_shot + Q_readout] electrons, provided Q(Tp) ≤ Q_max (Q_max is the saturation charge, also referred to as the well capacity).
- I_L and I_B are the laser- and background-induced photoelectric currents, respectively.
- the generated shot-noise charge Q_shot has zero mean, and its PSD is given by:
- the time resolution (absolute accuracy) is inversely proportional to the square root of the laser photocurrent (i.e., optical power), as expected from Poisson statistics.
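This Poisson scaling can be illustrated with a toy Monte-Carlo of the two-window photon split, ignoring background and readout noise (all parameters and names here are illustrative assumptions): increasing the detected signal photons 100-fold should shrink the timing spread roughly 10-fold.

```python
import random
import statistics


def tof_spread(n_photons, trials=800, t_pulse=1.0, frac=0.3, seed=7):
    """Std-dev of the ratio-based arrival-time estimate when each of
    n_photons independently lands in the second window with
    probability frac (photon shot noise only)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        n2 = sum(rng.random() < frac for _ in range(n_photons))
        estimates.append(t_pulse * n2 / n_photons)
    return statistics.pstdev(estimates)


s_few, s_many = tof_spread(64), tof_spread(6400)
print(s_few / s_many)   # close to sqrt(6400 / 64) = 10
```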
- the result of Eqn. 17 may be reduced by the square root of the number of measurement iterations, as mentioned above.
- the final range measurement is determined after the completion of the coarse ranging and the fine ranging phases.
- the durations for completing these phases are:
- T_frame = T_coarse + T_fine
- M denotes the number of columns in the sensor (with column parallel readout).
- Imager 1400 includes sensor array 1410, exposure controller 1420 and distance calculator 1430.
- Sensor array 1410 includes multiple optical pixels for absorbing back-reflection of a laser pulse, plus any other required circuitry (e.g. for sensor array readout).
- the pixels may be organized as an NxM array.
- Exposure controller 1420 controls the exposure timing of each pixel, so as to expose it to back-reflection from the pixel's current distance sub-range. The controller adjusts the initial time and duration of the exposure based on the starting distance and length of the current distance sub-range.
- Exposure controller 1420 may also control the pulse timing of laser 1440, in order to accurately synchronize the timing between the laser pulse transmission and the pixel exposure.
- Exposure controller 1420 adjusts the pixel exposure according to the desired method for scanning the different sub-ranges of interest. For example, each row of pixels may be exposed to a different sub-range during a single frame (see Fig. 7). Additionally or alternately, a given row of pixels may be exposed to different subranges in different frames.
- Distance calculator 1430 calculates the distance to an object from a pixel output level, within the distance sub-range in which an object was detected for the given pixel.
- Distance calculator 1430 may include object detector 1450, which determines whether an object is present in a pixel's respective distance sub-range. In one embodiment, the presence of an object is determined by comparing the pixel output level to a threshold. An object is considered to be present only if the threshold is exceeded.
- Imager 1400 may include output accumulator 1460 which accumulates the pixel output level over multiple exposures. Distance calculator 1430 may then calculate the distance to the object from the accumulated pixel output level.
- Imager 1400 may further include image generator 1470, which generates a three-dimensional image from the distances provided by distance calculator 1430, and outputs the image.
- each pixel includes two optical sensing elements, which have separate shutter controls.
- the sensing elements may be shuttered by independent trigger signals, or may be shuttered with an automatic offset from a single trigger signal.
- the distance/TOF for an object in a given sub-range is preferably calculated as a function of both sensing element output signals.
- Fig. 15 is a simplified block diagram of an optical pixel for TOF imaging, according to a preferred embodiment of the present invention.
- Pixel 1510 includes two optical sensing elements, 1515.1 and 1515.2, and distance calculator 1530.
- the two sensing elements 1515.1 and 1515.2 are exposed for successive exposure periods, where the initial time and duration of each exposure period are preferably selected in accordance with the distance and length of the current distance sub-range.
- Distance calculator 1530 calculates the distance from pixel 1510 to an object from the first and second output levels. The distance may be calculated only after distance calculator 1530 determines that an object is present in the current sub-range. Distance calculator 1530 may subtract the background noise level from the sensing element output levels prior to calculating the distance.
- distance calculator 1530 calculates the distance from the pixel to the object according to Eqn. 11a or 11b above.
- 3D sensor 1600 includes a CMOS image sensor (CIS) 1610, with an NxM array of CMOS pixels.
- Sensor 1600 further includes controller 1620, analog peripheral circuits 1630, and two channels of readout circuit (1640.1 and 1640.2 respectively).
- Each pixel consists of two sensing elements (e.g., photodiodes), and allows individual exposure of each sensing element. Additional circuits (not shown) may be integrated at the periphery of the pixel array for row and column fixed-pattern noise reduction.
- Controller 1620 manages system synchronization and generates the required control signals.
- Analog peripheral circuits 1630 provide reference voltages and currents, and low-jitter clocks for proper operation of the imager and the readout circuits.
- Readout circuits 1640.1 and 1640.2 consist of two channels, for even and odd rows respectively, which relaxes the readout timing constraints. The output data may be steered to an off-chip digital computation circuit.
- Each readout circuit comprises a column decoder (not shown), Sample & Hold (S/H) circuits (1650.1/1650.2), column ADCs (1660.1/1660.2), and a RAM block (shown as ping-pong memory 1670.1/1670.2). The signal path is now described.
- the low-noise S/H circuits 1650.1/1650.2 hold a stable pixel output voltage, while the high-performance (i.e., high conversion rate and accuracy) column ADCs 1660.1/1660.2 provide the corresponding digital representation.
- the role of RAM block 1670.1/1670.2 is twofold: (1) storing the digital value at the end of the A/D conversion, and (2) enabling readout of the stored data to an off-chip digital computation circuit.
- the RAM architecture is dual- port ping-pong RAM.
- the ping-pong RAM enables exchanging blocks of data between processors rather than individual words.
- the RAM block in each channel may be partitioned into two sub-blocks serving two successive rows (i.e., 2i and 2i+2, or 2i-1 and 2i+1).
- each sub-block operates in one mode at a time (e.g., write for the 2i-th row, or read out for the (2i+2)-th row).
- at the end of each conversion the two sub-blocks are exchanged. That is, the mode is switched to read out of the 2i-th row or to write to the (2i+2)-th row.
- This approach allows a dual-port function with performance equal to that of individual RAM.
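The ping-pong behaviour can be modelled as two buffers whose write and read roles swap after each conversion (a toy sketch; the class and method names are illustrative, not from the patent):

```python
class PingPongRAM:
    """Two RAM sub-blocks: one is written with the current row's A/D
    results while the other is read out; swap() exchanges the roles."""

    def __init__(self, n_columns):
        self._blocks = [[0] * n_columns, [0] * n_columns]
        self._write = 0                 # block currently being written

    def write_row(self, samples):
        self._blocks[self._write][:] = samples

    def read_row(self):
        # Reads come from the block filled before the last swap.
        return list(self._blocks[1 - self._write])

    def swap(self):
        self._write = 1 - self._write


ram = PingPongRAM(4)
ram.write_row([1, 2, 3, 4])     # ADC fills block A
ram.swap()                      # roles exchange at end of conversion
ram.write_row([5, 6, 7, 8])     # block B is now written...
readout = ram.read_row()        # ...while block A is read out
print(readout)                  # [1, 2, 3, 4]
```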
- the 3D imaging techniques described above provide a high-performance TOF- based 3D imaging method and system.
- the system may be constructed using standard CMOS technology.
- compositions, methods or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
- a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range.
- a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
- a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range.
- the phrases "ranging/ranges between" a first indicated number and a second indicated number, and "ranging/ranges from" a first indicated number "to" a second indicated number, are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
Abstract
The invention concerns a method for determining a distance to an object. The method consists of first exposing a first optical sensing element of an optical pixel to a back-reflected laser pulse during an initial time interval to obtain a first output level; then exposing a second optical sensing element of the optical pixel to the back-reflected laser pulse during a successive time interval to obtain a second output level; and finally calculating the distance to the object from the first and second output levels. For a sensor array, three-dimensional imaging may be performed by a two-phase method, the first phase of which determines whether an object is present in a pixel's distance sub-range. In some embodiments, the distance to the object is calculated from a pixel output level accumulated over multiple laser pulses.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US92917307P | 2007-06-15 | 2007-06-15 | |
US60/929,173 | 2007-06-15 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2008152647A2 true WO2008152647A2 (fr) | 2008-12-18 |
WO2008152647A3 WO2008152647A3 (fr) | 2010-02-25 |
Family
ID=40130290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL2008/000812 WO2008152647A2 (fr) | 2007-06-15 | 2008-06-15 | Procédé et appareil d'imagerie tridimensionnelle |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2008152647A2 (fr) |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2322953A1 (fr) * | 2008-07-30 | 2011-05-18 | National University Corporation Shizuoka University | Capteur d'image de distance et procédé pour générer un signal d'image par le procédé de temps de vol |
WO2011150574A1 (fr) * | 2010-06-04 | 2011-12-08 | 深圳泰山在线科技有限公司 | Capteur d'image cmos, procédé de commande de synchronisation et procédé d'exposition associés |
FR2998666A1 (fr) * | 2012-11-27 | 2014-05-30 | E2V Semiconductors | Procede de production d'images avec information de profondeur et capteur d'image |
FR2998683A1 (fr) * | 2012-11-27 | 2014-05-30 | Saint Louis Inst | Procede d imagerie 3d |
US20150109414A1 (en) * | 2013-10-17 | 2015-04-23 | Amit Adam | Probabilistic time of flight imaging |
JP2015210176A (ja) * | 2014-04-25 | 2015-11-24 | キヤノン株式会社 | 撮像装置及び撮像装置の駆動方法 |
CN105093206A (zh) * | 2014-05-19 | 2015-11-25 | 洛克威尔自动控制技术股份有限公司 | 飞行时间传感器中的波形重构 |
US9256944B2 (en) | 2014-05-19 | 2016-02-09 | Rockwell Automation Technologies, Inc. | Integration of optical area monitoring with industrial machine control |
WO2016034408A1 (fr) * | 2014-09-03 | 2016-03-10 | Basler Ag | Procédé et dispositif simplifiant l'enregistrement d'images en profondeur |
WO2016171913A1 (fr) * | 2015-04-21 | 2016-10-27 | Microsoft Technology Licensing, Llc | Simulation de temps de vol de phénomènes lumineux à trajets multiples |
WO2016186775A1 (fr) * | 2015-05-17 | 2016-11-24 | Microsoft Technology Licensing, Llc | Caméra temps de vol à déclenchement périodique |
EP3147689A1 (fr) * | 2015-09-28 | 2017-03-29 | Sick Ag | Procede de detection d'un objet |
US9625108B2 (en) | 2014-10-08 | 2017-04-18 | Rockwell Automation Technologies, Inc. | Auxiliary light source associated with an industrial application |
US9696424B2 (en) | 2014-05-19 | 2017-07-04 | Rockwell Automation Technologies, Inc. | Optical area monitoring with spot matrix illumination |
JP2017151062A (ja) * | 2016-02-26 | 2017-08-31 | 株式会社東京精密 | 表面形状測定装置及び表面形状測定方法 |
WO2017178711A1 (fr) * | 2016-04-13 | 2017-10-19 | Oulun Yliopisto | Dispositif de mesure de distance et émetteur, récepteur et procédé associés |
EP3460517A1 (fr) * | 2017-09-20 | 2019-03-27 | Industry-Academic Cooperation Foundation, Yonsei University | Capteur lidar pour véhicules et son procédé de fonctionnement |
EP3477340A1 (fr) * | 2017-10-27 | 2019-05-01 | Omron Corporation | Capteur de déplacement |
US10311378B2 (en) | 2016-03-13 | 2019-06-04 | Microsoft Technology Licensing, Llc | Depth from time-of-flight using machine learning |
CN110073243A (zh) * | 2016-10-31 | 2019-07-30 | 杰拉德·迪尔克·施密茨 | 利用动态体素探测的快速扫描激光雷达 |
CN110168403A (zh) * | 2017-02-28 | 2019-08-23 | 索尼半导体解决方案公司 | 距离测量装置、距离测量方法和距离测量系统 |
JPWO2018131514A1 (ja) * | 2017-01-13 | 2019-11-07 | ソニー株式会社 | 信号処理装置、信号処理方法、およびプログラム |
EP3627466A1 (fr) * | 2018-09-24 | 2020-03-25 | Rockwell Automation Technologies, Inc. | Système et procédé de détection d'intrusion d'objet |
US10725177B2 (en) | 2018-01-29 | 2020-07-28 | Gerard Dirk Smits | Hyper-resolved, high bandwidth scanned LIDAR systems |
CN111758047A (zh) * | 2017-12-26 | 2020-10-09 | 罗伯特·博世有限公司 | 单芯片rgb-d相机 |
JP2020180941A (ja) * | 2019-04-26 | 2020-11-05 | 株式会社デンソー | 光測距装置およびその方法 |
CN112034471A (zh) * | 2019-06-04 | 2020-12-04 | 精準基因生物科技股份有限公司 | 飞行时间测距装置以及飞行时间测距方法 |
WO2021020496A1 (fr) * | 2019-08-01 | 2021-02-04 | 株式会社ブルックマンテクノロジ | Appareil de capture d'image de distance et procédé de capture d'image de distance |
US10935989B2 (en) | 2017-10-19 | 2021-03-02 | Gerard Dirk Smits | Methods and systems for navigating a vehicle including a novel fiducial marker system |
US10962867B2 (en) | 2007-10-10 | 2021-03-30 | Gerard Dirk Smits | Method, apparatus, and manufacture for a tracking camera or detector with fast asynchronous triggering |
US10969476B2 (en) | 2018-07-10 | 2021-04-06 | Rockwell Automation Technologies, Inc. | High dynamic range for sensing systems and methods |
US10996324B2 (en) | 2018-05-14 | 2021-05-04 | Rockwell Automation Technologies, Inc. | Time of flight system and method using multiple measuring sequences |
US11002836B2 (en) | 2018-05-14 | 2021-05-11 | Rockwell Automation Technologies, Inc. | Permutation of measuring capacitors in a time-of-flight sensor |
CN112888958A (zh) * | 2018-10-16 | 2021-06-01 | 布鲁克曼科技株式会社 | 测距装置、摄像头以及测距装置的驱动调整方法 |
US11067794B2 (en) | 2017-05-10 | 2021-07-20 | Gerard Dirk Smits | Scan mirror systems and methods |
US11137497B2 (en) | 2014-08-11 | 2021-10-05 | Gerard Dirk Smits | Three-dimensional triangulation and time-of-flight based tracking systems and methods |
US11243294B2 (en) | 2014-05-19 | 2022-02-08 | Rockwell Automation Technologies, Inc. | Waveform reconstruction in a time-of-flight sensor |
WO2022162328A1 (fr) * | 2021-02-01 | 2022-08-04 | Keopsys Industries | Procédé d'acquisition d'images 3d par balayage en ligne et détection à portes temporelles |
US11709236B2 (en) | 2016-12-27 | 2023-07-25 | Samsung Semiconductor, Inc. | Systems and methods for machine perception |
US11714170B2 (en) | 2015-12-18 | 2023-08-01 | Samsung Semiconuctor, Inc. | Real time position sensing of objects |
US11829059B2 (en) | 2020-02-27 | 2023-11-28 | Gerard Dirk Smits | High resolution scanning of remote objects with fast sweeping laser beams and signal recovery by twitchy pixel array |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6088099A (en) * | 1996-10-30 | 2000-07-11 | Applied Spectral Imaging Ltd. | Method for interferometer based spectral imaging of moving objects |
US20030063775A1 (en) * | 1999-09-22 | 2003-04-03 | Canesta, Inc. | Methods for enhancing performance and data acquired from three-dimensional image systems |
US20040051711A1 (en) * | 1996-04-24 | 2004-03-18 | Jerry Dimsdale | Integrated system for quickly and accurately imaging and modeling three-dimensional objects |
US20060214121A1 (en) * | 2003-10-29 | 2006-09-28 | Olaf Schrey | Distance sensor and method for detecting a distance |
US20060269896A1 (en) * | 2005-05-27 | 2006-11-30 | Yongqian Liu | High speed 3D scanner and uses thereof |
2008
- 2008-06-15 WO PCT/IL2008/000812 patent/WO2008152647A2/fr active Application Filing
Cited By (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10962867B2 (en) | 2007-10-10 | 2021-03-30 | Gerard Dirk Smits | Method, apparatus, and manufacture for a tracking camera or detector with fast asynchronous triggering |
EP2322953A4 (fr) * | 2008-07-30 | 2012-01-25 | Univ Shizuoka Nat Univ Corp | Capteur d'image de distance et procédé pour générer un signal d'image par le procédé de temps de vol |
US8537218B2 (en) | 2008-07-30 | 2013-09-17 | National University Corporation Shizuoka University | Distance image sensor and method for generating image signal by time-of-flight method |
EP2322953A1 (fr) * | 2008-07-30 | 2011-05-18 | National University Corporation Shizuoka University | Capteur d'image de distance et procédé pour générer un signal d'image par le procédé de temps de vol |
WO2011150574A1 (fr) * | 2010-06-04 | 2011-12-08 | 深圳泰山在线科技有限公司 | Capteur d'image cmos, procédé de commande de synchronisation et procédé d'exposition associés |
CN102907084A (zh) * | 2010-06-04 | 2013-01-30 | 深圳泰山在线科技有限公司 | Cmos图像传感器及其时序控制方法和曝光方法 |
US8964083B2 (en) | 2010-06-04 | 2015-02-24 | Shenzhen Taishan Online Technology Co., Ltd. | CMOS image sensor, timing control method and exposure method thereof |
US9699442B2 (en) | 2012-11-27 | 2017-07-04 | E2V Semiconductors | Method for producing images with depth information and image sensor |
FR2998666A1 (fr) * | 2012-11-27 | 2014-05-30 | E2V Semiconductors | Procede de production d'images avec information de profondeur et capteur d'image |
FR2998683A1 (fr) * | 2012-11-27 | 2014-05-30 | Saint Louis Inst | Procede d imagerie 3d |
WO2014082864A1 (fr) * | 2012-11-27 | 2014-06-05 | E2V Semiconductors | Procede de production d'images avec information de profondeur et capteur d'image |
EP2735886A3 (fr) * | 2012-11-27 | 2014-08-13 | I.S.L. Institut Franco-Allemand de Recherches de Saint-Louis | Procédé d'imagerie 3D |
CN105723238A (zh) * | 2013-10-17 | 2016-06-29 | 微软技术许可有限责任公司 | 概率飞行时间成像 |
KR102233419B1 (ko) | 2013-10-17 | 2021-03-26 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | 확률적 tof 이미징 |
US20150109414A1 (en) * | 2013-10-17 | 2015-04-23 | Amit Adam | Probabilistic time of flight imaging |
WO2015057535A1 (fr) * | 2013-10-17 | 2015-04-23 | Microsoft Corporation | Imagerie probabiliste de temps de vol |
US10063844B2 (en) | 2013-10-17 | 2018-08-28 | Microsoft Technology Licensing, Llc. | Determining distances by probabilistic time of flight imaging |
KR20160071390A (ko) * | 2013-10-17 | 2016-06-21 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | 확률적 tof 이미징 |
JP2015210176A (ja) * | 2014-04-25 | 2015-11-24 | キヤノン株式会社 | 撮像装置及び撮像装置の駆動方法 |
US9477907B2 (en) | 2014-05-19 | 2016-10-25 | Rockwell Automation Technologies, Inc. | Integration of optical area monitoring with industrial machine control |
US9921300B2 (en) | 2014-05-19 | 2018-03-20 | Rockwell Automation Technologies, Inc. | Waveform reconstruction in a time-of-flight sensor |
US9256944B2 (en) | 2014-05-19 | 2016-02-09 | Rockwell Automation Technologies, Inc. | Integration of optical area monitoring with industrial machine control |
CN105093206A (zh) * | 2014-05-19 | 2015-11-25 | 洛克威尔自动控制技术股份有限公司 | 飞行时间传感器中的波形重构 |
EP2947477A3 (fr) * | 2014-05-19 | 2015-12-16 | Rockwell Automation Technologies, Inc. | Reconstruction de forme d'onde dans un capteur à temps de vol |
US11243294B2 (en) | 2014-05-19 | 2022-02-08 | Rockwell Automation Technologies, Inc. | Waveform reconstruction in a time-of-flight sensor |
US9696424B2 (en) | 2014-05-19 | 2017-07-04 | Rockwell Automation Technologies, Inc. | Optical area monitoring with spot matrix illumination |
US11137497B2 (en) | 2014-08-11 | 2021-10-05 | Gerard Dirk Smits | Three-dimensional triangulation and time-of-flight based tracking systems and methods |
WO2016034408A1 (fr) * | 2014-09-03 | 2016-03-10 | Basler Ag | Procédé et dispositif simplifiant l'enregistrement d'images en profondeur |
US9625108B2 (en) | 2014-10-08 | 2017-04-18 | Rockwell Automation Technologies, Inc. | Auxiliary light source associated with an industrial application |
WO2016171913A1 (fr) * | 2015-04-21 | 2016-10-27 | Microsoft Technology Licensing, Llc | Simulation de temps de vol de phénomènes lumineux à trajets multiples |
US10062201B2 (en) | 2015-04-21 | 2018-08-28 | Microsoft Technology Licensing, Llc | Time-of-flight simulation of multipath light phenomena |
WO2016186775A1 (fr) * | 2015-05-17 | 2016-11-24 | Microsoft Technology Licensing, Llc | Caméra temps de vol à déclenchement périodique |
US9864048B2 (en) | 2015-05-17 | 2018-01-09 | Microsoft Technology Licensing, Llc. | Gated time of flight camera |
DE102015116368A1 (de) | 2015-09-28 | 2017-03-30 | Sick Ag | Verfahren zur Detektion eines Objekts |
EP3147689A1 (fr) * | 2015-09-28 | 2017-03-29 | Sick Ag | Procede de detection d'un objet |
US9989641B2 (en) | 2015-09-28 | 2018-06-05 | Sick Ag | Method of detecting an object |
JP2017106894A (ja) * | 2015-09-28 | 2017-06-15 | ジック アーゲー | 物体検出方法 |
US11714170B2 (en) | 2015-12-18 | 2023-08-01 | Samsung Semiconuctor, Inc. | Real time position sensing of objects |
JP2017151062A (ja) * | 2016-02-26 | 2017-08-31 | 株式会社東京精密 | 表面形状測定装置及び表面形状測定方法 |
US10311378B2 (en) | 2016-03-13 | 2019-06-04 | Microsoft Technology Licensing, Llc | Depth from time-of-flight using machine learning |
US11300666B2 (en) | 2016-04-13 | 2022-04-12 | Oulun Yliopisto | Distance measuring device and transmitter, receiver and method thereof |
WO2017178711A1 (fr) * | 2016-04-13 | 2017-10-19 | Oulun Yliopisto | Dispositif de mesure de distance et émetteur, récepteur et procédé associés |
CN110073243A (zh) * | 2016-10-31 | 2019-07-30 | 杰拉德·迪尔克·施密茨 | 利用动态体素探测的快速扫描激光雷达 |
EP3532863A4 (fr) * | 2016-10-31 | 2020-06-03 | Gerard Dirk Smits | Lidar à balayage rapide avec sondage par voxel dynamique |
US10935659B2 (en) | 2016-10-31 | 2021-03-02 | Gerard Dirk Smits | Fast scanning lidar with dynamic voxel probing |
CN110073243B (zh) * | 2016-10-31 | 2023-08-04 | 杰拉德·迪尔克·施密茨 | 利用动态体素探测的快速扫描激光雷达 |
US11709236B2 (en) | 2016-12-27 | 2023-07-25 | Samsung Semiconductor, Inc. | Systems and methods for machine perception |
US11585898B2 (en) | 2017-01-13 | 2023-02-21 | Sony Group Corporation | Signal processing device, signal processing method, and program |
JP7172603B2 (ja) | 2017-01-13 | 2022-11-16 | ソニーグループ株式会社 | 信号処理装置、信号処理方法、およびプログラム |
EP3570066A4 (fr) * | 2017-01-13 | 2019-12-25 | Sony Corporation | Dispositif de traitement de signal, procédé de traitement de signal et programme |
JPWO2018131514A1 (ja) * | 2017-01-13 | 2019-11-07 | ソニー株式会社 | 信号処理装置、信号処理方法、およびプログラム |
CN110168403A (zh) * | 2017-02-28 | 2019-08-23 | 索尼半导体解决方案公司 | 距离测量装置、距离测量方法和距离测量系统 |
EP3591437A4 (fr) * | 2017-02-28 | 2020-07-22 | Sony Semiconductor Solutions Corporation | Dispositif de mesure de distance, procédé de mesure de distance et système de mesure de distance |
CN110168403B (zh) * | 2017-02-28 | 2023-12-01 | 索尼半导体解决方案公司 | 距离测量装置、距离测量方法和距离测量系统 |
JP7027403B2 (ja) | 2017-02-28 | 2022-03-01 | ソニーセミコンダクタソリューションズ株式会社 | 測距装置、および測距方法 |
JPWO2018159289A1 (ja) * | 2017-02-28 | 2019-12-19 | ソニーセミコンダクタソリューションズ株式会社 | 測距装置、測距方法、および測距システム |
US11067794B2 (en) | 2017-05-10 | 2021-07-20 | Gerard Dirk Smits | Scan mirror systems and methods |
US10935658B2 (en) | 2017-09-20 | 2021-03-02 | Industry-Academic Cooperation Foundation, Yonsei University | Lidar sensor for vehicles and method of operating the same |
EP3460517A1 (fr) * | 2017-09-20 | 2019-03-27 | Industry-Academic Cooperation Foundation, Yonsei University | Capteur lidar pour véhicules et son procédé de fonctionnement |
US10935989B2 (en) | 2017-10-19 | 2021-03-02 | Gerard Dirk Smits | Methods and systems for navigating a vehicle including a novel fiducial marker system |
EP3477340A1 (fr) * | 2017-10-27 | 2019-05-01 | Omron Corporation | Capteur de déplacement |
US11294057B2 (en) | 2017-10-27 | 2022-04-05 | Omron Corporation | Displacement sensor |
CN111758047B (zh) * | 2017-12-26 | 2024-01-19 | 罗伯特·博世有限公司 | 单芯片rgb-d相机 |
CN111758047A (zh) * | 2017-12-26 | 2020-10-09 | 罗伯特·博世有限公司 | 单芯片rgb-d相机 |
US11240445B2 (en) * | 2017-12-26 | 2022-02-01 | Robert Bosch Gmbh | Single-chip RGB-D camera |
US10725177B2 (en) | 2018-01-29 | 2020-07-28 | Gerard Dirk Smits | Hyper-resolved, high bandwidth scanned LIDAR systems |
US10996324B2 (en) | 2018-05-14 | 2021-05-04 | Rockwell Automation Technologies, Inc. | Time of flight system and method using multiple measuring sequences |
US11002836B2 (en) | 2018-05-14 | 2021-05-11 | Rockwell Automation Technologies, Inc. | Permutation of measuring capacitors in a time-of-flight sensor |
US10969476B2 (en) | 2018-07-10 | 2021-04-06 | Rockwell Automation Technologies, Inc. | High dynamic range for sensing systems and methods |
EP3627466A1 (fr) * | 2018-09-24 | 2020-03-25 | Rockwell Automation Technologies, Inc. | Système et procédé de détection d'intrusion d'objet |
US10789506B2 (en) | 2018-09-24 | 2020-09-29 | Rockwell Automation Technologies, Inc. | Object intrusion detection system and method |
EP3839555A4 (fr) * | 2018-10-16 | 2022-07-06 | Brookman Technology, Inc. | Dispositif de mesure de distance, appareil de prise de vues et procédé de réglage de l'entraînement d'un dispositif de mesure de distance |
CN112888958A (zh) * | 2018-10-16 | 2021-06-01 | 布鲁克曼科技株式会社 | 测距装置、摄像头以及测距装置的驱动调整方法 |
JP7259525B2 (ja) | 2019-04-26 | 2023-04-18 | 株式会社デンソー | 光測距装置およびその方法 |
JP2020180941A (ja) * | 2019-04-26 | 2020-11-05 | 株式会社デンソー | 光測距装置およびその方法 |
CN112034471A (zh) * | 2019-06-04 | 2020-12-04 | 精準基因生物科技股份有限公司 | 飞行时间测距装置以及飞行时间测距方法 |
JP2021025833A (ja) * | 2019-08-01 | 2021-02-22 | 株式会社ブルックマンテクノロジ | 距離画像撮像装置、及び距離画像撮像方法 |
WO2021020496A1 (fr) * | 2019-08-01 | 2021-02-04 | 株式会社ブルックマンテクノロジ | Appareil de capture d'image de distance et procédé de capture d'image de distance |
JP7463671B2 (ja) | 2019-08-01 | 2024-04-09 | Toppanホールディングス株式会社 | 距離画像撮像装置、及び距離画像撮像方法 |
US11829059B2 (en) | 2020-02-27 | 2023-11-28 | Gerard Dirk Smits | High resolution scanning of remote objects with fast sweeping laser beams and signal recovery by twitchy pixel array |
WO2022162328A1 (fr) * | 2021-02-01 | 2022-08-04 | Keopsys Industries | Method for acquiring 3D images by line scanning and time-gated detection |
FR3119462A1 (fr) * | 2021-02-01 | 2022-08-05 | Keopsys Industries | Method for acquiring 3D images by line scanning and time-gated detection |
Also Published As
Publication number | Publication date |
---|---|
WO2008152647A3 (fr) | 2010-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2008152647A2 (fr) | Method and apparatus for three-dimensional imaging | |
Perenzoni et al. | A 64×64-Pixels Digital Silicon Photomultiplier Direct TOF Sensor With 100-MPhotons/s/pixel Background Rejection and Imaging/Altimeter Mode With 0.14% Precision Up To 6 km for Spacecraft Navigation and Landing | |
US11769775B2 (en) | Distance-measuring imaging device, distance measuring method of distance-measuring imaging device, and solid-state imaging device | |
US10838066B2 (en) | Solid-state imaging device, distance measurement device, and distance measurement method | |
US8642938B2 (en) | Shared time of flight pixel | |
US10681295B2 (en) | Time of flight camera with photon correlation successive approximation | |
KR102471540B1 (ko) | Method for subtracting background light from the exposure value of a pixel in an imaging array, and pixel using the same | |
US20220196812A1 (en) | Time of flight sensor | |
US20200217965A1 (en) | High dynamic range direct time of flight sensor with signal-dependent effective readout rate | |
JP6665873B2 (ja) | Photodetector | |
US9140795B2 (en) | Time of flight sensor with subframe compression and method | |
US11119196B2 (en) | First photon correlated time-of-flight sensor | |
US20130140433A1 (en) | Sensor Pixel Array and Separated Array of Storage and Accumulation with Parallel Acquisition and Readout | |
JP6709335B2 (ja) | Optical sensor, electronic device, arithmetic device, and method for measuring the distance between an optical sensor and a detection target | |
US20160097841A1 (en) | Distance-measuring/imaging apparatus, distance measuring method of the same, and solid imaging element | |
EP3370079B1 (fr) | Parameter and range extraction using processed histograms generated from pulse detection of a time-of-flight sensor | |
CN110221273B (zh) | Time-of-flight depth camera and single-frequency modulation-demodulation distance measurement method | |
US10520590B2 (en) | System and method for ranging a target with a digital-pixel focal plane array | |
WO2018181013A1 (fr) | Light detector | |
US20210270969A1 (en) | Enhanced depth mapping using visual inertial odometry | |
Charbon | Introduction to time-of-flight imaging | |
Bellisai et al. | Indirect time-of-flight 3D ranging based on SPADs | |
Bellisai et al. | Low-power 20-meter 3D ranging SPAD camera based on continuous-wave indirect time-of-flight | |
Stoppa et al. | Single-photon detectors for time-of-flight range imaging | |
Stoppa et al. | Scannerless 3D imaging sensors |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08763570 Country of ref document: EP Kind code of ref document: A2 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 08763570 Country of ref document: EP Kind code of ref document: A2 |