WO2016034408A1 - Method and device simplifying the recording of depth images - Google Patents

Method and device simplifying the recording of depth images

Info

Publication number
WO2016034408A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
pulse
signal
signals
light pulse
Application number
PCT/EP2015/068963
Other languages
German (de)
English (en)
Inventor
Jörg Kunze
Original Assignee
Basler Ag
Application filed by Basler Ag filed Critical Basler Ag
Priority to JP2017512300A (published as JP2017530344A)
Publication of WO2016034408A1


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/4915: Details of non-pulse systems; receivers; time delay measurement, e.g. operational details for pixel components; phase measurement
    • G01S17/10: Systems determining position data of a target, for measuring distance only, using transmission of interrupted, pulse-modulated waves
    • G01S17/36: Systems determining position data of a target, for measuring distance only, using transmission of continuous waves, with phase comparison between the received signal and the contemporaneously transmitted signal
    • G01S17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar

Definitions

  • The invention relates to an apparatus and a method for detecting a three-dimensional depth image based on image information from an image sensor with a one- or two-dimensional pixel arrangement.
  • digital cameras are used for the electronic capture of images nowadays.
  • Such a digital camera is described, for example, in US4131919A and in EP2367360A2 and is hereinafter referred to as a conventional digital camera.
  • Brightness images are taken using conventional digital cameras. In such a brightness image, brightness values are assigned to the pixels.
  • Fig. 1 shows a schematic construction of a conventional digital camera 10 which captures a scene 11 consisting of objects 17. The scene is imaged by means of an optical system 12, e.g. a lens, onto at least one image sensor 13, whose signals are converted by an electronic image processing device 14 into a digital image, which is output by means of an interface 15 via a signal transmission line 16.
  • Alternatively, storage may take place beforehand in the electronic image processing device 14, or the signal transmission line 16 may lead to a storage medium (not shown) in which the image is stored.
  • Fig. 2 shows a schematic structure of an image sensor 170 for conventional cameras, which will be referred to as a conventional image sensor.
  • Image sensors usually consist of a periodic arrangement of picture elements 171 (hereinafter referred to as pixels).
  • Predominantly one-dimensional arrays are referred to as line sensors, and predominantly two-dimensional arrays are referred to as area sensors.
  • the image sensor shown in FIG. 2 is thus an area sensor with conventional pixels 171.
  • The pixels 171 have in common that they each have a photosensitive area 172, typically a photodiode (PD) or a so-called pinned photodiode (PPD), which is designed so that, during an exposure time, it generates an electrical quantity in response to the incident light; this quantity is a measure of the amount of light received by the pixel in question.
  • This electrical quantity may be a charge, a voltage, a current or even a time-coded signal, such as a pulse train.
  • Conventional image sensors are manufactured, for example, in charge-coupled device (CCD) technology, e.g. as inter-line transfer CCDs, or in complementary metal-oxide-semiconductor (CMOS) technology. CMOS image sensors typically expose either with an electronic rolling shutter (ERS) or with a global shutter (GS).
  • Pixels of conventional image sensors, regardless of the technology in which they are implemented (CCD or CMOS), are referred to as conventional pixels.
  • GS pixels typically have a signal memory 173, as shown in FIG. 2, in which charges or voltages can be stored.
  • In CMOS pixels, the signal memory is typically a so-called floating diffusion (FD), while charge storage in CCDs often takes place in metal-oxide-semiconductor (MOS) diodes.
  • There are also pixels which are equipped with more than one signal memory, for example to perform so-called correlated double sampling (CDS) or to achieve an extended dynamic range (HDR).
  • The control path 175 does not form a second signal path to a signal memory; it merely connects the photosensitive area to a supply voltage so that charge is removed from the pixel outside the exposure time.
  • There are cameras called monochrome cameras, which cannot distinguish between different colors during image capture. In addition, there are color cameras that can make such a distinction between different colors; for example, they can use an image sensor having a so-called mosaic filter with different colors, as described in US3971065.
  • There are also 3D cameras, which generate so-called distance images or depth images in which distance values are assigned to the pixels; these values represent a measure of the distance between the camera and the object.
  • It is possible for the depth images to be output directly, or for further processing steps to be carried out internally, for example the production of so-called point clouds from 3D coordinates or the interpretation of the depth images, for example as a gesture of a hand.
  • Among these are 3D cameras which perform a distance measurement based on the time of flight of light and are referred to as time-of-flight (ToF) cameras, as disclosed, for example, in DE102011089636A1.
  • Fig. 3 shows a schematic structure of a ToF camera 20, which has a synchronization unit 21 that controls an electronic control device 22 for a light source 23 by means of a control signal 33 in such a way that this light source 23 emits temporally modulated light or light pulses.
  • The emitted light beams 34 and 36 are scattered or reflected by objects 25 and 26 and partly travel back to the camera as object light beams 35 and 37; they arrive delayed because they travel the necessary distance at approximately the speed of light.
  • There they are imaged onto the image sensor 28 by an optical system 27, e.g. a lens.
  • the image sensor 28 is in turn driven by means of a drive signal 32 from the synchronization unit 21 so that it performs a demodulation of the object light beams.
  • From the raw data supplied by the image sensor 28, depth images are then generated in a computing unit 29 which are output via an interface 30 to a transmission line 31.
  • ToF cameras often have special ToF image sensors for measuring distances, as are known, for example, from DE19704496C2, US8115158 or US20120176476A1.
  • These image sensors often work with pixels that are equipped with so-called photonic mixer detectors (PMD) or work on a related principle.
  • These pixels will hereinafter be referred to as demodulation pixels and the image sensors as demodulation sensors.
  • Fig. 4 shows a schematic structure of a ToF demodulation sensor 180 with demodulation pixels 181.
  • The demodulation pixels 181 have the common feature that in each case there is a photosensitive area 182 which is connected to at least two different signal memories 183 and 184 via at least two different signal paths 185 and 186.
  • the read-out electronics for the at least two signal memories are present more than once per pixel.
  • Demodulation pixels are predominantly used together with light sources in so-called continuous-wave operation. Very often, sinusoidally modulated light is used, or light with a pulse-pause ratio of approx. 50%.
  • Such image sensors with demodulation pixels are manufactured especially for ToF applications. They are generally more complex and therefore more expensive than comparable conventional image sensors, or they have fewer pixels at the same price and thus a lower spatial resolution.
  • demodulation pixels require more sophisticated electronics for their characteristic plurality of signal paths than conventional pixels. As a result, they have an increased space requirement compared to the conventional pixels, which leads to a higher resource requirement, for example chip area.
  • the area fraction of the total area of the pixel which is available for the photosensitive area is reduced compared to a conventional pixel.
  • conventional pixels are generally more sensitive to light than demodulation pixels.
  • conventional image sensors are currently produced in far greater numbers than ToF image sensors, resulting in a further price advantage for the conventional image sensors. For these reasons, it would be advantageous to be able to provide conventional image sensors with conventional pixels also for ToF cameras.
  • the conventional pixels in conventional image sensors usually have only one signal path and often only one signal memory. Even with such image sensors, however, ToF range images can basically be recorded.
  • Another method for capturing a ToF image by means of a conventional CMOS image sensor is known from EP1040366B1.
  • That method records three different raw images (referred to there as sensor signals) for determining a distance image: a first image with dark current and ambient light; a second raw image in which, depending on the light transit time, a part of the received light is integrated; and a third signal with a higher integration time.
  • Dark current and ambient light are thus recorded twice, once with a short and once with a long integration time.
  • a pixel architecture can be found, for example, for a CCD image sensor in US8576319A and for a CMOS image sensor in EP2109306A2.
  • The invention has the object of providing a time-of-flight camera which can provide depth data with high measurement quality and high image resolution.
  • To this end, the light source of the time-of-flight camera is controlled to emit at least one light pulse, and the image sensor is controlled to demodulate the received light signal by means of an electrical shutter associated with each pixel of the image sensor. The electrical shutter is clocked (i.e. opened and closed) at least three times in fixed phase relation to the at least one light pulse in order to detect at least three signals, wherein the time duration of each opening of the electrical shutter is equal, and wherein the phase shift between the first opening and the second opening is equal to the phase shift between the second opening and the third opening.
  • The shutter is clocked such that the phase shift between the light pulse and the shutter is varied between the subimages by the different phase positions of the timing.
  • The solution according to the invention can work with only one signal path per pixel of the image sensor, whereby the advantages of conventional image sensors, namely low complexity, low noise, high lateral resolution and high frame rate, can be maintained even in time-of-flight cameras.
  • Due to the inventive possibility of using conventional image sensors in ToF cameras it is possible to record both conventional images and distance images with one and the same camera.
  • Advantageously, color image sensors can also be used. Since the color filters used in the Bayer pattern are usually transparent to infrared light, a distance image can also be recorded with a conventional color image sensor when an infrared light source is used. This has the advantage of making it possible to record both color images and distance images with one and the same ToF camera.
  • the phase shift between the first control signal and the second control signal and the phase shift between the second control signal and the third control signal with respect to the period of the light pulse can each be 120 °.
  • the phase shift between the time of opening of the electrical closing device by the first of the three control signals and the time of delivery of the light pulse with respect to the period of the light pulse can be -60 °.
  • In this case, the time duration of the pulse-shaped control signals for the opening of the electrical shutter with respect to the period of the light pulse can correspond to a phase angle of 180°, and the duration of the light pulse with respect to its period can correspond to a phase angle of 60°.
  • the phase shift between the time of opening of the electrical closing device by the first of the three control signals and the time of delivery of the light pulse with respect to the period of the light pulse can be 0 °.
  • Alternatively, the time duration of the pulse-shaped control signals for the opening of the electrical shutter with respect to the period of the light pulse can correspond to a phase angle of 60°, and the duration of the light pulse can correspond to a phase angle of 180°. This allows a high measurement quality with a smaller measuring range.
  • an advantageous avoidance of the undesired depth aliasing can be achieved by increasing the period duration of the light pulse signal.
  • This enables high measurement quality with low nonlinear errors.
  • Furthermore, the phase shift between the time of opening of the electrical shutter by the first of the three control signals and the time of delivery of the light pulse with respect to the period of the light pulse can be 0°, while the time duration of the pulse-shaped control signals for the opening of the electrical shutter corresponds to a phase angle of 120° and the duration of the light pulse likewise corresponds to a phase angle of 120°.
  • This modification of the phase relationships makes it possible to determine the distance with a particularly low computational outlay and associated lower resources.
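  • For illustration, the timing variants described above can be collected in a small parameter table. The following Python sketch is illustrative only (the variant names and the dict layout are not from the patent); all angles refer to the period tp of the light pulse signal, and in all variants phi1 = phi2 = 120°:

```python
# Timing variants described above, in degrees of the period tp.
# phi0 is the shift between the first shutter opening and the light pulse;
# the variant names are illustrative, the angle values come from the text.
TIMING_VARIANTS = {
    "large_measuring_range": {"phi0": -60, "shutter_deg": 180, "pulse_deg": 60},
    "small_measuring_range": {"phi0": 0, "shutter_deg": 60, "pulse_deg": 180},
    "low_compute":           {"phi0": 0, "shutter_deg": 120, "pulse_deg": 120},
}
```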
  • The proposed time-of-flight camera may include a computing device for calculating the distance information of the three-dimensional image based on the values of the three signals, using a case discrimination over different ranges determined by the mutual size ratios of the values of the three signals.
  • The computing device can advantageously be configured such that the validity of the calculated distance information is determined on the basis of a ratio between signal quality and noise. Additionally or alternatively, the computing device can be designed so that a decision about the validity of the calculated distance information is made based on the degree of saturation of the three signals.
  • a pulse-pause ratio which is smaller or even very much less than 50% can be used for the emitted light signal. This results in a lower sensitivity to ambient light and thus an improved measurement quality. This allows camera applications in strong ambient light, such as sunlight or studio spotlights.
  • The components of the device or time-of-flight camera proposed for achieving the above-mentioned object can be realized individually or jointly as discrete circuits, integrated circuits (e.g. application-specific integrated circuits (ASICs)) or programmable circuits (e.g. field-programmable gate arrays (FPGAs)).
  • the arithmetic unit can be realized by an FPGA as a central component.
  • the steps of the method claim can be realized as a software program or software routine for controlling the processor of a computer device for its execution.
  • Fig. 1 shows a schematic structure of a conventional digital camera;
  • FIG. 3 shows a schematic structure of a ToF camera
  • FIG. 4 shows a schematic structure of a ToF demodulation sensor with demodulation pixels
  • 5 (a) - (d) are schematic timing charts for a ToF camera according to a first embodiment
  • Fig. 6 (a) - (e) are schematic timing charts for explaining the timing for light and control signals according to the first embodiment
  • Figs. 9(a) and (b) show trajectories of S vectors in S space according to the prior art and the first embodiment, respectively;
  • Fig. 14 shows a diagram with signal curves taking into account the law of distance;
  • Fig. 15 is a formula for calculating the depth noise;
  • Fig. 16 is a formula for determining the validity of distance values
  • Fig. 17 is a diagram illustrating an approximation of the noise by means of a simplified approximation function;
  • FIG. 20 shows time charts with a time controller according to a second embodiment
  • FIG. 21 (a) and (b) are diagrams showing resultant waveforms according to the second embodiment
  • Fig. 22 is a formula for determining the distance d in the second embodiment
  • Figs. 23 (a) and (b) are timing charts showing the peak value and average value of a pulse frequency modulated signal at different pulse repetition frequencies;
  • Fig. 25 is a formula for calculating the depth noise in the third embodiment
  • Fig. 26 is a formula for determining the validity of distance values in the third embodiment;
  • FIG. 27 shows time charts with a timing according to a fourth embodiment
  • FIG. 28 is a timing chart showing a timing according to a fifth embodiment
  • Figs. 30(a) and (b) show a trajectory of the S vectors for the fifth embodiment from two different viewing directions;
  • Fig. 31 is a formula for determining the distance d from the signals S0, S1 and S2 for the fifth embodiment;
  • FIG. 32 shows a curve of a real measured distance as a function of the time difference for the fifth exemplary embodiment
  • FIG. 34 is a graph showing resulting waveforms according to the sixth embodiment.
  • Fig. 35 is a formula for determining the distance in the sixth embodiment;
  • Fig. 36 shows timing diagrams with a timing according to a seventh embodiment;
  • FIG. 37 is diagrams showing resultant waveforms according to the seventh embodiment.
  • Figs. 38(a) and (b) show a trajectory of the S vectors for the seventh embodiment from two different viewing directions;
  • Fig. 39 is a formula for determining the distance in the seventh embodiment;
  • Fig. 40 is a formula for calculating the depth noise in the seventh embodiment
  • FIG. 42 are diagrams showing resultant waveforms according to the eighth embodiment.
  • Fig. 43 is a formula for determining the distance in the eighth embodiment.
  • Figs. 5(a) to (d) are timing charts showing waveforms according to a first embodiment of operating a ToF camera of Fig. 3 with an image sensor 28, which is preferably a conventional image sensor. Fig. 5(a) shows the timing for emitting light L from the light source 23 in the camera 20 with two different intensity levels "0" and "1" over time t. For example, at level "1" light is emitted and at level "0" no light is emitted. The light is emitted as a time-limited light pulse 40 having a temporal pulse length tl; after a dead time t0, it is optionally followed, with a period tp, by a second light pulse 60, a third light pulse 61, and so on.
  • the period tp is here equated to 360 °
  • incident light D is detected by the image sensor 28.
  • the associated waveform is also shown in Fig. 5 (a).
  • the incident light initially only consists of the ambient light 43 with the intensity level B assumed to be constant.
  • The incident light pulse 42 is added to this ambient light, resulting in the time profile of D shown. Between emission and incidence, this pulse has traveled from the light source 23 to the objects 25, 26 and back to the image sensor 28 at approximately the speed of light, and is therefore delayed by a time difference td.
  • Due to the propagation in space, in accordance with the distance law, and due to the incomplete reflection at the object, the light pulse is also reduced in its intensity, so that the received light pulse 42 is generally less intense than the emitted light pulse 40.
  • the image sensor 28 is driven by the synchronization unit 21 by means of the drive signal 32, so that it carries out a demodulation of the object light beams.
  • This control signal 32 is formed in the following exemplary embodiments by three control signals C0, C1 and C2.
  • The image sensor 28 is controlled by means of an electrical shutter of the type mentioned above.
  • In Fig. 5(b), the control signal C0 for controlling the electrical shutter of the image sensor 28 shown in Fig. 3 is plotted over the time t for obtaining a first raw image.
  • The control signal C0 assumes the states "0" and "1"; in the "0" state the electrical shutter is closed and the image sensor 28 cannot receive light, while in the "1" state the shutter is open and the image sensor 28 can receive light.
  • To obtain the first raw image, the control signal C0 is applied in parallel to the emitted light pulses 40, 60 and 61 with the fixed phase relation phi0 shown in Fig. 5(a).
  • the opening time is 180 °.
  • a signal component 47 of the ambient light 43 is detected by the image sensor 28 during the opening time of the electrical shutter, while another signal component 48 of the ambient light 43 is not detected outside the opening time of the electrical shutter.
  • During the opening time of the electrical shutter, a signal component 45 of the incident light pulse 42 is additionally detected, while another signal component 46 of the incident light pulse is not detected. This can be realized, for example, by removing the charge from the photosensitive region 172 via the path 175 of Fig. 2 (acting as the electrical shutter, not shown in detail) while the control signal C0 is "0", and by cumulatively supplying the charge from the photosensitive region 172 to the memory 173 via the signal path 174 when the control signal C0 assumes the "1" state.
  • From the light detected in this way, a signal is formed, for example a charge, a voltage, a current or a digital number. If several light pulses 40, 60 and 61 have been emitted and the electrical shutter has been opened several times, the signals of the individual shutter openings are added or accumulated in analog form as charges in the memory 173, i.e. as charge packets, which is symbolized by the adding function 49, and finally form a signal S0. This signal is assigned to one pixel each. Together with further signals S0 from other pixels, a first raw image is produced for the signal S0.
  • the adding function 49 could also be realized in the arithmetic unit 29 shown in FIG. 3, if signals of the individual shutter openings are separately digitized and processed. The same applies to the (symbolic) adding functions 54 and 59 mentioned below.
  • In Fig. 5(c), a second control signal C1 for controlling the electrical shutter over the time t for obtaining a second raw image is shown.
  • the terms used correspond to those of Fig. 5 (b).
  • another phase relation is used.
  • the opening time is 180 °.
  • A signal is again formed from the incident light, and if several light pulses 40, 60 and 61 have been emitted and the electrical shutter has been opened several times, the signals of the individual shutter openings are added or accumulated in analog form, symbolized by the adding function 54, as charges in the memory 173, i.e. as charge packets, and finally form a signal S1.
  • This signal is also assigned to one pixel each. Together with further signals S1 from other pixels, a second raw image is produced for the signal S1.
  • the adder function 54 could also be implemented in the arithmetic unit 29 shown in FIG. 3 if signals of the individual shutter openings are separately digitized and processed.
  • In Fig. 5(d), a third control signal C2 for controlling the electrical shutter over the time t is shown for obtaining a third raw image.
  • the terms used correspond to those of Figs. 5 (b) and (c).
  • Here, the phase relation phi2 is 120°, as is phi1.
  • the opening time is again 180 °.
  • the adding function 59 could also be realized in the arithmetic unit 29 shown in FIG. 3, if signals of the individual shutter openings are separately digitized and processed.
  • The control signals C0, C1 and C2 have the same period tp as the light L.
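  • As a concrete illustration of this timing, the following sketch generates the binary waveforms L, C0, C1 and C2 of the first embodiment on a discretized time axis (one sample per degree of tp). It is only a simulation aid with assumed helper names; the patent specifies the timing purely in phase angles:

```python
import numpy as np

def square_wave(n, period, start_deg, width_deg):
    """Binary waveform over n samples: '1' inside a window of width_deg
    degrees beginning at start_deg within each period (wrap-around allowed)."""
    t_deg = (np.arange(n) % period) / period * 360.0
    lo = start_deg % 360.0
    hi = (start_deg + width_deg) % 360.0
    return (t_deg >= lo) & (t_deg < hi) if lo < hi else (t_deg >= lo) | (t_deg < hi)

period = 360                    # samples per period tp (1 sample = 1 degree)
n = 3 * period                  # three periods -> light pulses 40, 60 and 61

L  = square_wave(n, period, 0, 60)            # light pulse, 60 deg wide
C0 = square_wave(n, period, -60, 180)         # shutter window, phi0 = -60 deg
C1 = square_wave(n, period, -60 + 120, 180)   # shifted by phi1 = 120 deg
C2 = square_wave(n, period, -60 + 240, 180)   # shifted by a further phi2 = 120 deg
```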
  • Figs. 6 (a) to (e) are diagrams for explaining the occurrence of the signals by using the timing for light and control signals. Specifically, Figs. 6 (a) to (e) show how the phase relationship between the incident light D and the control signal C results in a waveform of the signal S over the time difference td.
  • Here, the incident light pulse 70 arrives such that a signal portion 71 of the received light pulse lies before the rising edge 73 of the control signal C and is not received, while another signal portion 72 lies behind the rising edge 73 and is accordingly received.
  • The later the incident light pulse 70 arrives, the smaller the non-received signal component 71 of the incident light pulse becomes and the greater the received signal component 72 becomes.
  • an increasing profile 74 of the signal S results over the time difference td.
  • Next, the incident light pulse 75 arrives while the control signal C is "1" during the entire time of arrival, so that the incident light pulse is received as the completely received light pulse 76.
  • In this case, a constant high value 77 results for the signal S.
  • Later, the incident light pulse 78 arrives such that a signal portion 79 of the received light pulse lies before the falling edge 81 of the control signal C and is received, while another signal component 80 lies behind the falling edge 81 and is accordingly not received.
  • In the case shown, a falling curve 82 of the signal S over the time difference td results.
  • Finally, the incident light pulse arrives at a time when the control signal continuously assumes the state "0", so that the received light pulse 84 is completely suppressed. As long as this is the case, the signal S assumes a constant low value 85.
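  • The behavior described with reference to Fig. 6 can be modeled compactly: the signal S is the temporal overlap between the delayed light pulse and the shutter window, plus the ambient contribution integrated over the whole opening time. The following sketch assumes idealized rectangular pulses; the function and parameter names are illustrative:

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the overlap of two intervals (zero if they are disjoint)."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def signal(td, pulse_len, win_start, win_len, pulse_amp=1.0, ambient=0.0):
    """Idealized signal S for one shutter opening: the received portion of a
    rectangular light pulse delayed by td, plus ambient light over the full
    opening time. Sweeping td reproduces the rising course 74, the constant
    high value 77, the falling course 82 and the constant low value 85."""
    received = overlap(td, td + pulse_len, win_start, win_start + win_len)
    return pulse_amp * received + ambient * win_len
```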
  • Figs. 7(a) and (b) are diagrams showing waveforms for the first embodiment. More specifically, Fig. 7(a) shows the signals S0, S1 and S2, and Fig. 7(b) shows the value for the distance d determined therefrom.
  • The waveform shown in Fig. 7(a) is obtained for the signals S0, S1 and S2.
  • The signals thereby move between a background signal Sb, which results only from the contribution of the received ambient light 58 to the signals S0, S1 and S2, and the peak signal Sp, which is the sum of the contributions of the received ambient light and the received light pulses 51 and 52 to the signals S0, S1 and S2.
  • An S-space can be defined as the space which is spanned by unit vectors along S0, S1 and S2 as a basis.
  • The value triples of the signals S0, S1 and S2 define an S-vector having as components the values of the signals S0, S1 and S2.
  • If the time difference td exceeds the period tp, then, for example, the region 96 contains values for the signals S0, S1 and S2 which cannot be distinguished from those in the region 90, and the region 97 contains values which cannot be distinguished from those in the region 91.
  • This effect is called depth aliasing. If one looks at the origin of the S-space along the unit diagonal, this division corresponds to a segmentation around the unit diagonal into six pie-shaped segments of 60° each.
  • a periodic repetition of the proportional increase in the distance d ensues.
  • both the precipitous drop 101 and the periodic repetition are measurement errors due to depth aliasing.
  • The proportionality factor for the proportional increase in the first region 100 results from the speed of light c and the pulse length tl, the latter determining the slope of the respectively increasing or decreasing signals within the regions in Fig. 7(a). The proportionality factor is c · tl / 2; for example, a pulse length tl of 30 ns corresponds to about 4.5 m of distance per region.
  • Fig. 8 shows a formula used in the arithmetic unit 29 for determining the distance d from the signals S0, S1 and S2.
  • the distinction between regions 90 to 95 shown in FIG. 7 (a) results in a linear signal curve within each region, so that in each region the distance d can be determined using a linear equation.
  • Fig. 8 gives a formula by means of which linear equations for the regions 90 to 95 can be combined by means of a case distinction into a single piecewise linear formula.
  • d is the distance to be measured
  • c is the speed of light
  • tl is the pulse length.
  • The six case discriminations by inequalities are made to determine to which of the six regions 90 to 95 the S-vector belongs and with which linear equation a distance is to be calculated from its components. For each region, a linear equation is given. By adding the numbers 0, 1, 2, 3, 4 and 5, continuity in the determined distance d is achieved.
  • In this way, the value for the distance d shown in Fig. 7(b) can be correctly determined.
  • The values of the signals S0, S1 and S2 are thus supplied to an operator which is constructed analogously to a hue operator and determines a rotation angle about the spatial diagonal of the S0-S1-S2 space. By means of a proportionality constant, this angle is then converted into the distance information.
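  • The hue-analogous evaluation can be sketched in code. The following is not the literal formula of Fig. 8 (which is not reproduced in this text) but a piecewise-linear, hue-style operator with the same structure: six inequality cases, one linear term per case, and the segment offsets 0 to 5 for continuity. Ambient light, which shifts all three signals equally, cancels out in the differences:

```python
C_LIGHT = 299_792_458.0  # speed of light in m/s

def hue_angle(s0, s1, s2):
    """Piecewise-linear 'hue' of the S vector (S0, S1, S2) about the unit
    diagonal, in units of 60-degree segments (0 <= h < 6). A constant
    ambient offset on all three signals cancels out."""
    smax, smin = max(s0, s1, s2), min(s0, s1, s2)
    span = smax - smin
    if span == 0.0:
        raise ValueError("S0 = S1 = S2: no pulse contribution, distance undefined")
    if smax == s0:
        return ((s1 - s2) / span) % 6.0
    if smax == s1:
        return (s2 - s0) / span + 2.0
    return (s0 - s1) / span + 4.0

def distance(s0, s1, s2, tl, d_offset=0.0):
    """Distance from one S triple; tl is the pulse length in seconds. Each
    60-degree segment corresponds to c*tl/2 of distance; d_offset plays the
    role of the minimum distance dmin of a shifted measuring range."""
    return d_offset + hue_angle(s0, s1, s2) * C_LIGHT * tl / 2.0
```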
  • the proposed method makes it possible to achieve a particularly large measuring range for a given pulse width t1.
  • FIGS. 9 (a) and (b) show a trajectory of S vectors in S space according to the prior art and the first embodiment, respectively.
  • In Fig. 9, an S-space is shown as a cube and the trajectory of the S-vectors is plotted with increasing time difference td.
  • Fig. 9 (a) shows a trajectory according to the prior art
  • Fig. 9 (b) shows a trajectory according to the first embodiment.
  • One way of measuring the time difference td by means of three signals S0, S1 and S2 is to measure the ambient light as a constant signal S2, to partially receive an incident light pulse and measure it as a rising signal S1, and to measure the ambient light plus the full incident light pulse as a constant signal S0. This results in a linear course in S-space, as shown by way of example in Fig. 9(a); the trajectory drawn here assumes an ambient light of zero.
  • Fig. 9 (b) shows the trajectory of the S vectors in S space resulting from the waveform shown in Fig. 7 (a) assuming that the intensity of the ambient light is zero.
  • With the first embodiment, by contrast, a trajectory six times as long is achieved.
  • the selected trajectory has a maximum length under the given boundary conditions.
  • the given boundary conditions consist of the requirement that the time difference td and thus the distance d can be determined independently of the intensity of the ambient light and of the intensity of the incident light pulse. This particularly long trajectory makes it possible to achieve a particularly large measuring range.
  • It can also be advantageous if the measuring range, which covers a certain range of distances given by the period tp, does not start at a distance d of zero from the camera but at a different, selectable distance.
  • A possible reason for this is that objects from which distance images are to be created, for example parcels on a conveyor belt at which a camera mounted above is directed from above, cannot exceed a certain height, so that the measuring range should be optimally adapted to the actually occurring distances.
  • Another possible reason is that objects in too close proximity saturate the signals of the image sensor, so that the distance cannot be measured correctly anyway; it is then advantageous to adapt the measuring range to correctly measurable distances. This circumstance will be explained in more detail in the discussion of Fig. 14.
  • Figs. 10 (a) and (b) show waveforms when the measuring range is shifted to a minimum distance dmin by 120 °, as an example of a multiple of 60 °.
  • The waveform in Fig. 10(a) is unchanged with respect to the waveform in Fig. 7(a).
  • However, distance values shifted with respect to those shown in Fig. 7(b) are calculated here.
  • the range of the distance ranges from a minimum distance dmin, which is greater than zero, to almost a maximum distance dmax, which is advantageously greater than dp.
  • The small values of the time difference td, which belong to a rising distance value 103 lying ahead of the measuring range, are then subject to depth aliasing and, according to the changed task, do not occur.
  • The signals are converted correctly into an increasing distance d in a first region 105. Behind the falling edge 106 of the distance d, a further increase of the distance occurs in a second region 107, which, however, is subject to depth aliasing.
  • Fig. 11 shows a formula used in the arithmetic unit 29 for determining the distance d from the signals S0, S1 and S2 with a measuring range shifted by the two signal regions 90 and 91.
  • With the shifted measuring range shown in Fig. 10, the formula according to Fig. 11 can be used. Compared to Fig. 7, it can be seen that the signal regions 90 and 91 result in a different distance interpretation for the same signals S0, S1 and S2, which is reflected in the formula in the offset values "6" and "7". Generally speaking, for the regions affected by a shifted interpretation of the distance, the offset value is increased by "6". In principle, this process can also be carried out several times, so that a minimum distance dmin beyond dp can also be realized.
  • Figs. 12(a) and (b) show waveforms when the measuring range is shifted to a minimum distance dmin by 150°, as an example of a value that is not a multiple of 60°.
  • Note that the signal curves shown here are represented as a function of the time difference td, i.e. for each value of td a separate timing diagram must be run, which then supplies the signal value as a result.
  • This shift of the measuring range allows various distance interpretations to be made in different parts of a distance range.
  • the left-hand part of the signal region 92 is interpreted differently than the right-hand part, and a falling edge 108 of the value for the distance d results in the middle of the signal region 92.
  • the correct measurement range 109 begins within the signal range 92.
  • Fig. 13 shows a formula used in the arithmetic unit 29 for determining the distance d from the signals S0, S1 and S2 with a measuring range shifted by the two signal regions 90 and 91 and a part of the signal region 92.
  • This formula implements the distance interpretation shown in Fig. 12.
  • Fig. 14 shows a signal course for the signals S0, S1 and S2, taking into account the distance law and building on the aforementioned explanations.
  • Here, the waveform for the signals S0, S1 and S2 is shown with increasing distance d.
  • The distance law superimposes a 1/r² function on the waveform of Fig. 10(a).
  • the signal moves between the background signal Sb and the saturation signal Ss.
  • the latter arises from the fact that image sensors usually can store only a finite maximum signal in the pixel, which is referred to here as the saturation signal Ss.
  • Fig. 15 shows a formula used in the calculation unit 29 for calculating the depth noise Nd of the distance determined using the formula according to Fig. 8.
  • Here, NS0 is the noise of the signal S0, NS1 the noise of the signal S1, NS2 the noise of the signal S2, tl the pulse length and c the speed of light.
  • the operator max () denotes a maximum operator and min () a minimum operator.
  • The noise values NS0, NS1 and NS2 can either be measured or calculated. In the latter case, for example, the formulas of the standard EMVA 1288, release A3.0, can be used. There, the temporal noise is calculated from the dark noise of the image sensor, from the shot noise of the charge carriers and from the so-called conversion gain. If necessary, the quantization noise can also be considered.
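  • A minimal sketch of such a noise calculation, following the EMVA 1288 temporal-noise model (dark noise and shot noise added in quadrature; the quantization term is omitted and all parameter names are illustrative):

```python
import math

def signal_noise_e(mean_signal_dn, dark_offset_dn, gain_e_per_dn, dark_noise_e):
    """Temporal noise of one signal in electrons: the photo-charge is estimated
    from the mean signal (in digital numbers, DN) via the conversion gain, and
    its shot-noise variance equals its mean (Poisson statistics)."""
    electrons = max(mean_signal_dn - dark_offset_dn, 0.0) * gain_e_per_dn
    return math.sqrt(dark_noise_e ** 2 + electrons)
```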
  • the distance noise Nd obtained with the first embodiment is directly proportional to the pulse length t1.
  • A simple way to achieve a high-quality depth measurement with low noise is therefore to select a short pulse length tl. Since the length of the measuring range from "0" to dp, or from dmin to dmax, is in turn proportional to tl, the measuring range is shortened when the pulse length tl is shortened. Since, as mentioned above, the measuring range in the first exemplary embodiment is particularly large, this is less of a restriction here. In addition, as explained with reference to Figs. 10 to 13, the measuring range can be positioned optimally, which makes it possible to shorten the pulse length tl and thus to reduce the distance noise Nd.
  • Fig. 16 shows a formula used in the arithmetic unit 29 for determining the validity of range values calculated from the signals S0, S1 and S2.
  • S0, S1 and S2 are the already explained signals as digital values
  • o is the offset value of the camera as digital value
  • Nd is the dark noise of the camera, which can be determined according to EMVA 1288 and is given in electrons (e-)
  • tl the pulse length in a unit of time
  • c the speed of light in a unit of length per time
  • Lim is a freely selectable parameter for setting a limit value for the depth noise in units of length, e.g. meters.
  • the formula of FIG. 16 is constructed of terms 210 to 213.
  • Term 210 provides a limit value, term 211 a proportionality factor, term 212 a measure of the signal quality in digital numbers, and term 213 an estimate of the noise.
  • The formula can be explained as follows: if the signal quality exceeds the noise by more than the adjustable limit value, the corresponding measured value is valid. This makes it possible to differentiate between valid and invalid measured values.
  • Another criterion for the validity of measured values is that all three signals S0, S1 and S2 are not saturated, that is to say are smaller than the saturation signal Ss.
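  • Both validity criteria can be combined in a few lines. The sketch below mirrors the structure described for Fig. 16 (a signal-quality measure compared against a scaled noise estimate) together with the saturation test; it does not reproduce the exact terms 210 to 213 of the patent formula, and all names are illustrative:

```python
def measurement_valid(s0, s1, s2, noise_estimate, limit, saturation):
    """A distance value is treated as valid if (1) the pulse contribution,
    taken here as the swing max - min of the three signals, exceeds the
    noise estimate by the adjustable factor 'limit', and (2) none of the
    three signals is saturated."""
    swing = max(s0, s1, s2) - min(s0, s1, s2)
    unsaturated = max(s0, s1, s2) < saturation
    return unsaturated and swing > limit * noise_estimate
```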
  • Fig. 17 shows a diagram with an approximation of the noise by means of a simplified approximation function.
  • the depth noise Nd over the distance d is shown as graph 160.
  • This course was calculated from the formula according to Fig. 15. Since that formula is complex and leads to a high computational effort in practical use, an alternative approximation, graph 161, was calculated with an approximate formula.
  • This approximation formula results if, instead of the three noise values NS0, NS1 and NS2, only one noise value is used for the mean value of the signals S0, S1 and S2. This approach is based on the formula according to Fig. 16.
  • Fig. 18 shows a time course of an ideal and a real light pulse.
  • The ideal light pulse 140 has a steep turn-on edge 141, in which the intensity rises from "0" to "1", and a steep turn-off edge 142, in which the intensity drops from "1" to "0", without any relevant rise or fall time being needed.
  • The real light pulse 143, by contrast, has a gradual turn-on edge 144 and a gradual turn-off edge 145 and thus generates a temporal intensity curve that differs significantly from that of the ideal light pulse 140.
  • This deviation leads to deviations of the real course of the signals S0, S1 and S2 as a function of the time difference td from the signal profile shown, for example, in Fig. 7(a), and consequently also to deviations of the distance d determined therefrom, which is shown in Fig. 7(b).
  • FIG. 19 shows a curve of a real measured distance d as a function of the time difference td.
  • A real course of the measured distance d, determined by measurement, has non-linear deviations compared to the ideal proportional increase in the region 100 in Fig. 7(b).
  • The measured distance can be linearized by means of a non-linear correction function. This can be done, for example, in analytical form using an analytic function f(d) whose input variable is the measured distance d and which yields a linearized distance as a result.
  • Such a function can be determined, for example, by mathematical modeling by means of a periodic function in which the selectable parameters are adjusted by a fit, i.e. an iterative approximation method.
  • Alternatively, a numerical correction function can be used which maps the measured distance d to a correct, linearized distance.
  • the distance d can be measured for a plurality of known locations and a memory table can be created from the known correct distances of these locations and the measured distance.
  • the arithmetic unit 29 in FIG. 3 can then read out a correct distance value for a measured value of a distance d.
  • In addition, an interpolation between two measuring points can be carried out; the interpolation can be, for example, linear, quadratic, cubic or of even higher order.
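  • Such a table-based correction with linear interpolation could look as follows; build_correction_table and correct_distance are assumed helper names, and the calibration data must come from measurements at known distances:

```python
import bisect

def build_correction_table(measured, true):
    """Calibration table from reference measurements: 'measured' holds the
    distances reported by the camera at known locations, 'true' the known
    correct distances. Returned as two sorted, parallel lists."""
    pairs = sorted(zip(measured, true))
    return [m for m, _ in pairs], [t for _, t in pairs]

def correct_distance(d, table):
    """Linearize a measured distance d by linear interpolation in the table;
    values outside the calibrated range are clamped to the table ends."""
    xs, ys = table
    i = bisect.bisect_left(xs, d)
    if i <= 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (d - x0) * (y1 - y0) / (x1 - x0)
```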
  • Such a non-linear correction function can be used to correct measurement errors that result from deviations of the pulse shape of a real light pulse from the pulse shape of an ideal light pulse.
  • Fig. 20 shows waveforms for a timing according to a second embodiment.
  • This timing has the same pulse length tl, the same opening time ts and the same phase shifts phi1 and phi2 as the timing shown in Fig. 5, whereas the dead time t0, the phase shift phi0 and the period tp are changed.
  • The values for the phase shifts are used here in such a way that their values in degrees still refer to the period tp of the original definition in the first exemplary embodiment.
  • With the increased value for the period tp, the light pulses follow one another more than 360° apart, while the duration of a light pulse is still 60°, the shutter opening time is still 180°, and phi1 and phi2 are still 120°.
  • Figs. 21(a) and (b) show waveforms of the signals S0, S1 and S2 resulting from the timing of the second embodiment. These waveforms have signal regions 220 to 229. The signal regions 223 to 226 of Fig. 21(a) qualitatively correspond to the signal regions 90 to 93 of Fig. 7(a), respectively, but shifted along the axis of the time difference td. In addition, new signal regions 220, 221, 222, 227, 228 and 229 appear, which have a fundamentally different signal course.
  • This curve has two regions 154 and 158 in which no distance can be calculated, because the three signals S0, S1 and S2 are constant and identical there, so that no equation can be solved for the distance on the basis of them.
  • The curve also has two regions 155 and 157 with a piecewise constant value for the distance d, and a region 156 in which the value for the distance increases linearly with an increasing time difference td and in which the time difference td is correctly converted into a value for the distance d.
  • FIG. 22 shows a distance calculation formula used in the arithmetic unit 29 for the second embodiment.
  • The formula is largely identical to that given in Fig. 8, but an additional offset value dmin has been added, which results from the value of phi0; in the illustration of Fig. 20, this value differs significantly from that in Fig. 5.
  • The second embodiment has the following advantages over the first embodiment. Since the period tp in Fig. 20 is significantly greater than in Fig. 5, a periodicity of the value of the distance d, as shown for example in Fig. 7, occurs only at significantly higher values of the time difference td and thus only at significantly greater distances between the camera 20 and the objects 25 and 26 of Fig. 3. In practice, it is even possible to choose the value for the period tp so great that practically no depth aliasing occurs: the aliasing is shifted to such great time differences td, and thus such great distances, that due to the distance law practically no measurable light returns to the camera 20 from this distance, or the light is so strongly attenuated that the values determined from it can be recognized as invalid, for example by using the formula shown in Fig. 16.
  • The design of the profile with the regions 154, 155, 157 and 158 in Fig. 21(b) achieves uniqueness of the distance values determined in the linear value range 156. That is, if a value for the distance is measured within the range 156, it can be ensured that this value is also correct. If a value from the value range 155 or 157 is measured, it can be seen that the measured value lies outside the linear value range 156, and it is even recognizable whether it lies in the value range 155 before, or in the value range 157 behind, the linear value range 156.
  • A disadvantage is that in the second embodiment the linear value range with the correctly determined distance d, which includes the four signal regions 223 to 226, is shorter than in the first embodiment, where it includes the six signal regions 90 to 95.
  • Figs. 23(a) and (b) show peak and average waveforms of a pulse-frequency modulated signal at different pulse repetition rates. From these waveforms, a further advantage of the first embodiment over the prior art, and of the second embodiment over the first, can be seen. They show the relation between peak value and mean value for a pulse-width or pulse-frequency modulated signal.
  • the peak value Pmax and the mean value Pmean of such a signal are linked via the pulse-duty ratio (duty cycle).
  • At a pulse-pause ratio of 50%, as in Fig. 23(a), the average value is 50% of the peak value; at a pulse-pause ratio of 1/6, as in Fig. 23(b), the average value is 1/6 of the peak value.
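  • In code, this relation is simply Pmean = duty_cycle · Pmax; a worked check of the two cases above:

```python
def mean_power(peak_power, duty_cycle):
    """Mean optical power of a rectangular pulse train."""
    return peak_power * duty_cycle

print(mean_power(1.0, 0.5))      # 0.5    -> 50% duty cycle, Fig. 23(a)
print(mean_power(1.0, 1.0 / 6))  # ~0.167 -> 1/6 duty cycle, Fig. 23(b)
```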
  • ToF cameras usually work with light sources that emit wavelengths from the near-infrared range (NIR), which are invisible to the human eye.
  • coherent or non-coherent radiation is used.
  • The usable power is limited in each case, for example by the requirements set out in Directive 2006/25/EC.
  • For these requirements, the mean value is the decisive factor; that is, the light source 23 must not exceed a certain average power.
  • it is desirable to emit the highest possible light output so that the light emitted by the light source 23 is brighter or at least of the same order of magnitude as the ambient light.
  • Since the limits relate primarily to the average power, while the ratio of the signal to the ambient light is determined by the peak power, the performance of a ToF camera can be increased by choosing a low duty cycle.
  • In this way, a high peak value and a low mean value of the light intensity are achieved at the same time, as a result of which both of the aforementioned requirements can be met jointly in an advantageous manner.
  • a particularly high peak value for the intensity can be achieved in comparison with the first exemplary embodiment, with a mean value that is particularly low at the same time.
  • this embodiment is suitable for the use of so-called pulse laser diodes, which are usually optimized for a pulse-duty ratio of 1/1000.
  • This is a great advantage, in particular compared with ToF cameras using demodulation image sensors. The advantage is particularly pronounced where the intensity of the ambient light is high, for example in strong sunlight or under strong studio spotlights.
  • Fig. 24 shows waveforms for a timing according to a third embodiment. This is a modification of the first exemplary embodiment according to Fig. 5.
  • the advantage of the third embodiment is that a larger ratio between the pulse length t1 and the period tp can be realized.
  • the customary pulsable light sources for example light-emitting diodes, laser diodes or even pulse laser diodes, have a finite switch-on time and a finite switch-off time, so that their pulse length can not be reduced below a certain level.
  • As a result, a certain minimum size of the measuring range cannot be undershot, and thus only a limited measurement accuracy can be achieved.
  • The third embodiment makes it possible to reduce the measuring range by a further factor of three and thus to achieve a further increase in measurement accuracy.
  • FIG. 25 shows a formula used in the arithmetic unit 29 for calculating the depth noise Nd in the third embodiment.
  • Compared to Fig. 15, the first factor in the denominator has a "6" instead of a "2", and thus the noise is in principle smaller by a factor of "3".
  • With the same exposure time, the actually achievable reduction of the noise is somewhat smaller, since, with the short opening times of the shutter in comparison with the light pulses, only a portion of the light can be received, and thus the signals S0, S1 and S2 are smaller than in the first exemplary embodiment. A mitigating effect is that, with the shorter opening time, a smaller part of the ambient light is also received.
  • FIG. 26 shows a formula used in the arithmetic unit 29 for determining the validity of distance values obtained from the signals SO, S1 and S2 of the third embodiment.
  • this formula is derived from the formula for the depth noise Nd using a simplification.
  • The different factors in the formula for the depth noise, namely the value "6" in Fig. 25 and the value "2" in Fig. 15, give a different factor in the numerator of the term 171 in Fig. 26, namely "108" instead of "12".
  • the validity of measured values can also be determined for the third exemplary embodiment.
  • the saturation can be used as a criterion.
  • FIG. 27 shows waveforms for a timing according to a fourth embodiment.
  • The fourth embodiment can be derived from the third by increasing the period tp, just as the second was derived from the first, so that the same formulas and measures can also be used for the fourth exemplary embodiment.
  • The fourth embodiment has the same advantages over the third as the second has over the first.
  • Fig. 28 shows waveforms for a timing according to a fifth embodiment.
  • Here, both the pulse length tl and the opening times of the control signals C0, C1 and C2 are each 180°, phi0 is 0°, and phi1 and phi2 are each 120°.
  • In the third and fourth embodiments, a particularly short opening time of the shutter was used in order to achieve a high measurement accuracy with a low noise level.
  • Here, the opening time of the shutter is extended by a factor of "3". As a result, a larger portion of the emitted light can also be received, resulting in a further improvement of the measurement accuracy.
  • Fig. 29 shows waveforms of the signals S0, S1 and S2 for the timing according to the fifth embodiment.
  • the course of the signals results from the timing according to FIG. 28 with the considerations mentioned in the discussion of FIG. 6 and differs significantly from that in FIG. 7.
  • Figs. 30 (a) and (b) show the trajectory of the S vectors in S space for the fifth embodiment from different angles.
  • the fifth embodiment is based on the idea of guiding the signal course in S-space on a polygonal line which comes as close as possible to a circular path around the unit diagonal.
  • An S-space having such a polygon line consisting of lines 130 to 135 is shown in Figs. 30 (a) and (b).
  • For a better understanding of the three-dimensional course, Figs. 30(a) and (b) show the same polygonal line from viewing angles rotated by 90° relative to one another.
  • the lines connect points on the outer surfaces of the cube, which represents the S-space.
  • Fig. 31 shows a formula used in the arithmetic unit 29 for determining the distance d from the signals S0, S1 and S2 for the fifth embodiment.
  • Fig. 32 shows a course of the measured distance d, determined by measurement, as a function of the time difference td for the fifth exemplary embodiment.
  • The non-linear deviations are particularly small here. This is due to the fact that the effects of the deviations of the real turn-on edge from the ideal turn-on edge and of the real turn-off edge from the ideal turn-off edge compensate each other in a particularly favorable manner. The remaining non-linear deviation can be corrected by the same means as proposed for the first embodiment.
  • The noise for the fifth embodiment may be determined by the formula of Fig. 25. Since, due to the longer opening time of the shutter, more light can be received than in the third embodiment, the denominator is generally larger, so that the noise tends to be smaller.
  • Fig. 33 shows signals of a timing according to a sixth embodiment.
  • FIG. 34 shows a profile of the signals S0, S1 and S2 corresponding to the illustration in FIG. 12 at a time control according to the sixth exemplary embodiment.
  • FIG. 35 shows a formula used in the arithmetic unit 29 for determining the distance d in the sixth embodiment.
  • This formula indicates how a correct distance d can be determined from the signals shown in Fig. 34. Compared to the formula in Fig. 31, significant differences are evident both in the range-defining conditions and in the linear terms which are valid within the ranges; these are due to the signal curve of Fig. 34. It can be seen that with the sixth embodiment, in comparison to the second and fourth embodiments, a particularly large measuring range can be covered with particularly good linearity, without losing the advantages listed there.
  • Fig. 36 shows waveforms of a timing according to a seventh exemplary embodiment.
  • Here, the pulse length and the length of the opening times of the shutter are each 120°, phi1 and phi2 are also 120°, and phi0 is chosen here as 0°.
  • Fig. 37 shows the course of the signals S0, S1 and S2 at a timing according to the seventh embodiment.
  • The signal profile shown in Fig. 37 results for the seventh exemplary embodiment on the basis of the considerations discussed with reference to Fig. 6. In contrast to the previous exemplary embodiments, only three different signal regions 200, 201 and 202 result here.
  • The fourth signal region 203 already represents a periodic repetition of the first signal region 200. The smaller number of regions reduces the number of case distinctions and thus also the calculation effort in the arithmetic unit 29 of Fig. 3 for determining the distance.
  • Figs. 38 (a) and (b) show the trajectory of the S vectors in S-space for the seventh embodiment from different angles.
  • the seventh exemplary embodiment is based on the idea of guiding the signal course in S-space on a polygonal line whose trajectory can be described mathematically as simply as possible.
  • such a trajectory forms, for example, a triangle which, together with the basis vectors of the S-space, spans a triangular pyramid.
  • Figs. 38 (a) and (b) show, for a better spatial understanding, exactly the same polygonal line from viewing angles rotated by 90° relative to one another.
  • as in FIG. 9 and FIG. 30, it is assumed in this illustration that no ambient light is present.
  • FIG. 39 shows a formula used in the arithmetic unit 29 for determining the distance d from the signals S0, S1 and S2 at a timing according to the seventh embodiment.
  • compared with the formulas in FIGS. 8, 11, 13, 22 and 31, it is noticeable that this formula has a simpler structure and therefore, particularly in a hardware realization of the arithmetic unit 29 according to FIG. 3, for example in a Field-Programmable Gate Array (FPGA), requires particularly few resources.
  • FIG. 40 shows a formula used in the arithmetic unit 29 for calculating the depth noise Nd from the signals S0, S1 and S2 for the seventh embodiment.
  • the parameters NS0, NS1 and NS2 characterize the noise values of the signals S0, S1 and S2, which can be measured directly or calculated according to the formulas of the EMVA 1288 standard. From this formula, a criterion for the validity of measured values can be derived with the same considerations that were discussed in connection with FIGS. 16 and 17; a sketch of such an error propagation follows below.
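The formula of FIG. 40 itself appears only in the drawings. As a hedged illustration of the principle, the following Python sketch propagates the signal noise values NS0, NS1 and NS2 through a depth operator by first-order Gaussian error propagation and applies a simple validity threshold. The operator depth() used here is the hue-analogous angle operator sketched further below, not necessarily the exact formula of FIG. 39, and the threshold nd_max is a user-chosen parameter.

```python
import math

def depth(s0, s1, s2, k):
    """Hue-analogous depth operator (illustrative; see the sketch below)."""
    return k * math.atan2(math.sqrt(3.0) * (s1 - s2), 2.0 * s0 - s1 - s2)

def depth_noise(s0, s1, s2, ns0, ns1, ns2, k, eps=1e-3):
    """First-order error propagation: Nd^2 = sum_i (dd/dSi * NSi)^2.
    Partial derivatives are taken numerically; near the atan2 wrap the
    central differences are unreliable, just as any piecewise formula is
    near a region boundary."""
    dd0 = (depth(s0 + eps, s1, s2, k) - depth(s0 - eps, s1, s2, k)) / (2 * eps)
    dd1 = (depth(s0, s1 + eps, s2, k) - depth(s0, s1 - eps, s2, k)) / (2 * eps)
    dd2 = (depth(s0, s1, s2 + eps, k) - depth(s0, s1, s2 - eps, k)) / (2 * eps)
    return math.sqrt((dd0 * ns0) ** 2 + (dd1 * ns1) ** 2 + (dd2 * ns2) ** 2)

def measurement_valid(nd, nd_max):
    """Validity criterion: accept a pixel only if its depth noise stays
    below the chosen threshold nd_max."""
    return nd < nd_max
```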
  • FIG. 41 shows waveforms for a timing according to an eighth embodiment.
  • Fig. 42 shows waveforms of the signals S0, S1 and S2 at a timing according to the eighth embodiment.
  • the waveform shown in Fig. 42 results as a function of the time difference td.
  • the distance d can be determined correctly within the regions 241 and 242; within the regions 240 and 243 it can only be determined that the object lies in front of or behind the measuring range; and within the region 244 it cannot be determined at all. From region 245 onward, the pattern repeats periodically (see the classification sketch below).
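As a small illustration of these case distinctions, the following Python sketch maps a region index onto a validity class. The reference numerals 240 to 245, and the assumption that the periodic repetition has a period of five regions, are taken from the description above; which of the regions 240 and 243 lies in front of and which behind the measuring range is not specified there, so both are mapped to a common out-of-range class.

```python
from enum import Enum

class Validity(Enum):
    VALID = 1         # regions 241 and 242: d can be determined correctly
    OUT_OF_RANGE = 2  # regions 240 and 243: only "in front of/behind range"
    UNDEFINED = 3     # region 244: d cannot be determined

def classify_region(region: int) -> Validity:
    """Map a region index (>= 240) onto its validity class; from region
    245 onward the pattern is assumed to repeat with period five."""
    region = 240 + (region - 240) % 5  # region 245 behaves like region 240
    if region in (241, 242):
        return Validity.VALID
    if region in (240, 243):
        return Validity.OUT_OF_RANGE
    return Validity.UNDEFINED
```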
  • a formula can be formulated as in the previous examples.
  • FIG. 43 shows such a formula used in the arithmetic unit 29 for the eighth embodiment.
  • the formula is also very compact and requires little resource expenditure when implemented in hardware.
  • as image sensor 28 in Fig. 3, a conventional image sensor can now be used which has only one signal path per pixel.
  • such an image sensor 28 can be produced in a particularly simple and therefore cost-effective manner and can be particularly light-sensitive or low-noise. Furthermore, it can have a particularly high lateral resolution or can be read out particularly quickly. Since considerably more conventional image sensors than demodulation image sensors are available on the market, it is easy to find an image sensor offering the desired advantage.
  • a color image sensor can be used, so that the distance and color of objects can be detected with only a single image sensor.
  • an apparatus and a method for controlling a time-of-flight camera have been described, in which distance information for a three-dimensional image is determined from a transit-time difference or a phase shift between a light signal emitted by a light source of the time-of-flight camera and a light signal received by an image sensor of the time-of-flight camera through scattering or reflection of the emitted light signal. At least three subimages are recorded. In this case, at least one light pulse is emitted and a shutter is clocked by means of at least three control signals so that the phase shift between the light pulse and the shutter is varied between the subimages by the different phase angles of the control signals. This yields three readings per pixel. These can be supplied to an operator which is constructed analogously to a hue operator and determines a rotation angle about the spatial diagonal of the S0-S1-S2 space; a minimal sketch of such an operator follows below. By means of a proportionality constant, this angle can then be converted into the distance information.
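The hue-analogous operator is only characterized functionally above. The sketch below is one minimal Python reading of it, borrowing the standard hue angle formula atan2(sqrt(3)*(G - B), 2*R - G - B) with S0, S1, S2 in place of R, G, B. The proportionality constant k is left abstract, since its value depends on the pulse timing of the chosen embodiment.

```python
import math

def distance_from_pixel(s0: float, s1: float, s2: float, k: float) -> float:
    """Hue-analogous operator: rotation angle of the S vector about the
    spatial diagonal (1, 1, 1) of the S0-S1-S2 space, scaled by k.
    A minimal sketch; the formulas actually used per embodiment are
    those shown in the figures."""
    # A constant ambient-light component adds the same offset to all
    # three signals, i.e. a shift along the diagonal, and therefore
    # cancels in both arguments of atan2.
    phi = math.atan2(math.sqrt(3.0) * (s1 - s2), 2.0 * s0 - s1 - s2)
    if phi < 0.0:
        phi += 2.0 * math.pi  # map the angle to [0, 2*pi)
    return k * phi
```

Because only differences of the three signals enter the operator, a constant ambient-light offset drops out by construction, which is consistent with the trajectories around the unit diagonal shown in FIGS. 9, 30 and 38.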

Abstract

The invention relates to a device and a method for controlling a time-of-flight camera. Distance information for a three-dimensional image is determined from a time-of-flight difference or a phase shift between a light signal (L) emitted by a light source of the time-of-flight camera and a light signal (D) received by an image sensor of the time-of-flight camera through scattering or reflection of the emitted light signal. At least three subimages are recorded. At least one light pulse is emitted, and a shutter device is clocked by means of at least three control signals (C0, C1, C2) in such a way that the phase shift between the light pulse and the shutter varies between the subimages due to the different phase positions of the control signals. Three measured values (S0, S1, S2) per pixel are thus obtained. These can be fed to an operator, constructed analogously to a hue operator, which determines a rotation angle about the spatial diagonal of the S0-S1-S2 space. This angle can then be converted into the distance information by means of a proportionality constant.
PCT/EP2015/068963 2014-09-03 2015-08-18 Procédé et dispositif simplifiant l'enregistrement d'images en profondeur WO2016034408A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017512300A JP2017530344A (ja) 2014-09-03 2015-08-18 深度画像の単純化された検出のための方法およびデバイス

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014013099.1A DE102014013099B4 (de) 2014-09-03 2014-09-03 Verfahren und Vorrichtung zur vereinfachten Erfassung eines Tiefenbildes
DE102014013099.1 2014-09-03

Publications (1)

Publication Number Publication Date
WO2016034408A1 true WO2016034408A1 (fr) 2016-03-10

Family

ID=54064290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/068963 WO2016034408A1 (fr) 2014-09-03 2015-08-18 Procédé et dispositif simplifiant l'enregistrement d'images en profondeur

Country Status (3)

Country Link
JP (1) JP2017530344A (fr)
DE (1) DE102014013099B4 (fr)
WO (1) WO2016034408A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111465870A (zh) * 2017-12-18 2020-07-28 苹果公司 使用可寻址发射器阵列的飞行时间感测
US11205279B2 (en) 2019-12-13 2021-12-21 Sony Semiconductor Solutions Corporation Imaging devices and decoding methods thereof

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017115385B4 (de) 2017-07-10 2022-08-11 Basler Ag Vorrichtung und Verfahren zur Erfassung eines dreidimensionalen Tiefenbilds
CN110199205A (zh) * 2017-12-22 2019-09-03 索尼半导体解决方案公司 信号生成装置
JP7357539B2 (ja) 2017-12-22 2023-10-06 ソニーセミコンダクタソリューションズ株式会社 信号生成装置
US11867842B2 (en) 2017-12-22 2024-01-09 Sony Semiconductor Solutions Corporation Pulse generator and signal generation apparatus
KR102196035B1 (ko) * 2018-12-26 2020-12-29 (주)미래컴퍼니 펄스 위상 이동을 이용한 3차원 거리측정 카메라의 비선형 거리 오차 보정 방법
WO2023119918A1 (fr) * 2021-12-23 2023-06-29 ソニーセミコンダクタソリューションズ株式会社 Dispositif de mesure de distance et appareil électronique
WO2023120012A1 (fr) * 2021-12-24 2023-06-29 株式会社小糸製作所 Dispositif de mesure

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003016944A2 (fr) * 2001-08-06 2003-02-27 Siemens Aktiengesellschaft Procede et dispositif pour acquerir une image telemetrique tridimensionnelle
WO2005036372A2 (fr) * 2003-10-09 2005-04-21 Honda Motor Co., Ltd. Systemes et procedes destines a determiner la profondeur au moyen d'impulsions de lumiere et de capteurs a obturateurs
WO2008152647A2 (fr) * 2007-06-15 2008-12-18 Ben Gurion University Of The Negev Research And Development Authority Procédé et appareil d'imagerie tridimensionnelle
US7554652B1 (en) * 2008-02-29 2009-06-30 Institut National D'optique Light-integrating rangefinding device and method
DE102008018718A1 (de) * 2008-04-14 2009-10-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Optischer Abstandsmesser und Verfahren zur optischen Abstandsmessung

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4194213A (en) 1974-12-25 1980-03-18 Sony Corporation Semiconductor image sensor having CCD shift register
US3971065A (en) 1975-03-05 1976-07-20 Eastman Kodak Company Color imaging array
US4131919A (en) 1977-05-20 1978-12-26 Eastman Kodak Company Electronic still camera
US4656519A (en) 1985-10-04 1987-04-07 Rca Corporation Back-illuminated CCD imagers of interline transfer type
US5081530A (en) 1987-06-26 1992-01-14 Antonio Medina Three dimensional camera and range finder
US5841126A (en) 1994-01-28 1998-11-24 California Institute Of Technology CMOS active pixel sensor type imaging system on a chip
US5471515A (en) 1994-01-28 1995-11-28 California Institute Of Technology Active pixel sensor with intra-pixel charge transfer
DE19704496C2 (de) 1996-09-05 2001-02-15 Rudolf Schwarte Verfahren und Vorrichtung zur Bestimmung der Phasen- und/oder Amplitudeninformation einer elektromagnetischen Welle
KR100508277B1 (ko) 1997-12-23 2005-08-17 지멘스 악티엔게젤샤프트 3차원 거리 측정 이미지를 기록하기 위한 방법 및 장치
US6667768B1 (en) 1998-02-17 2003-12-23 Micron Technology, Inc. Photodiode-type pixel for global electronic shutter and reduced lag
JP2001337166A (ja) * 2000-05-26 2001-12-07 Minolta Co Ltd 3次元入力方法および3次元入力装置
US7408627B2 (en) * 2005-02-08 2008-08-05 Canesta, Inc. Methods and system to quantify depth data accuracy in three-dimensional sensors using single frame capture
EP1777747B1 (fr) 2005-10-19 2008-03-26 CSEM Centre Suisse d'Electronique et de Microtechnique SA Méthode et appareil pour la démodulation de champs d'ondes électromagnétiques modulées
JP5098331B2 (ja) * 2006-12-28 2012-12-12 株式会社豊田中央研究所 計測装置
JP5486150B2 (ja) * 2007-03-30 2014-05-07 富士フイルム株式会社 測距装置、測距方法及び測距システム
CA2716980C (fr) * 2008-02-29 2013-05-28 Leddartech Inc. Dispositif et procede de telemetrie a integration de lumiere
US8569671B2 (en) 2008-04-07 2013-10-29 Cmosis Nv Pixel array capable of performing pipelined global shutter operation including a first and second buffer amplifier
EP2150039A1 (fr) 2008-07-28 2010-02-03 Basler AG Procédé de saisie d'image d'objets relativement mobiles
JP5448617B2 (ja) * 2008-08-19 2014-03-19 パナソニック株式会社 距離推定装置、距離推定方法、プログラム、集積回路およびカメラ
JP5105453B2 (ja) 2009-12-25 2012-12-26 独立行政法人日本原子力研究開発機構 撮像素子、半導体装置、及び撮像方法、撮像装置
DE102010003039A1 (de) 2010-03-18 2011-09-22 Basler Ag Farbsättigungskorrektur
DE102011089636A1 (de) 2010-12-22 2012-06-28 PMD Technologie GmbH Lichtlaufzeitkamera
EP2477043A1 (fr) 2011-01-12 2012-07-18 Sony Corporation Caméra temps de vol 3D et procédé
GB2487740A (en) 2011-02-01 2012-08-08 Cmosis Nv High Dynamic Range Pixel Structure

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003016944A2 (fr) * 2001-08-06 2003-02-27 Siemens Aktiengesellschaft Procede et dispositif pour acquerir une image telemetrique tridimensionnelle
WO2005036372A2 (fr) * 2003-10-09 2005-04-21 Honda Motor Co., Ltd. Systemes et procedes destines a determiner la profondeur au moyen d'impulsions de lumiere et de capteurs a obturateurs
WO2008152647A2 (fr) * 2007-06-15 2008-12-18 Ben Gurion University Of The Negev Research And Development Authority Procédé et appareil d'imagerie tridimensionnelle
US7554652B1 (en) * 2008-02-29 2009-06-30 Institut National D'optique Light-integrating rangefinding device and method
DE102008018718A1 (de) * 2008-04-14 2009-10-15 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Optischer Abstandsmesser und Verfahren zur optischen Abstandsmessung

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111465870A (zh) * 2017-12-18 2020-07-28 苹果公司 使用可寻址发射器阵列的飞行时间感测
CN111465870B (zh) * 2017-12-18 2023-08-29 苹果公司 使用可寻址发射器阵列的飞行时间感测
US11852727B2 (en) 2017-12-18 2023-12-26 Apple Inc. Time-of-flight sensing using an addressable array of emitters
US11205279B2 (en) 2019-12-13 2021-12-21 Sony Semiconductor Solutions Corporation Imaging devices and decoding methods thereof
US11380004B2 (en) 2019-12-13 2022-07-05 Sony Semiconductor Solutions Corporation Imaging devices and decoding methods thereof for determining distances to objects
US11741622B2 (en) 2019-12-13 2023-08-29 Sony Semiconductor Solutions Corporation Imaging devices and multiple camera interference rejection

Also Published As

Publication number Publication date
DE102014013099A1 (de) 2016-03-03
JP2017530344A (ja) 2017-10-12
DE102014013099B4 (de) 2019-11-14

Similar Documents

Publication Publication Date Title
DE102014013099B4 (de) Verfahren und Vorrichtung zur vereinfachten Erfassung eines Tiefenbildes
EP3185038B1 (fr) Capteur optoelectronique et procede de mesure d'un eloignement
DE102008018718B4 (de) Optischer Abstandsmesser und Verfahren zur optischen Abstandsmessung
EP1423731B1 (fr) Procede et dispositif pour acquerir une image telemetrique tridimensionnelle
DE102011089629B4 (de) Lichtlaufzeitkamera und Verfahren zum Betreiben einer solchen
DE102010061382B4 (de) Optoelektronischer Sensor und Verfahren zur Erfassung und Abstandsbestimmung von Objekten
DE102013225676B4 (de) Lichtlaufzeitkamera mit einer Bewegungserkennung
EP2240797B1 (fr) Dispositif de mesure de distance optoélectronique
DE112020001783T5 (de) Flugzeitsensor
DE102006029025A1 (de) Vorrichtung und Verfahren zur Abstandsbestimmung
DE112018006021T5 (de) Einzelchip-rgb-d-kamera
WO2007031102A1 (fr) Detection d'un rayonnement optique
DE102013208802A1 (de) Lichtlaufzeitsensor mit spektralen Filtern
EP3523674B1 (fr) Unité de detection et procédé de detection d'un signal de detection optique
EP1636610B1 (fr) Detection d'un rayonnement electromagnetique
EP2978216B1 (fr) Procédé pour la détection du flou de mouvement
DE10103861A1 (de) Verfahren zur Aufnahem eines Objektraumes
DE10153742A1 (de) Verfahren und Vorrichtung zur Aufnahme eines dreidimensionalen Abstandsbildes
DE102013208804B4 (de) Lichtlaufzeitsensor mit zuschaltbarer Hintergrundlichtunterdrückung
DE102004022912A1 (de) Verfahren und Vorrichtung zur Entfernungsmessung
DE102013203088A1 (de) Lichtlaufzeitkamerasystem
DE102013208805B4 (de) Lichtlaufzeitsensor mit Zwischenspeicher
DE102016219170A1 (de) Lichtlaufzeitkamerasystem
EP3126866B1 (fr) Dispositif de recensement pour utilisation dans un véhicule et véhicule
DE102010064682B3 (de) Optoelektronischer Sensor und Verfahren zur Erfassung und Abstandsbestimmung von Objekten

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15759672

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017512300

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 15759672

Country of ref document: EP

Kind code of ref document: A1