WO2011020921A1 - Time-of-flight sensor - Google Patents

Time-of-flight sensor

Info

Publication number
WO2011020921A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
pixel
signal
array
module
Prior art date
Application number
PCT/EP2010/062202
Other languages
French (fr)
Inventor
Michael Franke
Ronald Schreiber
Berndt Uhlmann
Jörg WERNER
Original Assignee
Iee International Electronics & Engineering S.A.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Iee International Electronics & Engineering S.A. filed Critical Iee International Electronics & Engineering S.A.
Publication of WO2011020921A1 publication Critical patent/WO2011020921A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491 Details of non-pulse systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/32 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/36 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar

Definitions

  • the present invention generally relates to optical sensors with differential signal integration over time and more specifically to a range camera operating according to the time-of-flight principle and to a method for acquiring a range image using such camera.
  • Systems for creating a 3-D representation of a given portion of space have a variety of potential applications in many different fields. Examples are automotive sensor technology (e.g. vehicle occupant detection and classification), robotic sensor technology (e.g. object identification) or safety engineering (e.g. plant monitoring) to name only a few.
  • a 3-D imaging system requires depth information about the target scene. In other words, the distances between one or more observed objects and an optical receiver of the system need to be determined.
  • A well-known approach to distance measurement, used e.g. in radar applications, consists in timing the interval between emission and echo return of a measurement signal.
  • This approach is known as the time-of-flight (TOF) method.
  • In the present context, the measurement signal consists of light waves.
  • the term "light” is to be understood as including visible, infrared (IR) and ultraviolet (UV) light.
  • The TOF method can be implemented using e.g. the phase-shift technique or the pulse technique.
  • In the phase-shift technique, the amplitude of the emitted light is periodically modulated (e.g. by sinusoidal modulation) and the phase of the modulation at emission is compared to the phase of the modulation at reception.
  • In the pulse technique, light is emitted in discrete pulses without the requirement of periodicity.
  • The modulation period is typically in the order of twice the difference between the maximum measurement distance and the minimum measurement distance, divided by the velocity of light.
  • The propagation time interval is determined as a phase difference by means of a phase comparison between the emitted and the received light signal.
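As a rough numerical illustration of this rule of thumb (the measurement range below is an assumption, not a value from the patent):

```python
# Illustrative only: pick a modulation period for an assumed measurement
# range of 0.3 m (minimum) to 7.8 m (maximum).
c = 299_792_458.0                  # speed of light, m/s
d_min, d_max = 0.3, 7.8            # assumed measurement range, metres
period = 2 * (d_max - d_min) / c   # about 50 ns
f_mod = 1.0 / period               # about 20 MHz modulation frequency
```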
  • Figure 1 shows the typical architecture of a state-of-the-art time-of-flight sensor or camera 100.
  • Light with modulated intensity is generated under the control of a modulation signal by a light source 110 and radiated against an object or scene 120.
  • ambient light is present (represented as emanating from an outside light source 130) and radiates also onto the scene 120. Both components (modulated light and ambient light) are mixed and reflected at the scene 120. A portion of the reflected light is finally received at the optics 140 of the camera 100 and is passed to the sensor pixel matrix 150.
  • The impinging light, which comprises a component from the non-modulated ambient light and a component from the intensity-modulated light, is converted into an electrical signal for the determination of the phase information.
  • Each pixel 160 of the pixel matrix 150 comprises a photo detector and a demodulator for demodulating the incoming signal.
  • the pixel 160 is fed with a demodulation signal, which is derived from the modulation signal.
  • each pixel 160 integrates the charge generated therein by the impinging light during at least three time intervals, each of which corresponds to a different phase within one period of the modulation signal.
  • Each pixel 160 provides response signals indicating the integrated charge for the different time intervals.
  • This raw phase information is sometimes referred to as "tap values” or “tap responses” according to the nomenclature of Robert Lange's doctoral thesis.
  • The phase difference φ is calculated as follows:
  • φ = atan2(A3 - A1, A2 - A0) (eqn. 1), where atan2(y, x) is the four-quadrant inverse tangent function, yielding the angle between the positive x-axis of a plane and the point with coordinates (x, y) on that plane.
  • the light impinging on the range imager comprises a component from the non-modulated ambient light
  • a considerable part of the charges generated and integrated in the pixel is not related to the modulated light and thus to distance information to be determined.
  • the signal at the output of the pixel therefore comprises a component which results from the charges induced in the pixel by the non-modulated ambient light (hereinafter referred to as DC component) and a component resulting from the charges induced in the pixel by the intensity modulated light (hereinafter referred to as AC component).
  • the overall performance of the range imager suffers from background illumination measured along with the actual signal.
  • the ambient light or background illumination can e.g. cause saturation of the sense node if its intensity is very high, or the ambient light can deteriorate the contrast between the charge sensing devices.
  • the precision for the detection of the phase and the spatial distribution of the impinging radiation is low. It is therefore very important to compensate the influence of background light in order to prevent background saturation.
  • the compensation of the influence of background light and the prevention of background saturation is usually solved by additional circuitry for cutting off or compensating the background induced DC component of the pixel signal.
  • This additional circuitry is usually required to be part of the pixel itself, i.e. inside the pixel, which results in a larger pixel and a larger sensor chip or in a reduced fill factor of the sensor chip and a reduced sensitivity.
  • The present invention provides an imager configured to operate according to the time-of-flight principle, comprising an illumination unit for illuminating a scene with intensity-modulated light; an array of pixels, each of said pixels configured for integrating charge induced therein by light impinging onto said pixel and for converting said light into a demodulated pixel signal; an output module configured to read out the signal of the pixels after a predetermined integration time; and a control module for controlling the operation of the pixels of said pixel array and the output module.
  • the imager further comprises a signal storage module associated to said array of pixels and a transfer module for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module, wherein said transfer module is configured for executing, during said predetermined integration time, a plurality of transfer operations for each pixel, and wherein said output module is configured to read out the integrated signal of the pixels after the predetermined integration time from the corresponding storage areas of said signal storage module.
  • the sensor according to the present invention thus consists at least of an array of pixels (Pixel Matrix), a signal storage module (called Shadow Signal Storage) and a transfer module associated with the pixel matrix and the shadow signal storage.
  • the array of pixels is preferably a 2 dimensional array, wherein the pixels are arranged in several rows and columns, however the invention is also applicable to a pixel matrix with just one column. In this context it should be noted that in general the terms “row” and “column” may be exchanged.
  • Each row of pixels is connectable under the control of the control module to a corresponding row of the shadow signal storage via the transfer module.
  • the transfer module itself is configured such that during the integration time, the pixel rows are scanned very fast to transfer the demodulated signal from the pixels to the shadow signal storage.
  • In the shadow signal storage, which is preferably located outside of the pixel matrix, the signals of the single transfers are accumulated per pixel to improve the signal-to-noise ratio.
  • During this transfer, various functions may be performed, such as measuring and cutting the DC content of the demodulated signal.
  • the main advantage of this architecture resides in the fact, that the pixel storage itself does not need to store all the charges integrated during the entire integration phase.
  • the intra-pixel storage may be designed much smaller than with prior art imagers.
  • the pixels may be configured much simpler than with prior art sensors of the kind, because most of the functions of the sensor can be fulfilled during or after the transfer of the signals to the shadow signal storage by circuitry outside the pixel.
  • the circuitry for performing the different functions relating to the detection and prevention of the background saturation and enhancement of the dynamic range being located outside the pixel itself can accordingly be optimized without the constraints relating to the pixel design.
  • the different functions as listed above can therefore be improved during development without changing the pixel layout.
  • the pixel can have a higher fill factor and the pixel matrix can become smaller, which furthermore reduces the costs for the optics.
  • said array of pixels comprises n sub-arrays of pixels and wherein said transfer module and said output module each comprises n sub-modules, one of said sub-modules being associated with a respective one of said n sub-arrays of pixels.
  • the array of pixels and said signal storage module may be arranged on different semiconductor chips and connected together by means of suitable circuitry. In a preferred embodiment however, the array of pixels and said signal storage module are arranged on a common semiconductor chip.
  • the present invention also relates to a method for operating the above described imager, comprising the steps of executing, during said predetermined integration time, a plurality of transfer operations for each pixel, each of said transfer operations for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module; and after the predetermined integration time, reading out the integrated signal of the pixels from the corresponding storage areas of said signal storage module.
  • the signal transfer is configured such that during said predetermined integration time, for each pixel column, the signals from the corresponding pixels of the different pixel rows are cyclically transferred to the associated storage area of said signal storage module.
  • Fig. 1 shows the architecture of a time-of-flight sensor or camera
  • Fig. 2 shows an illustration of the pixel matrix
  • Fig. 3 shows a block diagram of one pixel
  • Fig. 4 shows a simplified architecture of the pixel matrix together with control and readout circuits
  • Fig. 5 shows a possible timing to operate the architecture of Fig. 4;
  • FIG. 6 shows a different timing to operate the architecture of Fig. 4;
  • Fig. 7 shows a simplified architecture of an embodiment of an imager according to the present invention.
  • Fig. 8-11 illustrate different timing methods of a preferred method for operating the architecture of Fig. 7;
  • Fig. 12 shows a simplified architecture of another embodiment of an imager according to the present invention.
  • a TOF sensor has in principle to fulfill a large number of functions, such as demodulate incident modulated light; output quantities of the demodulated signal; measure background intensity; detect saturation; prevent background saturation; enhance dynamic range; correct errors like offset and gain mismatch; be highly sensitive; be small to reduce costs for chip and optics; and provide the images with high frame rate.
  • Figure 1 shows the architecture of a state-of-the-art time-of-flight sensor or camera. Active light is generated by a light source 110 and radiated towards an object or scene 120. Furthermore, ambient light 130 is present and also radiates onto the scene 120. Both components are mixed and reflected. A portion of the reflected light reaches the optics 140 of the imager and is passed to the sensor's pixel matrix 150.
  • Figure 2 shows an illustration of the pixel matrix 150. The pixels 160 are arranged in a matrix form so as to form pixel rows 190 and pixel columns 200.
  • Figure 3 shows a block diagram of one pixel 160.
  • the incident light 230 which enters via the optics 140, reaches a stage 210 where it is detected and demodulated.
  • This stage 210 has at least one output. Two or four outputs are usual in reported constructions.
  • stage 220 where the demodulated signal is accumulated over several periods of the modulation signal.
  • Stage 220 includes corresponding means therefor. Furthermore, this stage includes means to pass the output signals to the common output bus 250 of one pixel column 200.
  • the control signals to control the stages 210 and 220, are tapped from a control bus 240 associated to each pixel row 190.
  • this stage 220 includes all further circuits that are required to fulfill the different functions relating to the detection and prevention of background saturation in order to enhance the dynamic range as mentioned above.
  • the required layout space for that would be large. Large pixels however can cause the pixel matrix to become very large, if a certain lateral resolution is required. Consequently it is an aim of pixel design to make the stage 220 as small as possible regarding layout area.
  • FIG. 4 shows a simplified architecture of the pixel matrix 150 together with control and readout circuits.
  • Block 270 forms a row control circuit, which applies the control signals 240 to the pixel matrix 150.
  • Block 280 forms a block that reads the common column output busses.
  • this block includes circuits like amplifiers (in case of charge readout), current sources (to drive source followers in voltage readout), readout control circuits, column multiplexers and/or column parallel ADCs.
  • this block is required just at the end of the accumulation of the signals inside the pixels, when these signals shall be read out as an image from the sensor and be passed to the post processing circuitry.
  • Figure 5 shows a possible state-of-the-art timing to operate the architecture of figure 4: the global shutter method.
  • Axis 1 represents or corresponds to the row number of a certain pixel column, axis 2 to the time.
  • the differently shaded zones represent the operating status of the respective pixel over the represented time frame (3: integration start; 4: integration end; 5: signal integration phase; 6: transfer signals out of chip (chip readout); 7: waiting for pixel readout; 8: pixel reset).
  • the integration time is in principle variable, the duration being usually varied in order to deal with varying incident light power ranges.
  • one pixel row is selected by the control signals of the row control circuit 270.
  • the selected pixels of each column are then connected to the block 280 via the respective column busses 250.
  • the block 280 has received and passed the data of the selected pixels, the currently selected pixel row is disconnected from the column busses and a next row of pixels is selected for read out.
  • Figure 6 shows another possible state-of-the-art timing to operate the architecture of figure 4: the rolling shutter method.
  • Axis 1 corresponds again to the row number of a certain pixel, axis 2 to the time.
  • The read out process is organized similarly to that of the global shutter method: the pixel rows are read out (6) sequentially, one after another. Again, one pixel row is selected by the control signals of the row control circuit 270. The selected pixels of each column are thereby connected to the block 280 via the respective column busses 250. When the block 280 has received and passed the data of the selected pixels, the currently selected pixel row is disconnected from the column busses and a next row of pixels is selected for read out.
  • the integration time is usually equal for all pixel rows in one read out frame, but for the frame it is in principle variable. This is often used to deal with varying incident light power ranges.
  • FIG. 7 shows a simplified architecture of an embodiment of a sensor according to the present invention.
  • the sensor consists at least of an array of pixels (Pixel Matrix), a signal storage module (called Shadow Signal Storage) and a transfer module associated with the pixel matrix and the shadow signal storage.
  • the array of pixels is preferably a 2 dimensional array, wherein the pixels are arranged in several rows and columns, however the invention is also applicable to a pixel matrix with just one column.
  • Each row of pixels is connectable to a corresponding row of the shadow signal storage via the transfer means. It should be noted that in general the terms “row” and “column” may be exchanged.
  • The transfer module is configured such that during the integration time, the pixel rows are scanned very fast to transfer the demodulated signal from the pixels to the shadow signal storage. In the shadow signal storage the signals of the single transfers are accumulated per pixel to improve the signal-to-noise ratio. During this transfer, various functions may be performed, such as measuring and cutting the DC content of the demodulated signal.
  • the main advantage of this architecture resides in the fact, that the pixel storage itself does not need to store all the charges integrated during the entire integration phase.
  • the intra-pixel storage may be designed much smaller than with prior art imagers.
  • the pixels may be configured much simpler than with prior art sensors of the kind, because most of the functions of the sensor can be fulfilled during or after the transfer of the signals to the shadow signal storage by circuitry outside the pixel.
  • the circuitry for performing the different functions relating to the detection and prevention of the background saturation and enhancement of the dynamic range being located outside the pixel itself can accordingly be optimized without the constraints relating to the pixel design.
  • the different functions as listed above can therefore be improved during development without changing the pixel layout.
  • the pixel can have a higher fill factor and the pixel matrix can become smaller, which furthermore reduces the costs for the optics.
  • the preferred procedure for the charge transfer by cyclically or sequentially transferring the charges accumulated in the different pixels of a pixel column to the shadow signal storage is similar to "rolling shutter" in 2d imaging.
  • One major difference is that for 3D imaging two signals are superimposed on each other: the active modulated signal and the background signal.
  • The background signal may be significantly larger than the active modulated signal. Nevertheless, the active modulated signal has to be integrated over a sufficiently long time to become large enough to reduce background-induced noise. If the rolling shutter method were simply applied, the storages would saturate due to the strong background signal.
  • the preferred method of the present invention focuses on the splitting of one long time capture into several short time captures. This is achieved by the charges in each pixel being transferred several times during one integration period. The main purpose of doing so is to reduce the size of the required in-pixel storages and non-photo-sensitive circuit parts.
  • the required rolling and interleaved scan of the pixel rows is different to simple repeats of capturing frames in that
  • the duration of one cycle is preferably shorter than that of a usual frame capture
  • the pixel rows are scanned very fast to transfer the demodulated signal from the pixels to the shadow signal storage.
  • the signals of the single transfers are accumulated per pixel to improve the signal to noise ratio.
  • various functions may be performed during the transfer, such as measuring and cutting the DC content of the demodulated signal.
  • This basic idea requires a fast continuous readout of the intra-pixel storages, so that the pixel-internal storages can be made small and do not saturate even under high ambient light power.
  • the fast continuous readout can be realized as illustrated in Fig. 8.
  • Axis 1 represents or corresponds to the row number of a certain pixel column, axis 2 to the time (like in the figures 5 and 6).
  • The differently shaded zones represent the operating status of the respective pixel over the represented time frame (3: integration start; 4: integration end; 5: signal integration phase; 6: transfer signals out of chip (chip readout); 7: waiting for pixel readout; 8: pixel reset; 9: sub-cycle period; 10: one row transfer time; 11: signal transfer from pixel to shadow signal storage).
  • the pixel row is selected by the control signals of the row control circuit.
  • the selected pixels of each column are thereby connected to the transfer means via the column busses.
  • When the transfer means have received the data from the selected pixels and transferred it to the pixel-external storages, the currently selected pixel row is disconnected from the column busses and a next row of pixels is selected for the transfer.
  • a reset phase 8 is inserted for the considered pixel row, to empty the pixel internal storages of the considered pixel row. The accumulation of the demodulated signal charge starts with or after the end of this reset.
  • the cycle may start again.
  • the newly transferred signals are then added to the pixel external storages, which now already contain the signals of the previous cycles.
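The cyclic transfer described above can be modelled with a toy calculation (plain Python, not the patented circuit; all numbers are assumptions): the in-pixel storage only ever holds one sub-cycle's charge, while the shadow storage accumulates the whole frame.

```python
def integrate_with_shadow(per_subcycle_charge, n_subcycles, pixel_capacity):
    # Split one long integration into n short sub-cycles; after each
    # sub-cycle the in-pixel charge is transferred to (added into) the
    # external shadow storage and the pixel is reset.
    shadow = 0.0
    peak_in_pixel = 0.0
    for _ in range(n_subcycles):
        in_pixel = per_subcycle_charge        # charge of one sub-cycle only
        if in_pixel > pixel_capacity:
            raise OverflowError("in-pixel storage saturated")
        peak_in_pixel = max(peak_in_pixel, in_pixel)
        shadow += in_pixel                    # transfer, then pixel reset
    return shadow, peak_in_pixel
```

The point of the model: the accumulated shadow signal grows with the number of sub-cycles while the peak in-pixel charge stays bounded by one sub-cycle, which is why the intra-pixel storage can be made small.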
  • the last cycle, at the end of the integration phase 4, is different in that there is a common stop 4 of accumulation of demodulated signal charge for the pixels of all rows.
  • the time phase between the stop of the demodulation/accumulation phase 5 and the readout phase 11 to the pixel external storages of a specific row is a time phase 7 where the pixels just store the accumulated signals.
  • the frame is read out to the post-processing circuits. This is illustrated as phases 6 in Fig. 8. Preferably this transfer is the one where the signal data passes the chip boundary.
  • The common start 3 of the demodulation/accumulation phase 5 of the first cycle and the common stop 4 of the demodulation/accumulation phase 5 of the last cycle advantageously cause a reduction of rolling shutter artifacts for moving scenes. In principle, however, the method also works without this common start 3 and/or stop 4.
  • the transfer of the pixels signals to the pixel external storages may be combined with procedures to fulfill functions like mentioned above such as prevent background saturation; measure background intensity; detect saturation; enhance dynamic range; and correct errors like offset and gain mismatch.
  • Preventing background saturation is preferably done by cutting the DC content from the signal.
  • the outputs of the pixel can for instance be organized and arranged so as to build differential pairs. If the shadow signal storage works in differential mode, the DC content just builds the input common mode, which is not considered as signal in a differential architecture. So the storage just adds the differential contents, which builds the AC signal.
  • The offsets, which are a measure of the background (DC) content, may be dropped here.
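The common-mode rejection can be sketched numerically (an idealized model, not the actual differential circuitry of the patent):

```python
def accumulate_differential(plus_samples, minus_samples, dc):
    # Each of the two pixel outputs carries the same background (DC)
    # charge; accumulating only the difference cancels it as common mode,
    # leaving the modulated (AC) content.
    total = 0.0
    for p, m in zip(plus_samples, minus_samples):
        total += (p + dc) - (m + dc)   # DC cancels in the difference
    return total
```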
  • many cycles of transfer and accumulation may be required to get sufficient signal power and signal to noise ratio. Circuit errors like offsets due to mismatch may accordingly occur which need to be compensated.
  • One option to compensate offset errors is the "chopper technique".
  • the chopped path can start with an inversion of the demodulator clocks before the integration phase, so that the differential pixel outputs show an inverted differential signal between the chopper phases.
  • This demodulator clock inversion can be specific for each pixel row, because the time point of demodulation start is usually specific for each row.
  • the end point of the chopped path should lie inside the shadow signal storage.
  • The number of transfers should be common for the phases of one chopper system (see the example in Fig. 11). Because the chopper clock itself also generates a residual offset due to clock feed-through mismatch, it can be a good choice not to change the chopper phase after every transfer cycle. It may also be necessary to enclose the chopper switches by a further chopper branch.
  • The demodulator clock inversion can also be synchronous for all pixel rows. In that case, the first rows have to wait until the last row has been processed with the old chopper phase, before the new chopper phase is activated. This can help to find a compromise between a complex architecture and the most efficient integration.
  • the chopper path start point can also be in or before the generation of the active radiated light.
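A hedged numeric sketch of the chopper idea (idealized; as noted above, real chopper phases need not alternate on every transfer): the demodulation clock inversion flips the sign of the wanted signal between phases while the circuit offset does not, so de-chopping before accumulation averages the offset out.

```python
def chopped_accumulate(signal, offset, n_transfers):
    # Requires an equal number of transfers per chopper phase, matching
    # the "common number of transfers" condition in the text.
    assert n_transfers % 2 == 0
    total = 0.0
    for k in range(n_transfers):
        chop = 1 if k % 2 == 0 else -1      # chopper phase of this transfer
        measured = chop * signal + offset   # signal flips, offset does not
        total += chop * measured            # de-chop before accumulating
    return total / n_transfers              # offset averages to zero
```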
  • a measure for the background intensity can be required as additional signal or for post calibration of demodulator errors.
  • the requirements regarding accuracy may be less than the corresponding requirements for the AC measurement. So it makes sense to use a separate storage for the measurement of the background intensity.
  • The offset of one pixel's outputs is a measure of the background intensity of that pixel during the last integration sub-cycle.
  • Enhance dynamic range: For most applications it is more important to calculate the phase from the demodulated signal than to have an exact representation of the signal amplitude. Often this fact is combined with the requirement to manage a wide range of incident active light power.
  • One solution to improve the sensor regarding these facts is to enable a pixel-autonomic stop of signal accumulation. Regarding the present invention this can be done by adding the following procedure and circuitry: During each single transfer phase the fill content of the shadow signal storage is checked. If the imminent signal transfer from the pixel would cause an overflow of the shadow signal storage, the transfer is skipped. The pixel signal of that sub-cycle can be deleted in that case.
  • a less-accurate measure for the power of the incident active light can still be required as additional signal or for post calibration of demodulator errors.
  • The output signal is then no longer correlated with the incident light power.
  • One way to form a measure of the incident active light power is to accumulate the sub-cycle integration times of the occurred transfers. The ratio between the final signal strength and the sum of the real integration time is then a direct measure for the active incident light power.
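The overflow skip and the effective-time bookkeeping described above can be combined in a small model (all numbers are assumptions):

```python
def transfer_with_skip(subcycle_signals, subcycle_time, shadow_capacity):
    # Skip any transfer that would overflow the shadow storage, and sum
    # only the integration times of accepted sub-cycles; the accumulated
    # signal divided by that time remains a measure of the incident
    # active light power.
    shadow = 0.0
    effective_time = 0.0
    for s in subcycle_signals:
        if shadow + s > shadow_capacity:
            continue                        # would overflow: drop this sub-cycle
        shadow += s
        effective_time += subcycle_time     # count only accepted sub-cycles
    power = shadow / effective_time if effective_time else 0.0
    return shadow, effective_time, power
```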
  • Saturation can occur in two storages: the intra-pixel storage for the signals of each sub-cycle and the shadow signal storage for the accumulated signal of the frame.
  • the saturation of the shadow signal storage is detectable from the output signal by a comparison to the allowed signal range.
  • the saturation of the intra-pixel storage may not be detectable without a special solution.
  • One solution to detect this kind of saturation is to check the pixel output range during each transfer. If intra-pixel saturation is detected, the transfer is either skipped or a flag is set for the output signal in the shadow signal storage.
  • the following variant may be used.
  • the sub-cycle integration time is reduced for some or many sub-cycles to prevent intra-pixel saturation during one sub-cycle.
  • the intra-pixel reset phase may be enlarged.
  • Further stages can fulfill the other functions.
  • the stages can be placed in a common block for each column or inside a pixel-specific block or in a block that is common for a number of pixels.
  • the transfer itself may be achieved by charge readout or by voltage readout of the pixel.
  • One solution can be to check the equality for each pixel. This can be done with help of counters. These counters can be digital or analogue. There can be a counter for each phase. Or there can be an up-counter and a down-counter: one phase counts up, the other one down. There can be reset or refresh phases to keep the accuracy over a long frame capture time, especially in the case of an analogue counter. If the equality is violated, a flag can be set.
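One possible realization of the equality check, sketched with a single up/down counter (a digital flavor of the counters mentioned above; the encoding of the phases is our assumption):

```python
def phases_balanced(chopper_phases):
    # One chopper phase counts up, the other down; a zero final count
    # means both phases received the same number of transfers, otherwise
    # a flag would be set.
    count = 0
    for phase in chopper_phases:     # each entry is 0 or 1 per transfer
        count += 1 if phase == 0 else -1
    return count == 0
```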
  • the shadow signal storage can be realized as an analogue, digital or mixed analogue-digital storage module.
  • the readout of the shadow signal storage ("shadow readout") can in principle be organized like a usual readout of a pixel matrix without shadow signal storage.
  • the shadow readout (Fig. 9: 6) of one row can in principle start directly after the last transfer (Fig. 9: 11 ) of that row.
  • a problem with doing so could be that the intended sub-cycle time is very short while the possible speed of shadow readout over the chip boundary is usually much slower, so that the gap between transfer and shadow readout would increase from row to row. The overlapping may generate interference noise, which is then different between the rows. Another solution would therefore be to wait with the shadow readout until the last transfer of the last row is finished (see Fig. 10).
  • the pattern 7 in Figs. 9 and 10 illustrates a phase, where the pixels have already stopped demodulation and accumulation, just keep their current demodulated sub-signals, and are waiting for sub-signal transfer. As mentioned hereinabove, this helps to reduce rolling shutter artifacts for moving scenes. But in principle it works also without the common start and/or stop.
  • the duration of the demodulation/accumulation phase 5 shall be referred to as the "sub-cycle integration time".
  • the difference between the cycle duration 9 and the sub-cycle integration time is required for row readout 11 and reset 8. During that time no demodulated signal charge is accumulated in the considered pixel, so it is referred to here as a "non-sensitive" time.
  • the sub-cycle integration time must not be too large, because otherwise large in-pixel storages would be required to prevent saturation in case of strong ambient light.
  • the cycle duration 9 is greater than or equal to the product of the number of rows to be read out and the row selection time 10 required for the row readout 11. Bringing both together, the most effective way is to reduce the row selection time 10 for the row readout 11.
  • the signal-to-noise ratio improves with the sum of the sub-cycle integration times.
  • the sum of the sub-cycle integration times is almost proportional to the number of sub-cycles. So the number of sub-cycles may be adapted to find a sufficient compromise between the noise requirements and the overall frame capture time.
  • An enhancement of the applicable intra-scene dynamic range of the incident active light power density is further possible by a combination of different settings according to the discussed aspects.
  • measures like pixel saturation or low amplitude are of importance for building a common frame.
  • the signals from different cycles must only be added to the pixel external storages if the corresponding measures mark the signal of the considered pixel and cycle as valid.
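By way of illustration only, the validity-gated accumulation and the incident active light power measure described in the points above can be sketched as follows; the function and variable names are hypothetical and not taken from the patent:

```python
def accumulate_frame(transfers):
    """Accumulate per-pixel sub-cycle transfers into a shadow storage value.

    `transfers` is a list of (signal, sub_cycle_time, valid) tuples for one
    pixel. Only transfers marked valid contribute, and the summed real
    integration time is kept so that the active incident light power can be
    estimated afterwards as the ratio of the final signal strength to the
    sum of the real integration times.
    """
    shadow_signal = 0.0
    total_integration_time = 0.0
    for signal, sub_cycle_time, valid in transfers:
        if not valid:  # e.g. intra-pixel saturation was flagged for this cycle
            continue
        shadow_signal += signal
        total_integration_time += sub_cycle_time
    power_measure = (shadow_signal / total_integration_time
                     if total_integration_time > 0 else 0.0)
    return shadow_signal, power_measure
```

A saturated sub-cycle (the third tuple below) is simply excluded, so it neither corrupts the accumulated signal nor the power estimate.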

Abstract

An imager configured to operate according to the time-of-flight principle, comprises an illumination unit for illuminating a scene with intensity-modulated light; an array of pixels, each of said pixels configured for integrating charge induced therein by light impinging onto said pixel and for converting said light into a demodulated pixel signal; an output module configured to read out the signal of the pixels after a predetermined integration time; and a control module for controlling the operation of the pixels of said pixel array and the output module. According to the invention, the imager further comprises a signal storage module associated to said array of pixels and a transfer module for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module, wherein said transfer module is configured for executing, during said predetermined integration time, a plurality of transfer operations for each pixel, and wherein said output module is configured to read out the integrated signal of the pixels after the predetermined integration time from the corresponding storage areas of said signal storage module.

Description

TIME-OF-FLIGHT SENSOR
Technical field
[0001] The present invention generally relates to optical sensors with differential signal integration over time and more specifically to a range camera operating according to the time-of-flight principle and to a method for acquiring a range image using such camera.
Background Art
[0002] Systems for creating a 3-D representation of a given portion of space have a variety of potential applications in many different fields. Examples are automotive sensor technology (e.g. vehicle occupant detection and classification), robotic sensor technology (e.g. object identification) or safety engineering (e.g. plant monitoring) to name only a few. As opposed to conventional 2-D imaging, a 3-D imaging system requires depth information about the target scene. In other words, the distances between one or more observed objects and an optical receiver of the system need to be determined. A well-known approach for distance measurement, which is used e.g. in radar applications, consists in timing the interval between emission and echo-return of a measurement signal. This so-called time-of-flight (TOF) approach is based on the principle that, for a signal with known propagation speed in a given medium, the distance to be measured is given by the product of the propagation speed and the time the signal takes to travel back and forth.
[0003] In case of optical imaging systems, the measurement signal consists of light waves. For the purposes of the present description, the term "light" is to be understood as including visible, infrared (IR) and ultraviolet (UV) light.
[0004] Distance measurement by means of light waves generally requires varying the intensity of the emitted light in time. The TOF method can e.g. be implemented using the phase-shift technique or the pulse technique. With the phase-shift technique, the amplitude of the emitted light is periodically modulated (e.g. by sinusoidal modulation) and the phase of the modulation at emission is compared to the phase of the modulation at reception. With the pulse technique, light is emitted in discrete pulses without the requirement of periodicity.
[0005] In phase-shift measurements, the modulation period is typically in the order of twice the difference between the maximum measurement distance and the minimum measurement distance divided by the velocity of light. In this approach, the propagation time interval is determined as phase difference by means of a phase comparison between the emitted and the received light signal.
[0006] The principles of range imaging based upon time-of-flight measurements are described in detail in EP 1 152 261 A1 (to Lange and Seitz) and WO 98/10255 (to Schwarte). A more detailed description of the technique can be found in Robert Lange's doctoral thesis "3D Time-of-Flight Distance Measurement with Custom Solid-State Image Sensors in CMOS/CCD-Technology" (Department of Electrical Engineering and Computer Science at University of Siegen). A method of operating a time-of-flight imager pixel that allows detection of saturation is disclosed in WO 2007/014818 A1.
[0007] Figure 1 shows the typical architecture of a state-of-art time-of-flight sensor or camera 100. Light with modulated intensity is generated under the control of a modulation signal by a light source 110 and radiated against an object or scene 120. Furthermore ambient light is present (represented as emanating from an outside light source 130) and radiates also onto the scene 120. Both components (modulated light and ambient light) are mixed and reflected at the scene 120. A portion of the reflected light is finally received at the optics 140 of the camera 100 and is passed to the sensor pixel matrix 150. In the sensor pixel matrix 150 the impinging light, which comprises a component from the non- modulated ambient light and a component from the intensity modulated light, is converted into an electrical signal for the determination of the phase information.
[0008] Each pixel 160 of the pixel matrix 150 comprises a photo detector and a demodulator for demodulating the incoming signal. The pixel 160 is fed with a demodulation signal, which is derived from the modulation signal. Under the control of the demodulation signal, each pixel 160 integrates the charge generated therein by the impinging light during at least three time intervals, each of which corresponds to a different phase within one period of the modulation signal. Each pixel 160 provides response signals indicating the integrated charge for the different time intervals. This raw phase information is sometimes referred to as "tap values" or "tap responses" according to the nomenclature of Robert Lange's doctoral thesis. To simplify computation of the phase difference between the received light and the modulation signal, one normally chooses four integration intervals corresponding to phases separated by 90°. For each pixel, one thus retrieves four tap values (called A0, A1, A2, A3 from now on) per picture taken. The tap values are converted into phase information by a phase calculation unit. With four tap values, the phase difference φ is calculated as follows:
φ = atan2(A3 - A1, A2 - A0) (eqn. 1) where atan2(y,x) is the four-quadrant inverse tangent function, yielding the angle between the positive x-axis of a plane and the point with coordinates (x, y) on that plane.
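As an illustrative sketch (not part of the patent), eqn. 1 and the subsequent conversion of the phase shift into a distance can be written as follows; the distance formula is the standard time-of-flight relation for a modulation frequency f_mod and is assumed here, not stated in eqn. 1:

```python
import math

def tof_phase(a0, a1, a2, a3):
    """Phase difference per eqn. 1: phi = atan2(A3 - A1, A2 - A0)."""
    return math.atan2(a3 - a1, a2 - a0)

def phase_to_distance(phi, f_mod):
    """Distance from phase shift: d = c * phi / (4 * pi * f_mod).

    A full phase turn (2*pi) corresponds to one modulation period, i.e. a
    round trip of c / f_mod, hence a one-way distance of c / (2 * f_mod).
    """
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return c * phi / (4.0 * math.pi * f_mod)
```

For instance, with a 20 MHz modulation the unambiguous range is c / (2 · 20 MHz) ≈ 7.5 m, which is why the modulation period is chosen relative to the intended measurement range as noted in paragraph [0005].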
[0009] As the light impinging on the range imager comprises a component from the non-modulated ambient light, a considerable part of the charges generated and integrated in the pixel is not related to the modulated light and thus to distance information to be determined. The signal at the output of the pixel therefore comprises a component which results from the charges induced in the pixel by the non-modulated ambient light (hereinafter referred to as DC component) and a component resulting from the charges induced in the pixel by the intensity modulated light (hereinafter referred to as AC component).
[0010] Accordingly the overall performance of the range imager suffers from background illumination measured along with the actual signal. The ambient light or background illumination can e.g. cause saturation of the sense node if its intensity is very high, or the ambient light can deteriorate the contrast between the charge sensing devices. As a result, the precision for the detection of the phase and the spatial distribution of the impinging radiation is low. It is therefore very important to compensate the influence of background light in order to prevent background saturation.
[0011 ] The compensation of the influence of background light and the prevention of background saturation is usually solved by additional circuitry for cutting off or compensating the background induced DC component of the pixel signal. This additional circuitry is usually required to be part of the pixel itself, i.e. inside the pixel, which results in a larger pixel and a larger sensor chip or in a reduced fill factor of the sensor chip and a reduced sensitivity.
Technical problem
[0012] It is therefore an object of the present invention to provide an imager pixel or an imager sensor chip with improved architecture and a method for operating such an imager.
[0013] This object is achieved by an imager as claimed in claim 1 and a method for operating such an imager as claimed in claim 5.
General Description of the Invention
[0014] In order to overcome the above-mentioned problem, the present invention provides an imager configured to operate according to the time-of-flight principle, comprising an illumination unit for illuminating a scene with intensity-modulated light; an array of pixels, each of said pixels configured for integrating charge induced therein by light impinging onto said pixel and for converting said light into a demodulated pixel signal; an output module configured to read out the signal of the pixels after a predetermined integration time; and a control module for controlling the operation of the pixels of said pixel array and the output module. According to the invention, the imager further comprises a signal storage module associated to said array of pixels and a transfer module for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module, wherein said transfer module is configured for executing, during said predetermined integration time, a plurality of transfer operations for each pixel, and wherein said output module is configured to read out the integrated signal of the pixels after the predetermined integration time from the corresponding storage areas of said signal storage module.
[0015] The sensor according to the present invention thus consists at least of an array of pixels (Pixel Matrix), a signal storage module (called Shadow Signal Storage) and a transfer module associated with the pixel matrix and the shadow signal storage. The array of pixels is preferably a 2 dimensional array, wherein the pixels are arranged in several rows and columns, however the invention is also applicable to a pixel matrix with just one column. In this context it should be noted that in general the terms "row" and "column" may be exchanged.
[0016] Each row of pixels is connectable under the control of the control module to a corresponding row of the shadow signal storage via the transfer module. The transfer module itself is configured such that during the integration time, the pixel rows are scanned very fast to transfer the demodulated signal from the pixels to the shadow signal storage. In the shadow signal storage, which is preferably located outside of the pixel matrix, the signals of the single transfers are accumulated per pixel to improve the signal-to-noise ratio. During this transfer various functions may be processed, e.g. measuring and cutting the DC content of the demodulated signal.
[0017] The main advantage of this architecture resides in the fact, that the pixel storage itself does not need to store all the charges integrated during the entire integration phase. In fact, as the charges integrated in the pixel are transferred several times during the integration phase to the associated storage area of the shadow signal storage, the intra-pixel storage may be designed much smaller than with prior art imagers. Furthermore the pixels may be configured much simpler than with prior art sensors of the kind, because most of the functions of the sensor can be fulfilled during or after the transfer of the signals to the shadow signal storage by circuitry outside the pixel.
[0018] The circuitry for performing the different functions relating to the detection and prevention of the background saturation and enhancement of the dynamic range being located outside the pixel itself, this circuitry can accordingly be optimized without the constraints relating to the pixel design. The different functions as listed above can therefore be improved during development without changing the pixel layout. The pixel can have a higher fill factor and the pixel matrix can become smaller, which furthermore reduces the costs for the optics.
[0019] In a preferred embodiment of the invention, said array of pixels comprises n sub-arrays of pixels, and said transfer module and said output module each comprise n sub-modules, one of said sub-modules being associated with a respective one of said n sub-arrays of pixels. [0020] It will be noted that the array of pixels and said signal storage module may be arranged on different semiconductor chips and connected together by means of suitable circuitry. In a preferred embodiment however, the array of pixels and said signal storage module are arranged on a common semiconductor chip.
[0021] The present invention also relates to a method for operating the above described imager, comprising the steps of executing, during said predetermined integration time, a plurality of transfer operations for each pixel, each of said transfer operations for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module; and after the predetermined integration time, reading out the integrated signal of the pixels from the corresponding storage areas of said signal storage module.
[0022] In a preferred embodiment of the method and for an imager in which said pixels of said array of pixels are arranged in a matrix arrangement so as to form pixels rows and pixel columns, the signal transfer is configured such that during said predetermined integration time, for each pixel column, the signals from the corresponding pixels of the different pixel rows are cyclically transferred to the associated storage area of said signal storage module.
Brief Description of the Drawings
[0023] Further details and advantages of the present invention will be apparent from the following detailed description of several not limiting embodiments with reference to the attached drawings, wherein:
Fig. 1 shows the architecture of a time-of-flight sensor or camera;
Fig. 2 shows an illustration of the pixel matrix;
Fig. 3 shows a block diagram of one pixel;
Fig. 4 shows a simplified architecture of the pixel matrix together with control and readout circuits;
Fig. 5 shows a possible timing to operate the architecture of Fig. 4;
Fig. 6 shows a different timing to operate the architecture of Fig. 4;
Fig. 7 shows a simplified architecture of an embodiment of an imager according to the present invention;
Fig. 8-11 illustrate different timing methods of a preferred method for operating the architecture of Fig. 7;
Fig. 12 shows a simplified architecture of another embodiment of an imager according to the present invention.
Description of Preferred Embodiments
[0024] A TOF sensor has in principle to fulfill a large number of functions, such as demodulate incident modulated light; output quantities of the demodulated signal; measure background intensity; detect saturation; prevent background saturation; enhance dynamic range; correct errors like offset and gain mismatch; be highly sensitive; be small to reduce costs for chip and optics; and provide the images with high frame rate.
[0025] Some of these requirements conflict with each other. For instance the demodulation, the detection and prevention of background saturation and the enhancement of the dynamic range require some additional circuitry inside the pixel. This makes the pixel and the sensor chip larger or reduces the fill factor and the sensitivity. Especially the requirement of prevention of background saturation is usually solved by circuitry inside the pixel. This requirement is very important for many applications, because the incident background light can be much more powerful than the incident modulated light. Without an intra-pixel cut of the background-induced DC content, the storages could accumulate only very little signal energy besides very strong DC (background) energy, which makes the signal-to-noise ratio very bad.
[0026] Figure 1 shows the architecture of a state-of-art time-of-flight sensor or camera. Active light is generated by a light source 110 and radiated against an object or scene 120. Furthermore ambient light 130 is present and also radiates onto the scene 120. Both components are mixed and reflected. A portion of the reflected light reaches the optics 140 of the imager and is passed to the sensor's pixel matrix 150. [0027] Figure 2 shows an illustration of the pixel matrix 150. The pixels 160 are arranged in a matrix form so as to form pixel rows 190 and pixel columns 200.
[0028] Figure 3 shows a block diagram of one pixel 160. The incident light 230, which enters via the optics 140, reaches a stage 210 where it is detected and demodulated. This stage 210 has at least one output. Two or four outputs are usual in reported constructions.
[0029] These outputs are passed to a stage 220, where the demodulated signal is accumulated over several periods of the modulation signal. Stage 220 includes corresponding means therefor. Further, this stage includes means to pass the output signals to the common output bus 250 of one pixel column 200. The control signals to control the stages 210 and 220 are tapped from a control bus 240 associated to each pixel row 190.
[0030] Usually this stage 220 includes all further circuits that are required to fulfill the different functions relating to the detection and prevention of background saturation in order to enhance the dynamic range as mentioned above. The required layout space for that would be large. Large pixels however can cause the pixel matrix to become very large, if a certain lateral resolution is required. Consequently it is an aim of pixel design to make the stage 220 as small as possible regarding layout area.
[0031] Figure 4 shows a simplified architecture of the pixel matrix 150 together with control and readout circuits. Block 270 forms a row control circuit, which applies the control signals 240 to the pixel matrix 150. Block 280 forms a block that reads the common column output busses. Usually this block includes circuits like amplifiers (in case of charge readout), current sources (to drive source followers in voltage readout), readout control circuits, column multiplexers and/or column parallel ADCs. Usually this block is required just at the end of the accumulation of the signals inside the pixels, when these signals shall be read out as an image from the sensor and be passed to the post processing circuitry.
[0032] Figure 5 shows a possible state-of-art timing to operate the architecture of figure 4: The global shutter method. Axis 1 represents or corresponds to the row number of a certain pixel column, axis 2 to the time. The differently shaded zones represent the operating status of the respective pixel over the represented time frame (3: integration start; 4: integration end; 5: signal integration phase; 6: transfer signals out of chip (chip readout); 7: waiting for pixel readout; 8: pixel reset).
[0033] All the pixels of all the pixel rows leave the reset state 8 and start to accumulate photo-generated charge 5 at the same time 3. After a defined time, which is called "integration time", the time point 4 is reached and the accumulation of charge 5 is stopped for all pixels in common. The accumulated charge of each pixel is then still stored inside the pixel itself 7.
[0034] The integration time is in principle variable, the duration being usually varied in order to deal with varying incident light power ranges.
[0035] To read out the signals of the pixels, one pixel row is selected by the control signals of the row control circuit 270. The selected pixels of each column are then connected to the block 280 via the respective column busses 250. When the block 280 has received and passed the data of the selected pixels, the currently selected pixel row is disconnected from the column busses and a next row of pixels is selected for read out.
[0036] Figure 6 shows another possible state-of-art timing to operate the architecture of figure 4: the rolling shutter method.
[0037] Axis 1 corresponds again to the row number of a certain pixel, axis 2 to the time. The read out process is organized similar to that of the global shutter method: The pixel rows are read out 6 sequentially one after another. Again one pixel row is selected by the control signals of the row control circuit 270. The selected pixels of each column are thereby connected to the block 280 via the respective column busses 250. When the block 280 has received and passed the data of the selected pixels, the currently selected pixel row is disconnected from the column busses and a next row of pixels is selected for read out.
[0038] In contrast to the above described global shutter method, the charge accumulation 5 does not start at the same moment for all the pixels with the rolling shutter method. Rather the charge accumulation 5 starts row-specifically at a predefined time (the "integration time") before the readout 6 of the considered row. The reset of a considered row ends at the start of charge accumulation 5 of the considered row. The charge accumulation of some rows may thereby overlap in time with the read out of one selected row.
[0039] The integration time is usually equal for all pixel rows in one read out frame, but for the frame it is in principle variable. This is often used to deal with varying incident light power ranges.
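Purely for illustration, the row-specific timing of the rolling shutter method (Fig. 6) can be modelled as follows; the function name and the simple linear readout schedule are assumptions, not taken from the patent:

```python
def rolling_shutter_windows(n_rows, integration_time, row_time):
    """Per-row (start, readout) time pairs for a rolling shutter.

    Assumes row r is read out at r * row_time and that each row starts
    integrating exactly `integration_time` before its own readout, so the
    integration time is equal for all rows while the windows are staggered.
    """
    windows = []
    for r in range(n_rows):
        readout = r * row_time
        start = readout - integration_time
        windows.append((start, readout))
    return windows
```

The staggered windows are what allows the charge accumulation of some rows to overlap in time with the readout of the currently selected row.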
[0040] Figure 7 shows a simplified architecture of an embodiment of a sensor according to the present invention. The sensor consists at least of an array of pixels (Pixel Matrix), a signal storage module (called Shadow Signal Storage) and a transfer module associated with the pixel matrix and the shadow signal storage. The array of pixels is preferably a 2 dimensional array, wherein the pixels are arranged in several rows and columns, however the invention is also applicable to a pixel matrix with just one column. Each row of pixels is connectable to a corresponding row of the shadow signal storage via the transfer means. It should be noted that in general the terms "row" and "column" may be exchanged.
[0041] The transfer module is configured such that during the integration time, the pixel rows are scanned very fast to transfer the demodulated signal from the pixels to the shadow signal storage. In the shadow signal storage the signals of the single transfers are accumulated per pixel to improve the signal-to-noise ratio. During this transfer various functions may be processed, e.g. measuring and cutting the DC content of the demodulated signal.
[0042] The main advantage of this architecture resides in the fact, that the pixel storage itself does not need to store all the charges integrated during the entire integration phase. In fact, as the charges integrated in the pixel are transferred several times during the integration phase to the associated storage area of the shadow signal storage, the intra-pixel storage may be designed much smaller than with prior art imagers. Furthermore the pixels may be configured much simpler than with prior art sensors of the kind, because most of the functions of the sensor can be fulfilled during or after the transfer of the signals to the shadow signal storage by circuitry outside the pixel.
[0043] The circuitry for performing the different functions relating to the detection and prevention of the background saturation and enhancement of the dynamic range being located outside the pixel itself, this circuitry can accordingly be optimized without the constraints relating to the pixel design. The different functions as listed above can therefore be improved during development without changing the pixel layout. The pixel can have a higher fill factor and the pixel matrix can become smaller, which furthermore reduces the costs for the optics.
[0044] The preferred procedure for the charge transfer by cyclically or sequentially transferring the charges accumulated in the different pixels of a pixel column to the shadow signal storage is similar to "rolling shutter" in 2d imaging. One major difference is that for 3d imaging there are two signals overlaid on each other: the active modulated signal and the background signal. The background signal may be significantly larger than the active modulated signal. Nevertheless, the measure of the active modulated signal has to be integrated over a sufficient time to be large enough to reduce background-induced noise. In the case of just applying the rolling shutter method, the storages would saturate due to the strong background signal.
[0045] For this reason, the preferred method of the present invention focuses on the splitting of one long time capture into several short time captures. This is achieved by the charges in each pixel being transferred several times during one integration period. The main purpose of doing so is to reduce the size of the required in-pixel storages and non-photo-sensitive circuit parts. The required rolling and interleaved scan of the pixel rows is different to simple repeats of capturing frames in that
• the pixel functions (like preventing background saturation, measuring background intensity, detecting saturation, enhancing the dynamic range, and correcting errors like offset and gain mismatch) are performed outside the pixels with the help of pixel external circuits
• and the in-pixel-realized functions are only the demodulation, the accumulation, the output and the reset
• and in the sense of the basic idea: the duration of one cycle is preferably shorter than that of a usual frame capture
[0046] During the integration time, the pixel rows are scanned very fast to transfer the demodulated signal from the pixels to the shadow signal storage. In the shadow signal storage the signals of the single transfers are accumulated per pixel to improve the signal-to-noise ratio. During this transfer various functions may be processed, e.g. measuring and cutting the DC content of the demodulated signal.
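The interleaved sub-cycle scan can be sketched, under stated assumptions, as follows; `pixel_matrix_step` is a hypothetical callback standing in for the pixel hardware and is not part of the patent:

```python
def capture_frame(pixel_matrix_step, n_rows, n_cols, n_subcycles):
    """Sketch of the interleaved sub-cycle scan of one frame capture.

    After each sub-cycle the small in-pixel signals are transferred row by
    row into the external shadow storage, where they accumulate; each row's
    in-pixel storage is reset after its transfer, so it never has to hold a
    full frame's charge. `pixel_matrix_step(cycle, row)` returns the
    demodulated signals of one row for one sub-cycle.
    """
    shadow = [[0.0] * n_cols for _ in range(n_rows)]
    for cycle in range(n_subcycles):
        for row in range(n_rows):
            row_signals = pixel_matrix_step(cycle, row)  # transfer phase 11
            for col in range(n_cols):
                shadow[row][col] += row_signals[col]     # accumulate externally
            # the in-pixel storage of this row is reset here (phase 8)
    return shadow  # read out over the chip boundary only once, after all cycles
```

The signal-to-noise ratio improves with the accumulated total, while the per-cycle charge in the pixel stays small even under strong ambient light.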
[0047] This basic idea requires a fast continuous readout of the intra-pixel storages, so that the pixel internal storages can be made small and do not saturate even in front of high ambient light power. The fast continuous readout can be realized as illustrated in Fig. 8.
[0048] Axis 1 represents or corresponds to the row number of a certain pixel column, axis 2 to the time (like in the figures 5 and 6). The differently shaded zones represent the operating status of the respective pixel over the represented time frame (3: integration start; 4: integration end; 5: signal integration phase; 6: transfer signals out of chip (chip readout); 7: waiting for pixel readout; 8: pixel reset; 9: sub-cycle period; 10: one row transfer time; 11 :signal transfer from pixel to shadow signal storage).
[0049] All pixels of all pixel rows leave the reset state 8 and start demodulation and accumulation of signal charge 5 at the same time 3. After a short time a first row of pixels is selected to transfer the pixel signals to the pixel external storages 11.
[0050] Prior to the first transfer the shadow signal storage is reset.
[0051] To transfer the signals of the pixels, the pixel row is selected by the control signals of the row control circuit. The selected pixels of each column are thereby connected to the transfer means via the column busses. When the transfer means have received the data from the selected pixels and transferred them to the pixel external storages, the currently selected pixel row is disconnected from the column busses and a next row of pixels is selected for the transfer.
[0052] The pixels of the non-selected rows continue to demodulate and accumulate.
[0053] Prior to a new accumulation phase of signal charge 5, a reset phase 8 is inserted for the considered pixel row, to empty the pixel internal storages of the considered pixel row. The accumulation of the demodulated signal charge starts with or after the end of this reset.
[0054] After the last row is deselected, the cycle may start again. The newly transferred signals are then added to the pixel external storages, which now already contain the signals of the previous cycles.
[0055] The last cycle, at the end of the integration phase 4, is different in that there is a common stop 4 of accumulation of demodulated signal charge for the pixels of all rows. The time phase between the stop of the demodulation/accumulation phase 5 and the readout phase 11 to the pixel external storages of a specific row is a time phase 7 where the pixels just store the accumulated signals.
[0056] After all relevant signals are transferred to the pixel external storages, the frame is read out to the post-processing circuits. This is illustrated as phases 6 in Fig. 8. Preferably this transfer is the one where the signal data passes the chip boundary.
[0057] The common start 3 of the demodulation/accumulation phase 5 of the first cycle and the common stop 4 of the demodulation/accumulation phase 5 of the last cycle advantageously reduce rolling shutter artifacts for moving scenes. In principle, however, the scheme also works without this common start 3 and/or stop 4.
[0058] The transfer of the pixel signals to the pixel external storages may be combined with procedures that fulfill the functions mentioned above, such as preventing background saturation, measuring background intensity, detecting saturation, enhancing dynamic range, and correcting errors like offset and gain mismatch.
[0059] Preventing background saturation is preferably done by cutting the DC content from the signal. The outputs of the pixel can for instance be organized and arranged so as to build differential pairs. If the shadow signal storage works in differential mode, the DC content merely builds the input common mode, which is not considered as signal in a differential architecture. The storage thus adds only the differential contents, which constitute the AC signal. The offsets, which are a measure for the background (DC) content, may be dropped here.

[0060] During normal operation of the imager, many cycles of transfer and accumulation may be required to obtain sufficient signal power and signal-to-noise ratio. Circuit errors such as offsets due to mismatch may accumulate accordingly and need to be compensated. One option to compensate offset errors is the "chopper technique". The chopped path can start with an inversion of the demodulator clocks before the integration phase, so that the differential pixel outputs show an inverted differential signal between the chopper phases.
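The offset cancellation achieved by the chopper technique can be illustrated with a minimal numerical sketch (function names and values are hypothetical, not taken from the patent): a constant circuit offset adds to every sub-cycle's differential output, but when the demodulator clocks are inverted for alternating sub-cycles and the storage re-inverts those contributions before adding, the offset cancels while the signal accumulates.

```python
# Hedged sketch of offset cancellation by chopping (illustrative values only).
# Each sub-cycle yields: differential signal s plus a constant circuit offset e.
# In the "inverted" chopper phase the demodulated signal flips sign (-s + e);
# the shadow signal storage re-inverts that contribution, turning e into -e.

def accumulate_chopped(s, e, n_subcycles):
    total = 0.0
    for k in range(n_subcycles):
        inverted = (k % 2 == 1)                       # alternate chopper phase
        measured = (-s if inverted else s) + e        # pixel output incl. offset
        total += -measured if inverted else measured  # re-invert before adding
    return total

# With an even number of sub-cycles the offset e cancels pairwise, so the
# accumulated result is n_subcycles * s regardless of e.
```

Each phase pair contributes (s + e) + (s - e) = 2s, which is why the text below stresses that the number of transfers should be equal between the chopper phases.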
[0061] This demodulator clock inversion can be specific to each pixel row, because the time point of the demodulation start is usually specific to each row. The end point of the chopped path should lie inside the shadow signal storage.
[0062] To get the best error reduction, the number of transfers should be the same for the phases of one chopper system (see the example of Fig. 11). Because the chopper clock itself also generates a residual offset due to clock feed-through mismatch, it can be a good choice not to change the chopper phase after every transfer cycle. It may also be necessary to enclose the chopper switches by a further chopper branch.
[0063] In an alternative embodiment to the row-specific demodulator clock inversion, the inversion can also be synchronous for all pixel rows. In that case the first rows have to wait for the last row to be processed with the old chopper phase before the new chopper phase is activated. This can help to find a compromise between a complex architecture and the most efficient integration.
[0064] In yet another embodiment, the chopper path start point can also lie in or before the generation of the actively radiated light.
[0065] A measure for the background intensity can be required as an additional signal or for post-calibration of demodulator errors. The accuracy requirements may be lower than the corresponding requirements for the AC measurement, so it makes sense to use a separate storage for the measurement of the background intensity. During each pixel-to-shadow-signal-storage transfer, the offset of a pixel's outputs is a measure for the background intensity of that pixel during the last integration sub-cycle. These single measures can be accumulated in a second pixel-external storage. This accumulation may also be combined with some kind of compression to enlarge the usable input range with regard to the power of background light.
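The separation of the AC signal from the background measure can be sketched as follows (a minimal illustration with made-up sample values; the helper name is hypothetical): with differential pixel outputs, the difference carries the demodulated AC signal, while the common mode carries the background (DC) content and goes to the second, less accurate storage.

```python
# Illustrative split of differential pixel outputs (out_p, out_n) into the
# AC signal (difference) and a background/DC measure (common mode).

def split_ac_dc(out_p, out_n):
    ac = out_p - out_n            # goes to the shadow signal storage
    dc = 0.5 * (out_p + out_n)    # goes to a second pixel-external storage
    return ac, dc

ac_store = 0.0
dc_store = 0.0
for out_p, out_n in [(1.8, 1.2), (1.9, 1.1), (1.7, 1.3)]:  # made-up sub-cycles
    ac, dc = split_ac_dc(out_p, out_n)
    ac_store += ac
    dc_store += dc   # could be compressed to extend the background range
```

In this sketch the accumulated DC measure grows with every sub-cycle, which is why the text mentions compression to keep a wide range of background light powers representable.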
[0066] Enhancing dynamic range: For most applications it is more important to calculate the phase from the demodulated signal than to have an exact representation of the signal amplitude. This fact is often combined with the requirement to manage a wide range of incident active light power. One solution to improve the sensor in this regard is to enable a pixel-autonomous stop of signal accumulation. In the context of the present invention this can be done by adding the following procedure and circuitry: during each single transfer phase, the fill content of the shadow signal storage is checked. If the imminent signal transfer from the pixel would cause an overflow of the shadow signal storage, the transfer is skipped. The pixel signal of that sub-cycle can be deleted in that case.
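The conditional transfer described above can be sketched as follows (the storage capacity and signal values are assumptions for illustration): before each transfer the fill content is checked, and the transfer is skipped if it would overflow the shadow signal storage.

```python
# Sketch of the pixel-autonomous accumulation stop (hypothetical storage model):
# a transfer is skipped when adding the pixel's sub-cycle signal would overflow
# the shadow signal storage; the skipped sub-cycle signal is discarded.

FULL_SCALE = 10.0  # assumed shadow-storage capacity (illustrative)

def try_transfer(storage, pixel_signal, full_scale=FULL_SCALE):
    if storage + pixel_signal > full_scale:
        return storage, False          # skip: transfer would overflow
    return storage + pixel_signal, True

storage, transfers = 0.0, 0
for sig in [3.0, 3.0, 3.0, 3.0]:       # made-up sub-cycle signals
    storage, ok = try_transfer(storage, sig)
    transfers += ok
# Accumulation stops at 9.0 after three transfers; the fourth is skipped.
```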
[0067] Getting a reduced measure for the power of the incident active light: A less accurate measure for the power of the incident active light can still be required as an additional signal or for post-calibration of demodulator errors. When any transfers are skipped, the output signal is no longer correlated with the incident light power. One way to form a measure of the incident active light power is to accumulate the sub-cycle integration times of the transfers that occurred. The ratio between the final signal strength and the sum of the real integration time is then a direct measure for the active incident light power.
[0068] If the intra-pixel conditions are equal between the different sub-cycles, it is also sufficient to count the number of transfers that occurred. The sum of the real integration time can then be calculated as (count × sub-cycle integration time).
[0069] If the shadow signal storage 'sees' a linear signal increase over time, then there is no further transfer after a skipped one. In that case it is also sufficient to just store the time of the last transfer that occurred. The sum of the real integration time can be calculated from this time and the well-known applied control timing.
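The arithmetic of paragraphs [0067] and [0068] can be written out in a short sketch (the sub-cycle integration time is an assumed value): the final signal divided by the summed integration time of the transfers that actually occurred yields the light-power measure.

```python
# Sketch of recovering a measure of incident active light power when some
# transfers are skipped; SUBCYCLE_T is an assumed, illustrative value.

SUBCYCLE_T = 50e-6   # assumed sub-cycle integration time in seconds

def effective_power(final_signal, occurred_transfers, t_sub=SUBCYCLE_T):
    real_integration_time = occurred_transfers * t_sub  # count * sub-cycle time
    return final_signal / real_integration_time

# e.g. 9.0 signal units accumulated over 3 occurred transfers gives
# 9.0 / (3 * 50e-6) = 60000 signal units per second of real integration.
```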
[0070] Detecting pixel saturation: It is very important to detect pixel saturation, because a saturated signal induces a large error. If it were simply processed further, the detection of that error could become impossible. This means that a potential saturation of any storage needs to be detectable from the final output signal.
[0071] In the context of this invention there are in principle two storages per pixel: the intra-pixel storage for the signals of each sub-cycle and the shadow signal storage for the accumulated signal of the frame. The saturation of the shadow signal storage is detectable from the output signal by a comparison with the allowed signal range. The saturation of the intra-pixel storage may not be detectable without a special solution. One solution to detect this kind of saturation is to check the pixel output range during each transfer. If intra-pixel saturation is detected, the transfer is either skipped or a flag is set for the output signal in the shadow signal storage.
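The two handling options for intra-pixel saturation (skip the transfer, or transfer and set a flag) can be sketched as follows; the valid output range and sample values are assumptions for illustration.

```python
# Sketch of intra-pixel saturation handling: during each transfer the pixel
# output range is checked; a saturated value either skips the transfer or
# marks the accumulated result with a flag.

PIXEL_MAX = 4.0   # assumed valid pixel output range (illustrative)

def transfer_with_check(storage, flag, pixel_out, skip_on_sat=True):
    if abs(pixel_out) >= PIXEL_MAX:          # intra-pixel saturation detected
        if skip_on_sat:
            return storage, flag             # option 1: skip this transfer
        return storage + pixel_out, True     # option 2: transfer but flag it
    return storage + pixel_out, flag

storage, flag = 0.0, False
for out in [2.0, 4.0, 1.5]:                  # made-up outputs; 4.0 is saturated
    storage, flag = transfer_with_check(storage, flag, out)
# With skip_on_sat=True, storage ends at 3.5 and the flag stays cleared,
# because the saturated sub-cycle never reached the shadow signal storage.
```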
[0072] To further enhance the dynamic range and to enlarge the capable range of light power above the upper limit, the following variant may be used. The sub-cycle integration time is reduced for some or many sub-cycles to prevent intra-pixel saturation during one sub-cycle. To keep the overall sub-cycle time in the same range (which might be necessary to cycle through all rows), the intra-pixel reset phase may be enlarged.
[0073] If this modification is done for just some sub-cycles, it makes sense to combine it with a conditional transfer as disclosed above, where a transfer is skipped in case of a saturated pixel. The shadow signal storage then contains only valid data: for very strong light this data originates solely from the sub-cycles of reduced integration time.
[0074] Compensating channel gain mismatch and correcting errors like offset and gain mismatch: The compensation of errors of one differential output is discussed hereinabove. When a pixel comprises more than one differential output, the gain mismatch needs to be corrected in addition. Similar to the described chopping of one differential path, the differential paths themselves may be exchanged. This may also be done in a "flat" way, meaning that the single-ended paths may be exchanged. Treating the issue in this way also enables the correction of an odd number of pixel channels.
[0075] With the help of the above embodiment it is also possible to reduce the number of physical paths per pixel. The reduced group of physical intra-pixel paths is processed in such a way that it rebuilds all logical paths serially over a sufficient number of sub-cycles. The shadow signal storage still needs a physical storage for each logical one.
[0076] According to what has been said above, some of the functions of the sensor are realized by the kind of transfer: decisions are taken whether to transfer or not. Other functions are realized by storing further information prior to or during the AC-signal transfer. Still other functions are realized by exchanging physical paths between sub-cycles.
[0077] To reduce the row access time, it can help to arrange the functions in a pipeline architecture. For example, a first stage could receive the pixel signal, a second stage could decide whether to add the signal to the shadow signal storage, and a third stage could perform the addition of the signal to the shadow signal storage. Further stages can fulfill the other functions. The stages can be placed in a common block for each column, inside a pixel-specific block, or in a block that is common to a number of pixels.
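The three example stages can be sketched as follows (stage names, threshold, and values are illustrative assumptions; the stages are shown sequentially for clarity, whereas in hardware they would operate concurrently on different rows to shorten the row access time):

```python
# Illustrative three-stage transfer pipeline: receive, decide, add.

def stage1_receive(pixel_out):
    return pixel_out                         # sample/hold of the pixel output

def stage2_decide(signal, storage, full_scale=10.0):
    return storage + signal <= full_scale    # the conditional-transfer check

def stage3_add(signal, storage):
    return storage + signal                  # commit to shadow signal storage

storage = 0.0
for pixel_out in [2.0, 5.0, 4.0]:            # made-up row signals
    sig = stage1_receive(pixel_out)
    if stage2_decide(sig, storage):
        storage = stage3_add(sig, storage)
# storage ends at 7.0: the third signal (4.0) is rejected by stage 2,
# since 7.0 + 4.0 would exceed the assumed full scale of 10.0.
```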
[0078] The transfer itself may be achieved by charge readout or by voltage readout of the pixel.
[0079] To further reduce the sub-cycle time, it can be helpful to split the pixel matrix into several parts (see Fig. 12). A simple split would be a horizontal one into two parts. Each part can be considered independent of the other if it has its own processing and storage means. By doing so, the number of rows per split part is reduced and consequently the overall time of cycling through the rows is reduced.
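The benefit of the split is simple arithmetic, sketched below with assumed numbers (row count and row selection time are illustrative, not from the patent): halving the rows per independent part halves the time to cycle through them, and hence the minimum sub-cycle time.

```python
# Back-of-the-envelope check of the matrix split: the minimum time to cycle
# through all rows is rows * row_selection_time.

def cycle_time(rows, row_selection_time):
    return rows * row_selection_time

t_row = 2e-6                           # assumed row selection time, 2 us
t_unsplit = cycle_time(200, t_row)     # whole matrix: 200 rows
t_split = cycle_time(100, t_row)       # one half after a horizontal split
# t_split is half of t_unsplit, so each independent part can run shorter
# sub-cycles than the unsplit matrix could.
```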
[0080] As mentioned hereinabove in relation to the description of the chopper system, the number of transfers should be the same for the phases of one chopper system. This is difficult to achieve if some transfers are skipped due to conditional transfer. As a consequence, the residual error can be larger than intended.
[0081] One solution can be to check this equality for each pixel. This can be done with the help of counters, which can be digital or analogue. There can be a counter for each phase, or there can be an up-counter and a down-counter: one phase counts up, the other one down. There can be reset or refresh phases to keep the accuracy over a long frame capture time, especially in the case of an analogue counter. If the equality is violated, a flag can be set.
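The up/down-counter variant can be sketched as follows (function name and the way occurred transfers are recorded are illustrative assumptions): one chopper phase increments a counter, the other decrements it, and a non-zero result at the end flags an unequal number of transfers.

```python
# Sketch of the transfer-equality check between two chopper phases using a
# single up/down counter: zero at the end means equal transfer counts.

def check_phase_balance(transfers_phase_a, transfers_phase_b):
    counter = 0
    for occurred in transfers_phase_a:
        if occurred:
            counter += 1                 # phase A counts up
    for occurred in transfers_phase_b:
        if occurred:
            counter -= 1                 # phase B counts down
    return counter == 0                  # False -> set the inequality flag

# Equal number of occurred transfers in both phases -> balanced:
balanced = check_phase_balance([True, True, False], [True, False, True])
# One phase skipped a transfer the other did not -> flag the pixel:
flagged = not check_phase_balance([True, True, True], [True, True, False])
```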
[0082] To improve the likelihood of an equal number of transfers, different transfer-skipping criteria can be used between the first and the other phases: a more conservative one for the first phase and a more relaxed one for the other phases. In addition, a transfer in the other phases is only done when there was already a transfer in the first phase. The definitions of 'more conservative' and 'more relaxed' can, for instance, be given by defining corresponding thresholds for the values to be checked, e.g. for the test for a saturated pixel or shadow signal storage.
[0083] It will be appreciated that the shadow signal storage can be realized as an analogue, digital or mixed analogue-digital storage module. The readout of the shadow signal storage ("shadow readout") can in principle be organized like a usual readout of a pixel matrix without a shadow signal storage. The shadow readout (Fig. 9: 6) of one row can in principle start directly after the last transfer (Fig. 9: 11) of that row.
[0084] A problem with doing so could be that the intended sub-cycle time is very short while the possible speed of the shadow readout over the chip boundary is usually much slower, so that the gap between transfer and shadow readout would increase from row to row. The overlapping may generate interference noise, which is then different between the rows. An alternative solution is therefore to wait with the shadow readout until the last transfer of the last row is finished (see Fig. 10).
[0085] The pattern 7 in Figs. 9 and 10 illustrates a phase where the pixels have already stopped demodulation and accumulation, just keep their current demodulated sub-signals, and are waiting for sub-signal transfer. As mentioned hereinabove, this helps to reduce rolling shutter artifacts for moving scenes. In principle, however, the scheme also works without the common start and/or stop.
[0086] The duration of the demodulation/accumulation phase 5 shall be referred to as the "sub-cycle integration time". The difference between the cycle duration 9 and the sub-cycle integration time is required for row readout 11 and reset 8. During that time no demodulated signal charge is accumulated in the considered pixel, so it is referred to here as "non-sensitive" time. To be most effective regarding the use of the system energy (for active light and demodulation), it is an aim to reduce the ratio of the "non-sensitive" time to the cycle time to an acceptable minimum. In addition, the sub-cycle integration time must not be too large, because otherwise large in-pixel storages would be required to prevent saturation in case of strong ambient light.
[0087] The way to satisfy both requirements at once is to reduce mainly the non-sensitive time and, to a lesser extent, the cycle duration 9. As the "non-sensitive" time phase consists of the row readout 11 and reset 8 phases, both are the object of timing optimization.
[0088] The cycle duration 9 is greater than or equal to the product of the number of rows to be read out and the row selection time 10 required for the row readout 11. Bringing both together, the most effective measure is to reduce the row selection time 10 for the row readout 11.
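The timing relations of paragraphs [0086]-[0088] can be written out numerically (all values are assumptions for illustration): the cycle duration is bounded below by rows × row selection time, and the non-sensitive fraction of a cycle is the part not spent integrating.

```python
# Numerical sketch of the timing constraints; row count, row selection time
# and integration time are illustrative assumptions.

def min_cycle_duration(n_rows, t_row_select):
    # Lower bound on the cycle duration 9: every row needs its selection slot.
    return n_rows * t_row_select

def non_sensitive_ratio(cycle_duration, t_integration):
    # Fraction of each cycle spent on row readout and reset instead of
    # accumulating demodulated signal charge.
    return (cycle_duration - t_integration) / cycle_duration

t_cycle = min_cycle_duration(100, 2e-6)        # 100 rows at 2 us -> 200 us
ratio = non_sensitive_ratio(t_cycle, 180e-6)   # 180 us of integration
# About 10 % of each cycle is then "non-sensitive" time for readout and reset.
```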
[0089] On the other hand, for a given sensor design, it may be an aim to further increase the maximum applicable light power. This is possible by shortening the sub-cycle integration time, especially when the cycle time cannot be made shorter. As a consequence, the reset phase 8 prior to the demodulation/accumulation phase 5 becomes longer. As this solution decreases the efficiency regarding the use of system energy, it should only be considered as an option for a given sensor design.
[0090] Besides the discussed aspects regarding pixel saturation, it may be a further aim to increase the signal-to-noise ratio. For given light power densities of the incident active modulated light and the incident ambient light, the signal-to-noise ratio improves with the sum of the sub-cycle integration times. The sum of the sub-cycle integration times is almost proportional to the number of sub-cycles. So the number of sub-cycles may be adapted to find a sufficient compromise between the noise requirements and the overall frame capture time.
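As a rough illustration of this trade-off, under the common assumption that photon shot noise dominates (an assumption of this sketch, not a statement of the patent), the signal grows linearly with the summed integration time while the shot noise grows with its square root, so the SNR improves roughly with the square root of the number of sub-cycles:

```python
# Illustrative SNR scaling only, assuming shot-noise-limited operation.

import math

def snr_scaling(n_subcycles, snr_single):
    # SNR after n sub-cycles, relative to the SNR of a single sub-cycle.
    return snr_single * math.sqrt(n_subcycles)

# Under this assumption, quadrupling the number of sub-cycles doubles the
# SNR, at the cost of a four times longer overall frame capture time.
```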
[0091] An enhancement of the applicable intra-scene dynamic range of the incident active light power density is further possible by a combination of different settings according to the discussed aspects. In addition, measures like pixel saturation or low amplitude are of importance when building a common frame. The signals from different cycles must only be added to the pixel external storages if the corresponding measures mark the signal of the considered pixel and cycle as valid.

Claims
1. An imager configured to operate according to the time-of-flight principle, comprising
an illumination unit for illuminating a scene with intensity-modulated light;
an array of pixels, each of said pixels configured for integrating charge induced therein by light impinging onto said pixel and for converting said light into a demodulated pixel signal,
an output module configured to read out the signal of the pixels after a predetermined integration time; and
a control module for controlling the operation of the pixels of said pixel array and the output module;
characterized by a signal storage module associated to said array of pixels and a transfer module for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module, wherein said transfer module is configured for executing, during said predetermined integration time, a plurality of transfer operations for each pixel, and wherein said output module is configured to read out the integrated signal of the pixels after the predetermined integration time from the corresponding storage areas of said signal storage module.
2. The imager according to claim 1, wherein said pixels of said array of pixels are arranged in a matrix arrangement so as to form pixel rows and pixel columns, and wherein, for each pixel column, the signals from the corresponding pixels of the different pixel rows are sequentially transferred to the associated storage area of said signal storage module.
3. The imager according to any one of claims 1 to 2, wherein said array of pixels comprises n sub-arrays of pixels and wherein said transfer module and said output module each comprises n sub-modules, one of said sub-modules being associated with a respective one of said n sub-arrays of pixels.
4. The imager according to any one of claims 1 to 3, wherein said array of pixels and said signal storage module are arranged on a common semiconductor chip.
5. A method for operating an imager, said imager comprising an illumination unit for illuminating a scene with intensity-modulated light; an array of pixels, each of said pixels configured for integrating charge induced therein by light impinging onto said pixel and for converting said light into a demodulated pixel signal; an output module configured to read out the signal of the pixels after a predetermined integration time; a control module for controlling the operation of the pixels of said pixel array and the output module; a signal storage module associated to said array of pixels and a transfer module for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module, said method comprising the steps of
executing, during said predetermined integration time, a plurality of transfer operations for each pixel, each of said transfer operations for transferring a signal from said pixels of said array of pixels into a corresponding storage area of said signal storage module; and
after the predetermined integration time, reading out the integrated signal of the pixels from the corresponding storage areas of said signal storage module.
6. The method according to claim 5, wherein said pixels of said array of pixels are arranged in a matrix arrangement so as to form pixel rows and pixel columns, and wherein, during said predetermined integration time, for each pixel column, the signals from the corresponding pixels of the different pixel rows are cyclically transferred to the associated storage area of said signal storage module.
7. The method according to claim 5 or 6, wherein the integration time starts and ends simultaneously for all the pixels of said pixel array.
PCT/EP2010/062202 2009-08-21 2010-08-20 Time-of-flight sensor WO2011020921A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP09010770.7 2009-08-21
EP09010770 2009-08-21

Publications (1)

Publication Number Publication Date
WO2011020921A1 true WO2011020921A1 (en) 2011-02-24

Family

ID=42953847

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/062202 WO2011020921A1 (en) 2009-08-21 2010-08-20 Time-of-flight sensor

Country Status (1)

Country Link
WO (1) WO2011020921A1 (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5856667A (en) * 1994-11-14 1999-01-05 Leica Ag Apparatus and method for detection and demodulation of an intensity-modulated radiation field
WO1998010255A1 (en) 1996-09-05 1998-03-12 Rudolf Schwarte Method and device for determining the phase- and/or amplitude data of an electromagnetic wave
US6825455B1 (en) * 1996-09-05 2004-11-30 Rudolf Schwarte Method and apparatus for photomixing
EP1152261A1 (en) 2000-04-28 2001-11-07 CSEM Centre Suisse d'Electronique et de Microtechnique SA Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves
US20060192938A1 (en) * 2003-02-03 2006-08-31 National University Corporation Shizuoka University Distance image sensor
US20070057209A1 (en) * 2004-09-17 2007-03-15 Matsushita Electric Works, Ltd. Range image sensor
EP1748304A1 (en) * 2005-07-27 2007-01-31 IEE International Electronics & Engineering S.A.R.L. Method for operating a time-of-flight imager pixel
WO2007014818A1 (en) 2005-07-27 2007-02-08 Iee International Electronics & Engineering S.A. Method for operating a time-of-flight imager pixel
JP2007178314A (en) * 2005-12-28 2007-07-12 Institute Of Physical & Chemical Research Three-dimensional image acquisition method using solid imaging device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2587276A1 (en) * 2011-10-25 2013-05-01 Samsung Electronics Co., Ltd. 3D image acquisition apparatus and method of calculating depth information in the 3D image acquisition apparatus
US9418425B2 (en) 2011-10-25 2016-08-16 Samsung Electronic Co., Ltd. 3D image acquisition apparatus and method of calculating depth information in the 3D image acquisition apparatus
WO2014138985A1 (en) * 2013-03-15 2014-09-18 Novatel Inc. System and method for heavy equipment navigation and working edge positioning using an image acquisition device that provides distance information
US9957692B2 (en) 2013-03-15 2018-05-01 Hexagon Technology Center Gmbh System and method for heavy equipment navigation and working edge positioning using an image acquisition device that provides distance information
CN113497903A (en) * 2020-04-03 2021-10-12 爱思开海力士有限公司 Image sensing device and operation method thereof


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10743191; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10743191; Country of ref document: EP; Kind code of ref document: A1)