WO2022128985A1 - Time-of-flight image sensor circuitry and time-of-flight image sensor circuitry control method


Info

Publication number
WO2022128985A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
depth data
image sensor
depth
time
Prior art date
Application number
PCT/EP2021/085600
Other languages
French (fr)
Inventor
Sean CHARLESTON
Original Assignee
Sony Semiconductor Solutions Corporation
Sony Depthsensing Solutions Sa/Nv
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation, Sony Depthsensing Solutions Sa/Nv
Priority to CN202180082922.XA (CN116635748A)
Priority to EP21836157.4A (EP4264323A1)
Publication of WO2022128985A1


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/32 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar

Definitions

  • the present disclosure generally pertains to time-of-flight image sensor circuitry and a time-of-flight image sensor circuitry control method.
  • time-of-flight (ToF) devices for measuring a depth of a scene are generally known.
  • a distinction may be made between indirect ToF (iToF), direct ToF (dToF), spot ToF, or the like.
  • in dToF, a roundtrip delay of light emitted from the dToF device is measured directly, i.e. the time in which the light travels from the light source and gets reflected back onto an image sensor of the dToF device. Based on this time, taking into account the speed of light, the depth may be estimated.
  • in spot ToF, a spotted light pattern may be emitted from a spotted light source.
  • the idea of spot ToF is to measure a displacement and/or a deterioration of the light spots, which may be indicative of the depth of the scene.
  • spot ToF can be used in combination with iToF (explained further below), i.e. a spotted light source is used (as spot ToF component) in combination with a CAPD (current assisted photonic demodulator) sensor (iToF component).
  • in iToF, modulated light may be emitted based on a modulation signal which is sent to a light source, wherein the same or a similar modulation signal may be applied to a pixel, such that a phase shift of the modulated signal can be measured based on the reflected modulated light. This phase shift may be indicative of the distance.
  • a maximum measurement distance may be present, which is generally known as the unambiguous distance, and integer multiples or fractions of the unambiguous (real) distance may be measured as well.
  • for example, the unambiguous distance may roughly correspond to one and a half meters, but if the scene (or object) is one meter away, fifty centimeters or two meters may also be a result of the measurement.
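  • as an illustration of the wrapping behaviour described above, the following minimal Python sketch (illustrative only, not part of the claimed circuitry; it assumes the commonly used relation d_unambiguous = c / (2 · f_mod)) lists candidate true depths that all produce the same wrapped measurement:

```python
# Minimal sketch of iToF wrapping, assuming the commonly used relation
# d_unambiguous = c / (2 * f_mod); all names below are illustrative only.
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum depth measurable without wrapping at modulation frequency f_mod."""
    return C / (2.0 * f_mod_hz)

def wrap_candidates(d_measured_m: float, f_mod_hz: float, n: int = 3) -> list[float]:
    """True depths that would all yield the same wrapped measurement."""
    d_u = unambiguous_range(f_mod_hz)
    return [d_measured_m + k * d_u for k in range(n)]

if __name__ == "__main__":
    f_mod = 100e6  # 100 MHz gives roughly one and a half meters of unambiguous range
    print(f"unambiguous range: {unambiguous_range(f_mod):.2f} m")
    print("candidates for a 1.0 m reading:", wrap_candidates(1.0, f_mod))
```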
  • the disclosure provides time-of-flight image sensor circuitry comprising: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
  • the disclosure provides a time-of-flight image sensor circuitry control method for controlling a time-of-flight image sensor circuitry including an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the method further comprises: applying a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; applying a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generating resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
  • Fig. 1 depicts a block diagram of a known iToF dual-frequency measurement
  • Fig. 2 depicts an example of a phase measurement which is subject to a depth ambiguity
  • Fig. 3 depicts an embodiment of an imaging unit according to the present disclosure
  • Fig. 4 depicts a further embodiment of an imaging unit according to the present disclosure
  • Fig. 5 depicts a further embodiment of an imaging unit according to the present disclosure
  • Fig. 6 depicts a further embodiment of an imaging unit according to the present disclosure
  • Fig. 7 depicts a further embodiment of an imaging unit according to the present disclosure
  • Fig. 8 illustrates an embodiment of a two-shot measurement according to the present disclosure in a block diagram
  • Fig. 9 illustrates an embodiment of a one-shot measurement according to the present disclosure in a block diagram
  • Fig. 10 depicts an embodiment of a time-of-flight image sensor circuitry control method according to the present disclosure in block diagram
  • Fig. 11 depicts a further embodiment of a time-of-flight image sensor circuitry control method according to the present disclosure in a block diagram
  • Fig. 12 shows an illustration of index unwrapping according to the present disclosure
  • Fig. 13 depicts a schematic illustration for explaining how to combine spot-ToF with an imaging unit according to the present disclosure
  • Fig. 14 depicts a further schematic illustration for explaining how to combine spot-ToF with an imaging unit according to the present disclosure
  • Fig. 15 depicts an embodiment of an illumination pattern according to the present disclosure
  • Fig. 16 illustrates an embodiment of a ToF imaging apparatus
  • Fig. 17 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram
  • Fig. 18 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram
  • Fig. 19 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram
  • Fig. 20 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram
  • Fig. 21 is a block diagram depicting an example of schematic configuration of a vehicle control system.
  • Fig. 22 is a diagram of assistance in explaining an example of installation positions of an outside-vehicle information detecting section and an imaging section.
  • indirect ToF cameras are generally known. However, these known cameras may be limited in a maximum measurement distance without ambiguity (also known as “unambiguous distance”, after which the depth “wraps” back to zero).
  • the cause for this wrapping may be a cyclic modulation signal which is applied to a pixel (for readout) and to a light source for emitting light according to the modulation signal (i.e. modulated light is emitted, as generally known).
  • known iToF devices may use a dual-frequency iToF method 1, as depicted in the block diagram of Fig. 1.
  • four captures are performed at a first frequency.
  • four demodulation signals may be applied to an iToF pixel which are phase-shifted with respect to each other.
  • the first frequency may be higher or lower than a second frequency.
  • a depth is computed based on the four captures of 2 by determining a phase difference of the four captures of the first frequency.
  • a scene is imaged sequentially while varying the modulation frequency.
  • for example, a scene might first be imaged with a modulation frequency of 40 MHz and then with a frequency of 60 MHz.
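  • purely as a hedged illustration of such a four-capture measurement (the exact computation of Fig. 1 is not reproduced here), the phase and the wrapped depth may be obtained from four phase-shifted captures in the commonly used way:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_from_captures(c0: float, c90: float, c180: float, c270: float) -> float:
    """Phase of the returned modulated light from four phase-shifted captures.

    I and Q are the two differential components; the arctangent of their ratio
    gives the phase shift accumulated over the light round trip.
    """
    i = c0 - c180
    q = c90 - c270
    return math.atan2(q, i) % (2.0 * math.pi)

def depth_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    """Wrapped depth corresponding to a measured phase at modulation frequency f_mod."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)
```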
  • an unwrapped depth is computed using an unwrapping technique, such as an index dual or mixed dual unwrapping technique, for example combined unwrapping (for example the NCR (New Chinese Remainder) algorithm), indexing unwrapping, or the like.
  • in combined unwrapping, two (or more) measurements with a (relatively) high frequency each are carried out, such as at 60 MHz and 100 MHz.
  • the measurements may be simultaneously unwrapped and combined, such that a weighted average may be the result, achieving the best possible noise performance while an increased unambiguous distance may be obtained.
  • this method may be sensitive to a measurement error (combined accuracy and noise). If the measurement error is too high (in one or both measurements), an unwrapping error may be the result, which may then cause a depth error which is larger than could be explained by noise alone. For example, a magnitude of error may be in the range of meters when only centimeters would be expected due to noise.
  • in indexing unwrapping, one of the two measurements may use a (much) lower modulation frequency than the other.
  • the higher frequency may be at least twice as high as the lower frequency (e.g. 20 MHz and 100 MHz).
  • the result from the lower frequency measurement may be used to “index” the high frequency measurement.
  • the lower frequency measurement is not necessarily intended to improve the overall noise performance of the system, but to unwrap the higher frequency measurement.
  • Indexing unwrapping may have a lower error-proneness than combined unwrapping (i.e. there is less possibility of “unwrapping errors”), however, the noise performance is reduced.
  • an increased unambiguous distance may be determined, wherein the depth noise may be reduced, as well.
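  • a minimal sketch of such an indexing (index unwrapping) step is given below; it assumes the coarse and the fine depth are already available per pixel and simply picks the wrap index of the fine measurement that best matches the coarse one (illustrative code, not the claimed implementation):

```python
C = 299_792_458.0  # speed of light in m/s

def index_unwrap(d_coarse_m: float, d_fine_wrapped_m: float, f_high_hz: float) -> float:
    """Use the coarse (low-frequency) depth to pick the wrap index of the fine depth.

    The fine measurement repeats every c / (2 * f_high); the integer index k is
    chosen so that the unwrapped fine depth lies closest to the coarse depth.
    """
    d_unambiguous_high = C / (2.0 * f_high_hz)
    k = max(round((d_coarse_m - d_fine_wrapped_m) / d_unambiguous_high), 0)
    return d_fine_wrapped_m + k * d_unambiguous_high

# Example: a coarse 20 MHz reading of 3.9 m and a fine 100 MHz reading wrapped
# to 0.9 m give k = 2 and an unwrapped depth of roughly 3.9 m.
```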
  • extra scene captures may be needed for performing a dual-frequency capture in known ToF devices which may lead to increased power consumption and motion blur.
  • four captures may be performed for each modulation frequency, resulting in eight captures for two different modulation frequencies.
  • fewer or more captures may be performed (e.g. three or five), possibly at the cost of confidence or with increased motion blur, or single captures from different frequencies may be interleaved.
  • three captures may result in a reduced accuracy and/ or may require more complicated computations to recover depth.
  • such measurements may result in a low accuracy and may require extensive calibration.
  • this may be the case when ToF is used for ICM (in-cabin monitoring) in which a (human) gesture may have to be detected at a high frame rate.
  • a single frequency may be used in such cases since additional captures may need to be avoided in order to operate at a high frame rate, and since an additional capture may lead to motion blur.
  • the modulation frequency may be limited to around 60 MHz (to achieve an unambiguous range of ca. two and a half meters, hence for visualizing the whole cabin without depth unwrapping), such that the depth noise performance may be modulation frequency limited.
  • some embodiments pertain to a time-of-flight image sensor circuitry including: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
  • the time-of-flight image sensor circuitry may include an imaging unit (e.g. an image sensor), such as an iToF sensor which may include a plurality of CAPDs (current assisted photonic demodulators), or the like. Further, the circuitry may include components for controlling the imaging unit and each imaging element (e.g. pixel) of the image sensor, such as a processor (e.g. a CPU (central processing unit)), a microcontroller, an FPGA (field-programmable gate array), and the like, or combinations thereof. For example, the imaging unit may be controlled row-wise, such that each row may constitute an imaging portion. For each imaging portion, control circuitry may be envisaged, which may be synchronized.
  • a demodulation signal may be applied by corresponding readout circuitry.
  • the demodulation signal may include a repetitive signal (e.g. oscillating) which may be applied to a tap (e.g. a readout channel) of an imaging element in order to detect a charge which is generated in response to a detection of light.
  • phase-shifted demodulation signals may be applied for obtaining information about a depth (e.g. a distance from a ToF camera to a scene (e.g. an object)).
  • the sensor circuitry may further include demodulation circuitry configured to apply such demodulation signals, as will be discussed further below.
  • the sensor circuitry may include an imaging unit.
  • the imaging unit may include a first imaging portion and a second imaging portion.
  • the first imaging portion and the second imaging portion may include the same type of pixels. Which exact pixel belongs to which imaging portion may be configured spontaneously. In other words: the imaging portions may be understood functionally.
  • the first and the second imaging portions may be distinguished in that different modulation signals may be applied.
  • the modulation signals may be the same, but may be phase-shifted in the second imaging portion with respect to the first imaging portion.
  • for example, if a sine function is applied for reading out the first imaging portion, a (corresponding) cosine function may be applied to the second imaging portion.
  • the first and the second imaging portions are driven based on a predefined readout phase-shift.
  • the respective demodulation signals are not limited to be the same (mathematical) function type.
  • a sine-like signal may be applied to the first imaging portion and a rectangle-like signal may be applied to the second imaging portion (without limiting the present disclosure to any signal type), wherein these signals may be phase-shifted.
  • the readout phase-shift may be predefined in that it may be fixed at every readout (e.g. ninety degrees, forty-five degrees, one hundred and eighty degrees, or the like) or it may be configured to be defined before every readout, e.g. at a first readout, a ninety degrees phase-shift may be applied and at a second readout, a forty-five degrees phase-shift may be applied.
  • the first imaging portion and the second imaging portion may constitute a predefined imaging portion pattern on the imaging unit.
  • for example, each row/column may be its own imaging portion, or all odd rows/columns may constitute the first imaging portion and all even rows/columns may constitute the second imaging portion.
  • a checkerboard pattern may be defined by the first and the second imaging portions in that each imaging element of the first imaging portion may be arranged alternating with each imaging element of the second imaging portion, or the like.
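  • the following minimal sketch (illustrative only; the portion layouts and the ninety degrees offset are assumptions taken from the examples above) generates such a spatial readout-phase-offset map for a two-portion imaging unit:

```python
import numpy as np

def phase_offset_pattern(rows: int, cols: int, layout: str = "checkerboard") -> np.ndarray:
    """Spatial readout-phase-offset map (in degrees) for a two-portion imaging unit.

    "columns"      : even/odd columns form the first/second imaging portion
    "rows"         : even/odd rows form the first/second imaging portion
    "checkerboard" : the two portions alternate imaging element by imaging element
    """
    r, c = np.indices((rows, cols))
    if layout == "columns":
        second = (c % 2) == 1
    elif layout == "rows":
        second = (r % 2) == 1
    else:  # checkerboard
        second = ((r + c) % 2) == 1
    pattern = np.zeros((rows, cols))
    pattern[second] = 90.0  # predefined readout-phase-shift of the second portion
    return pattern

# e.g. phase_offset_pattern(4, 4, "checkerboard") alternates 0 and 90 degrees.
```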
  • in known iToF sensors, four consecutive readouts may be carried out at each pixel, e.g. at zero degrees, ninety degrees, one hundred and eighty degrees, and two hundred and seventy degrees.
  • a demodulation pattern may be applied.
  • a low-frequency demodulation pattern may be applied to the first and the second imaging portions.
  • a demodulation signal may be applied to each imaging portion of the first and the second imaging portions, wherein, as discussed above, the predefined readout-phase shift may be present, such that the demodulation pattern may include a first demodulation signal and a second demodulation signal with the predefined readout phase-shift, wherein the first and the second demodulation signals may include the same signal type (e.g. a rectangle signal).
  • the first and the second demodulation signals may have the same frequency (e.g. 20 MHz, 10 MHz, or the like), which may correspond to a low frequency compared with a frequency of the high-frequency demodulation pattern, which will be discussed further below.
  • coarse-depth data may be generated.
  • generally, the lower the demodulation frequency, the larger the depth measurement range may be.
  • conversely, the higher the frequency, the shorter the range. This may lead to a depth ambiguity in that (roughly) integer multiples of the real depth may be measured, since a double roundtrip of light (e.g. due to a multipath delay) may have the same demodulation result.
  • the low-frequency demodulation pattern may be used to determine a coarse depth, which may have a larger measurement error than a fine depth from a high-frequency demodulation, but may be used for removing the depth ambiguity of the high-frequency measurement (e.g. it may be used for calibrating a high-frequency measurement).
  • Fig. 2 depicts an example of a phase measurement which is subject to a depth ambiguity, also known as phase wrapping.
  • the phase can be transformed into a distance, as it is generally known, and the higher the phase, the higher the distance.
  • a distance to an object is typically unique, such that there is only one unambiguous distance.
  • a high-frequency demodulation pattern is applied to the first and the second imaging portions.
  • the high-frequency demodulation pattern may include the same or different demodulation signals than the low-frequency demodulation pattern. However, a frequency of the respective demodulation signals may be higher than the low-frequency demodulation signals.
  • the high-frequency demodulation pattern may include multiple measurements.
  • for example, in a first measurement, the first imaging portion may have a phase of zero degrees and the second imaging portion may have a phase of ninety degrees,
  • and in a second measurement, the first imaging portion may have a phase of one hundred and eighty degrees and the second imaging portion may have a phase of two hundred and seventy degrees.
  • fine-depth data may be generated which may be indicative of a more exact depth (i.e. a fine depth, i.e. with a lower measurement error) than the coarse depth.
  • for example, one “shot” is performed with the low-frequency demodulation pattern and two shots are performed with the high-frequency demodulation pattern.
  • the coarse-depth data may be used to remove the ambiguity of the fine-depth data.
  • Depth data in which the ambiguity is removed may be referred to as resulting-depth data herein.
  • resulting-depth data are generated in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
  • a similar noise performance is achieved as in a known four-component ToF measurement, since the integration time may be similar.
  • two consecutive measurements may be carried out, as discussed herein, leading to a similar integration time, although the total amount of light on each pixel may be half of that in the four-component measurement.
  • the total integration time may be halved, if a one-shot is performed (as discussed herein), such that this measurement may be subject to a higher noise error.
  • the integration time may be increased up to a value at which the fine-depth data may be successfully unwrapped, i.e. such that the resulting-depth data may be generated without an unwrapping error.
  • it may not be necessary to double the integration time in the low-frequency measurement in order to compensate for the increased noise, such that, in total, a measurement time may still be smaller than in a known four-component ToF measurement.
  • the present disclosure is not limited to only two demodulation patterns with two different frequencies since more demodulation patterns can be applied, as well.
  • a first, a second, and a third demodulation pattern can be applied, each having different demodulation frequencies (and/or signal types), wherein the principles of the present disclosure for determining a depth can be applied accordingly when more than two (three or even more) demodulation patterns are envisaged.
  • the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit, as discussed herein.
  • such imaging units may be referred to as IQ mosaic imaging units.
  • a spatial phase offset pattern may be applied to the imaging elements, as discussed herein. Thereby, instead of changing phase offsets in time (i.e. in sequential captures), the phase offsets can be arranged spatially.
  • a similar noise performance may be achieved as with four captures of a known iToF sensor.
  • the low-frequency demodulation pattern includes one phase demodulation, as discussed herein.
  • the high-frequency demodulation pattern includes two phase-demodulations, as discussed herein.
  • the time-of-flight image sensor circuitry is further configured to: shift readout-phases of the first and the second imaging portions for applying the two phase-demodulations, as discussed herein.
  • the shifting of the readout-phases is not limited to the embodiment described above.
  • for example, in a first shot, the first imaging portion may have a phase of one hundred and eighty degrees and the second imaging portion may have a phase of ninety degrees, and in a second shot, the first imaging portion may have a phase of zero degrees and the second imaging portion may have a phase of two hundred and seventy degrees.
  • any predetermined phase-shifting pattern (and high-frequency demodulation pattern) may be applied as it is needed for the specific use-case.
  • for example, there may be one phase-shift applied in a first shot and a ninety degrees phase-shift in a second shot.
  • the present disclosure is also not limited to two shots in the high-frequency demodulation pattern, as more shots may be envisaged as well.
  • the imaging unit may have a first to fourth imaging portion, which may have predefined phase-shifts to each other, such as zero degrees (of the first imaging portion), ninety degrees (of the second imaging portion), one hundred and eighty degrees (of the third imaging portion), and two hundred and seventy degrees (of the fourth imaging portion), or the like.
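  • such a four-portion layout may, for example, be thought of as a repeating two-by-two cell of phase offsets; the small sketch below is only an assumed illustration of that idea:

```python
import numpy as np

# Assumed illustration of a four-portion mosaic: a repeating 2x2 cell of
# 0/90/180/270 degree readout-phase offsets, so that a full four-phase
# measurement fits into a single shot.
cell = np.array([[0.0, 90.0],
                 [180.0, 270.0]])
pattern = np.tile(cell, (240, 320))  # e.g. a 480 x 640 phase-offset map in degrees
```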
  • the time-of-flight image sensor circuitry is further configured to: remove the ambiguity by comparing a coarse depth with a fine depth.
  • the coarse depth and the fine depth may be determined based on the respective coarse and fine-depth data and the fine depth may be corrected based on the coarse depth.
  • the time-of-flight image sensor circuitry is further configured to: determine the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data.
  • only the coarse depth may be determined, and the fine-depth data may be corrected, marked, processed, or the like, such that the fine-depth data may not include the ambiguity anymore.
  • the high frequency of the high-frequency demodulation pattern is a multiple of the low frequency of the low-frequency demodulation pattern.
  • the low frequency may be 20 MHz and the high frequency may be 100 MHz, or the low frequency may be 10 MHz and the high frequency may be 80 MHz, without limiting the present disclosure in that regard.
  • an incidence of fine-depth data points may be the same multiple of an incidence of coarse-depth data points, such that a calibration of the fine-depth data may be simplified.
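  • the effect of choosing the high frequency as a multiple of the low frequency can be checked with a small numeric example (the 20 MHz / 100 MHz pair is taken from the example above):

```python
# The coarse unambiguous range then spans an integer number of fine wraps,
# which simplifies the indexing of the fine measurement.
C = 299_792_458.0
f_low, f_high = 20e6, 100e6
assert f_high % f_low == 0
print(C / (2 * f_low))   # ~7.49 m coarse unambiguous range
print(C / (2 * f_high))  # ~1.50 m fine unambiguous range (one fifth of the coarse one)
```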
  • the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data.
  • the artificial intelligence may be any type of artificial intelligence (strong or weak), such as a neural network, a support vector machine, a Bayesian network, a genetic algorithm, or the like.
  • the artificial intelligence may adopt a machine learning algorithm, for example, including supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, feature learning, sparse dictionary learning, anomaly detection learning, decision tree learning, association rule learning, or the like.
  • the machine learning algorithm may further be based on at least one of the following: Feature extraction techniques, classifier techniques or deep-learning techniques.
  • Feature extraction may be based on at least one of: Scale Invariant Feature Transform (SIFT), Gray Level Co-occurrence Matrix (GLCM), Gabor Features, Tubeness, or the like.
  • Classifiers may be based on at least one of: Random Forest; Support Vector Machine; Neural Net, Bayes Net or the like.
  • Deep learning may be based on at least one of: Autoencoders, Generative Adversarial Network, Weakly Supervised Learning, Boot-Strapping or the like, without limiting the present disclosure in that regard.
  • the time-of-flight image sensor circuitry is further configured to: determine the resulting-depth data based on a detection of a spotted light pattern on the imaging unit.
  • a spotted light source may be combined with the time-of-flight image sensor circuitry according to the present disclosure.
  • in spot-ToF, a deterioration of light spots may be indicative of the depth of the scene.
  • the light spots may be read out with a phase-shifted demodulation pattern, as well, which will be discussed further below in more detail.
  • Some embodiments pertain to a time-of-flight image sensor circuitry control method for controlling a time-of-flight image sensor circuitry including an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the method further includes: applying a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; applying a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generating resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
  • the method may be carried out with ToF image sensor circuitry according to the present disclosure or with control circuitry (e.g. a processor or any external device), for example.
  • the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit, as discussed herein.
  • the low-frequency demodulation pattern includes one phase demodulation, as discussed herein.
  • the high-frequency demodulation pattern includes two phase-demodulations as discussed herein.
  • the time-of-flight image sensor circuitry control method of further includes: shifting readout-phases of the first and the second imaging portions for applying the two phase-demodulations as discussed herein.
  • the time-of-flight image sensor circuitry control method further includes: removing the ambiguity by comparing a coarse depth with a fine depth as discussed herein.
  • the time-of-flight image sensor circuitry control method further includes: determining the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data as discussed herein.
  • a high frequency of the high-frequency demodulation pattern is a multiple of a low frequency of the low-frequency demodulation pattern, as discussed herein.
  • the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data as discussed herein.
  • the time-of-flight image sensor circuitry control method further includes: determining the resulting-depth data based on a detection of a spotted light pattern on the imaging unit, as discussed herein.
  • the methods as described herein are also implemented in some embodiments as a computer program causing a computer and/or a processor to perform the method, when being carried out on the computer and/or processor.
  • a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the methods described herein to be performed.
  • the imaging unit 20 (of Fig. 3) includes a first imaging portion 21, which has a plurality of imaging elements 22 having a phase of Φ.
  • the imaging unit 20 further includes a second imaging portion 23, which has a plurality of imaging elements 24 having a phase of Φ+π/2.
  • the predefined readout-phase shift between the first and the second imaging portions is π/2.
  • for example, Φ is zero degrees and π is one hundred and eighty degrees. Since the phase offset is π/2, zero and ninety degrees are measured in the first phase measurement.
  • in a second phase measurement, Φ may be one hundred and eighty degrees and π is still one hundred and eighty degrees (but only π/2 is added), such that the four phases zero, ninety, one hundred and eighty, and two hundred and seventy degrees may be measured in two shots.
  • the two shots may be performed at a high frequency, and one shot, having for example Φ as zero degrees and π/2 as ninety degrees without changing Φ, may be performed at a low frequency, and the one-shot measurement may be used for unwrapping the two-shot measurement, i.e. for removing the ambiguity of the two-shot measurement.
  • Fig. 4 depicts an imaging unit 30 according to the present disclosure.
  • the imaging unit 30 may be the same as imaging unit 20 as discussed under reference of Fig. 3, but the readout-phases may be defined differently. Hence, there is no need to manufacture a completely new imaging unit since it may be sufficient to amend the readout and apply the demodulation signals differently. This concept is generally applicable to any imaging unit as discussed herein, e.g. under reference of Figs. 3 to 7.
  • the imaging unit 30 of Fig. 4 is different from the imaging unit 20 of Fig. 3 in that the imaging portions are arranged row-wise instead of column-wise.
  • Fig. 5 depicts an imaging unit 40 having imaging portions arranged two-column-wise, i.e. two columns of a first imaging portion 41 are arranged alternating with two columns of a second imaging portion 42.
  • An imaging unit 50 of Fig. 6 depicts imaging elements 51 of a first imaging portion alternatingly arranged with imaging elements 52 of a second imaging portion, such that the first and the second imaging portions constitute a checkerboard pattern on the imaging unit 50.
  • Fig. 7 depicts an imaging unit 60 with four imaging portions, having imaging elements 61 to 64 which are arranged alternating and are phase-shifted with a predefined phase-shift of ninety degrees (π/2) to each other.
  • the second imaging portion has a phase-shift of ninety degrees with respect to the first imaging portion.
  • the third imaging portion has a phase-shift of one hundred and eighty degrees with respect to the first imaging portion.
  • the fourth imaging portion has a phaseshift of two hundred and seventy degrees with respect to the first imaging portion.
  • a whole four-phase-measurement can be carried out in one shot. Hence, it is not necessary according to this embodiment, to carry out a two-shot measurement in the high frequency since one shot may be sufficient.
  • In Fig. 8, there is illustrated an embodiment of a two-shot measurement 70 with the imaging unit 20 which has been described under reference of Fig. 3.
  • in a first shot, the first imaging portion 21 has a phase of zero degrees and the second imaging portion 23 has a phase of ninety degrees.
  • in a second shot, the first imaging portion 21 has a phase of one hundred and eighty degrees and the second imaging portion 23 has a phase of two hundred and seventy degrees.
  • the readouts are subtracted from each other, resulting in an I-value on the first imaging portion 21 and in a Q-value on the second imaging portion 23. Thereby, mosaiced I-Q-values are obtained.
  • the two required differential measurements (known as I and Q) are obtained and can be combined to form a phase image.
  • applying an IQ mosaic pattern according to the present disclosure may reduce a total number of captures required, as already discussed herein, and further, power consumption may be reduced since a lower number of sensor reads is needed. Furthermore, motion blur may be reduced significantly.
  • a loss of spatial resolution resulting from such a method may be recovered by a demosaicking algorithm, for example for finding “missing” I and Q values.
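  • a minimal sketch of this two-shot differential readout and of a very simple demosaicking step is given below (a column-wise two-portion layout and nearest-neighbour interpolation are assumptions for illustration; the disclosure does not prescribe a particular demosaicking algorithm):

```python
import numpy as np

def mosaiced_iq_from_two_shots(shot1: np.ndarray, shot2: np.ndarray,
                               first_portion_mask: np.ndarray):
    """Differential readout of an IQ mosaic.

    shot1: first portion at 0 deg, second portion at 90 deg
    shot2: first portion at 180 deg, second portion at 270 deg
    The per-pixel difference yields I on the first portion and Q on the second;
    positions belonging to the other portion are marked as missing (NaN).
    """
    diff = shot1 - shot2
    i_sparse = np.where(first_portion_mask, diff, np.nan)
    q_sparse = np.where(~first_portion_mask, diff, np.nan)
    return i_sparse, q_sparse

def fill_from_horizontal_neighbours(sparse: np.ndarray) -> np.ndarray:
    """Very simple demosaicking: fill missing values with the mean of the left and
    right neighbours (sufficient for a column-wise mosaic; borders wrap around)."""
    left = np.roll(sparse, 1, axis=1)
    right = np.roll(sparse, -1, axis=1)
    neighbour_mean = np.nanmean(np.stack([left, right]), axis=0)
    return np.where(np.isnan(sparse), neighbour_mean, sparse)
```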
  • In Fig. 9, a one-shot measurement 80 is illustrated which is carried out with the imaging unit 20 of Fig. 3.
  • the first imaging portion 21 has a phase of zero degrees and the second imaging portion 23 has a phase of ninety degrees.
  • the one-shot measurement 80 is indicative of mosaiced I-Q-values on the respective imaging portion.
  • the readouts of the first imaging portion 21 are indicative of an I-value and the readouts of the second imaging portion 23 are indicative of a Q-value.
  • a reduced noise performance may be compensated with a double integration time (which is, however, not necessary), and motion blur is reduced since read time and idle time may be removed. Moreover, power consumption may be reduced due to fewer readouts (i.e. one).
  • the above-mentioned tap-mismatch may be calibrated.
  • the one-shot measurement 80 is carried out at a sampling frequency of 20 MHz and the two-shot measurement 70 is carried out at a sampling frequency of 60 MHz, such that the depth ambiguity of the two-shot measurement 70 can be removed based on the one-shot measurement.
  • In Fig. 10, there is depicted a time-of-flight sensor circuitry control method 90 according to the present disclosure in a block diagram.
  • an IQ-one-shot is carried out at 20 MHz for obtaining coarse-depth data which are indicative of a coarse depth.
  • the coarse depth is computed based on the IQ-one-shot.
  • an IQ-two-shot is carried out at 60 MHz for obtaining fine-depth data which are indicative of a fine depth.
  • the fine depth is computed based on the IQ-two-shot.
  • an unwrapped depth is computed, i.e. the depth ambiguity of the fine-depth data is removed with an index dual unwrapping technique (which will be discussed under reference of Fig. 12).
  • resulting-depth data are output which are indicative of an unwrapped depth of the high-frequency measurement.
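  • a compact per-pixel sketch of this overall flow (a coarse 20 MHz one-shot indexing a fine 60 MHz two-shot) is given below; it assumes the two phases have already been extracted from the mosaiced I/Q values and is illustrative only:

```python
import math

C = 299_792_458.0           # speed of light in m/s
F_LOW, F_HIGH = 20e6, 60e6  # example frequencies from the embodiment above

def depth(phase_rad: float, f_mod_hz: float) -> float:
    """Wrapped depth for a measured phase at a given modulation frequency."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def resulting_depth(phase_low_rad: float, phase_high_rad: float) -> float:
    """Coarse depth from the low-frequency shot indexes the fine high-frequency depth."""
    d_coarse = depth(phase_low_rad, F_LOW)    # large range, but noisier
    d_fine = depth(phase_high_rad, F_HIGH)    # precise, but wraps every c / (2 * F_HIGH)
    d_unambiguous_high = C / (2.0 * F_HIGH)
    k = max(round((d_coarse - d_fine) / d_unambiguous_high), 0)
    return d_fine + k * d_unambiguous_high    # unwrapped, resulting depth
```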
  • Fig. 11 depicts a block diagram of a time-of-flight sensor circuitry control method 100 according to the present disclosure.
  • the time-of-flight sensor circuitry control method 100 is different from the method 90 in that after 95, a software processing layer is applied, at 101, which is configured to determine if/how the first frequency can be combined with the second frequency, or in other words, if/how the low frequency can be combined with the high frequency measurement, i.e. a mixed dual unwrapping technique is used in this method 100.
  • the software processing layer includes an artificial intelligence which utilizes a machine learning algorithm which has been trained, based on supervised training, to determine how to remove the depth ambiguity from the fine-depth data, i.e. how to output the resulting-depth data.
  • pixels in the low-frequency measurement are detected which have a high accuracy such that it is not detrimental to combine these pixels with the high-frequency measurement (i.e. for these pixels an unwrapping may not be needed since they are already correct within a predetermined threshold). These pixels may be detected based on an amplitude of the signal and/or due to smart filtering of the artificial intelligence (which includes a neural network in this embodiment).
  • resulting-depth data are output.
  • the combined unwrapped depth of the high frequency is output with the software processed depth of the low frequency as the correct depth.
  • Fig. 12 shows an illustration of the index unwrapping principle 110.
  • a coarse depth 111 is determined which is subject to a coarse-depth measurement error 112. Furthermore, a maximum measurement error 113 is depicted for which a depth ambiguity of a high-frequency measurement can be removed successfully.
  • a fine depth 114 is determined, which is subject to a depth ambiguity, i.e. in fact three fine depth values 114 are determined in this embodiment.
  • the fine depth 114 is subject to a fine-depth measurement error 115, which is however smaller than the coarse-depth measurement error 112.
  • the measurement errors are considered to be Gaussian, but the present disclosure is not limited to such an error type and the error type may depend on the specific use-case.
  • the depth noise is defined as σ_depth = c / (4 · π · f_mod) · σ_φ, wherein π is one hundred and eighty degrees in rad, c is the speed of light, f_mod is the (de)modulation frequency, and σ_φ is the noise of the phase domain, which is defined as constant between demodulation frequencies.
  • the noise of the phase domain may increase with increasing modulation frequency, for example due to modulation contrast.
  • such influences can be neglected since they may be sufficiently small compared to a change of depth noise due to the modulation frequency f_mod. In other words: the depth noise increases with decreasing modulation frequency.
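  • as a numeric illustration of this scaling (the phase noise value used here is a made-up assumption), the depth noise at 20 MHz and at 100 MHz can be compared directly:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def depth_noise(sigma_phase_rad: float, f_mod_hz: float) -> float:
    """sigma_depth = c / (4 * pi * f_mod) * sigma_phase, with the phase noise
    assumed constant across demodulation frequencies as stated above."""
    return C / (4.0 * math.pi * f_mod_hz) * sigma_phase_rad

sigma_phi = 0.05  # hypothetical phase noise in rad
print(depth_noise(sigma_phi, 20e6))   # ~0.060 m at 20 MHz
print(depth_noise(sigma_phi, 100e6))  # ~0.012 m at 100 MHz (five times smaller)
```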
  • a resulting depth 116 is determined based on a comparison of the coarse depth with the fine depth. It is determined that one of the fine depth values lies within the measurement error range of the coarse depth 111. Hence, the coarse depth value 111 and the fine depth value 114 do not have to align exactly since it is sufficient when the fine depth value 114 lies within a predefined error range 113 of the coarse depth value 111.
  • This fine depth value 114 is then considered to be the true depth and is output accordingly as the resulting depth 116.
  • the method of Fig. 11 can be applied to a measurement as shown in Fig. 12, i.e. a dual unwrapping method can be applied as well by adding a software processing layer, as discussed under reference of Fig. 11.
  • the measurement error (accuracy and/or noise) of the first capture (i.e. of the coarse depth) should be (roughly) less than half of the unambiguous range of the second capture (i.e. the fine depth).
  • the maximum measurement error 113 extends roughly to half of the unambiguous range of the 60 MHz measurement.
  • a measurement value on the extreme left of the error bars 113 may, however, result in an unwrapping error.
  • a measurement value on the extreme right of the error bars 113 may be closer to the next bin of the 60 MHz measurement (since the phase is cyclic).
  • the above statement that the measurement error should be “less than half of the unambiguous range” should be considered, in some embodiments, as a rule of thumb and may depend on the specific use-case, in particular since the high-frequency measurement may also be impacted by its own combined accuracy and/ or noise error, such that it may be possible to have an unwrapping error with less than half a wavelength accuracy and noise values.
  • the maximum combined accuracy plus noise value may be considered as rather high (when the high frequency is still below a predetermined threshold), such that for the unwrapping embodiment of Fig. 12, IQ-one-shot may be used as the low-frequency measurement since it may be within requirements for combined accuracy and noise for unwrapping.
  • for example, if the high frequency is 100 MHz (instead of 60 MHz as discussed above), the unambiguous distance is roughly one and a half meters, such that the combined accuracy plus noise only needs to be less than roughly seventy-five centimeters, which may be considered as being in a usable range.
  • although IQ-one-shot may result in a reduced accuracy (e.g. based on tap-mismatch and ambient light impact), the accuracy plus noise of IQ-one-shot may lie within the requirements for correctly unwrapping the high-frequency measurement (which may have a higher accuracy (e.g. IQ-two-shot)).
  • In Fig. 13, a schematic illustration 120 is depicted for explaining how to combine spot-ToF with an imaging unit according to the present disclosure.
  • This embodiment may be envisaged when performing a long-range outdoor measurement, for example when the object which should be measured is so far away that also the low-frequency measurement will have a depth ambiguity.
  • This embodiment may additionally or alternatively be envisaged for mobile platforms (e.g. a smartphone) when power consumption is important.
  • the imaging unit 20 of Fig. 3 is shown, but the present disclosure is not limited to using this specific embodiment for the combination with spot-ToF.
  • a light spot 121 is depicted which represents a reflected light spot from a light system. Based on the position of the spot and utilizing a ToF sensor circuitry control method according to the present disclosure, the power consumption may be further reduced (i.e. by reducing an amount of emitted light).
  • the shape of the light spots can be modified to a shape that fits a pattern of the imaging unit, as shown in an illustration 130 of Fig. 14.
  • an oval spot 131 is depicted on the imaging unit 20, such that only the imaging elements on which the spot lies must be read out.
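  • a small sketch of this idea is given below: a boolean readout mask is built that covers only the imaging elements under the (here assumed circular) light spots, so that only those elements need to be read out (spot positions, shape, and radius are illustrative assumptions):

```python
import numpy as np

def spot_readout_mask(rows: int, cols: int, spot_centres, radius: float) -> np.ndarray:
    """Boolean mask of imaging elements covered by (circular) light spots.

    Only these elements would need to be read out, which is the power-saving
    idea of the spot-ToF combination sketched above.
    """
    r, c = np.indices((rows, cols))
    mask = np.zeros((rows, cols), dtype=bool)
    for cy, cx in spot_centres:
        mask |= (r - cy) ** 2 + (c - cx) ** 2 <= radius ** 2
    return mask

# e.g. spot_readout_mask(480, 640, [(100, 200), (300, 400)], radius=5.0)
```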
  • Fig. 15 depicts an embodiment of an illumination pattern 135 according to the present disclosure.
  • the illumination of an iToF system may roughly correspond to the demodulation signals. This may be achieved by delaying the illumination signal onto the imaging element for readout. First, for an IQ-one-shot, a low frequency is emitted. Second, after a time for reading out the IQ-one-shot, an IQ-two-shot is carried out at a high frequency, as discussed herein.
  • Fig. 16 illustrates a time-of-flight (ToF) imaging apparatus 140 which can be used for depth sensing or providing a distance measurement, in particular for the technology as discussed herein, wherein the ToF imaging apparatus 140 is configured as an iToF camera.
  • the ToF imaging apparatus 140 has time-of-flight image sensor circuitry 147, which is configured to perform the methods as discussed herein and which forms a control of the ToF imaging apparatus 140 (and it includes, not shown, corresponding processors, memory and storage, as it is generally known to the skilled person).
  • the ToF imaging apparatus 140 has a modulated light source 141 which includes light emitting elements (based on laser diodes), wherein, in the present embodiment, the light emitting elements are narrow band laser elements.
  • the light source 141 emits light, i.e. modulated light, as discussed herein, to a scene 142 (region of interest or object), which reflects the light.
  • the reflected light is focused by an optical stack 143 to a light detector 144.
  • the light detector 144 has a time-of-flight imaging portion 145, as discussed herein, which is implemented based on multiple CAPDs formed in an array of pixels, and a micro lens array 146 which focuses the light reflected from the scene 142 to the time-of-flight imaging portion 145 (to each pixel of the image sensor).
  • the light emission time and modulation information is fed to the time-of-flight image sensor circuitry or control 147 including a time-of-flight measurement unit 148, which also receives respective information from the time-of-flight imaging portion 145, when the light is detected which is reflected from the scene 142.
  • the time-of-flight measurement unit 148 computes a phase shift of the received modulated light which has been emitted from the light source 141 and reflected by the scene 142 and, on the basis thereof, it computes a distance d (depth information) between the image sensor 145 and the scene 142.
  • the depth information is fed from the time-of-flight measurement unit 148 to a 3D image reconstruction unit 149 of the time-of-flight image sensor circuitry 147, which reconstructs (generates) a 3D image of the scene 142 based on the depth data. Moreover, object ROI detection, image labeling, applying a morphological operation, and mobile phone recognition, as discussed herein, is performed.
  • Fig. 17 depicts a block diagram of a further embodiment of a time-of-flight image sensor circuitry control method 150 according to the present disclosure.
  • a low-frequency demodulation pattern is applied, i.e. a one-shot measurement at 20 MHz is performed, as discussed herein.
  • a high-frequency demodulation pattern is applied, i.e. a two-shot measurement at 60 MHz is performed, as discussed herein.
  • resulting-depth data is generated, as discussed herein.
  • a further embodiment of a time-of-flight image sensor circuitry control method 160 is depicted in a block diagram.
  • the method 160 is different from the method 150 in that after 152, at 161, readout-phases are shifted for performing the two-shot measurement.
  • the resulting depth data is generated, at 162, by comparing depths, i.e. by using the technique as discussed under reference of Fig. 12.
  • Fig. 19 depicts a further embodiment of a time-of-flight image sensor circuitry control method 170 according to the present disclosure in a block diagram.
  • the method 170 is different from the method 160 in that, after 163, at 171, the resulting depth data is generated by calibrating the two-shot measurement based on the coarse depth, as discussed herein.
  • a further embodiment of a time-of-flight image sensor circuitry control method 180 according to the present disclosure is depicted in a block diagram.
  • the method 180 is different from the method 150 in that after 152, at 181, resulting-depth data are generated based on a spotted light pattern, as discussed under reference of Figs. 13 or 14.
  • the technology according to an embodiment of the present disclosure is applicable to various products.
  • the technology according to an embodiment of the present disclosure may be implemented as a device included in any kind of mobile body such as automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobility vehicles, airplanes, drones, ships, robots, construction machinery, agricultural machinery (tractors), and the like.
  • Fig. 21 is a block diagram depicting an example of schematic configuration of a vehicle control system 7000 as an example of a mobile body control system to which the technology according to an embodiment of the present disclosure can be applied.
  • the vehicle control system 7000 includes a plurality of electronic control units connected to each other via a communication network 7010.
  • the vehicle control system 7000 includes a driving system control unit 7100, a body system control unit 7200, a battery control unit 7300, an outside-vehicle information detecting unit 7400, an in-vehicle information detecting unit 7500, and an integrated control unit 7600.
  • the communication network 7010 connecting the plurality of control units to each other may, for example, be a vehicle-mounted communication network compliant with an arbitrary standard such as controller area network (CAN), local interconnect network (LIN), local area network (LAN), FlexRay (registered trademark), or the like.
  • Each of the control units includes: a microcomputer that performs arithmetic processing according to various kinds of programs; a storage section that stores the programs executed by the microcomputer, parameters used for various kinds of operations, or the like; and a driving circuit that drives various kinds of control target devices.
  • Each of the control units further includes: a network interface (I/F) for performing communication with other control units via the communication network 7010; and a communication I/F for performing communication with a device, a sensor, or the like within and without the vehicle by wire communication or radio communication.
  • the integrated control unit 7600 depicted in Fig. 21 includes a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning section 7640, a beacon receiving section 7650, an in-vehicle device I/F 7660, a sound/image output section 7670, a vehicle-mounted network I/F 7680, and a storage section 7690.
  • the other control units similarly include a microcomputer, a communication I/F, a storage section, and the like.
  • the driving system control unit 7100 controls the operation of devices related to the driving system of the vehicle in accordance with various kinds of programs.
  • the driving system control unit 7100 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine, a driving motor, or the like, a driving force transmitting mechanism for transmitting the driving force to wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the driving system control unit 7100 may have a function as a control device of an antilock brake system (ABS), electronic stability control (ESC), or the like.
  • the driving system control unit 7100 is connected with a vehicle state detecting section 7110.
  • the vehicle state detecting section 7110 includes at least one of a gyro sensor that detects the angular velocity of axial rotational movement of a vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting an amount of operation of an accelerator pedal, an amount of operation of a brake pedal, the steering angle of a steering wheel, an engine speed or the rotational speed of wheels, and the like.
  • the driving system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detecting section 7110, and controls the internal combustion engine, the driving motor, an electric power steering device, the brake device, and the like.
  • the body system control unit 7200 controls the operation of various kinds of devices provided to the vehicle body in accordance with various kinds of programs.
  • the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various kinds of lamps such as a headlamp, a backup lamp, a brake lamp, a turn signal, a fog lamp, or the like.
  • radio waves transmitted from a mobile device as an alternative to a key or signals of various kinds of switches can be input to the body system control unit 7200.
  • the body system control unit 7200 receives these input radio waves or signals, and controls a door lock device, the power window device, the lamps, or the like of the vehicle.
  • the battery control unit 7300 controls a secondary battery 7310, which is a power supply source for the driving motor, in accordance with various kinds of programs.
  • the battery control unit 7300 is supplied with information about a battery temperature, a battery output voltage, an amount of charge remaining in the battery, or the like from a battery device including the secondary battery 7310.
  • the battery control unit 7300 performs arithmetic processing using these signals, and performs control for regulating the temperature of the secondary battery 7310 or controls a cooling device provided to the battery device or the like.
  • the outside-vehicle information detecting unit 7400 detects information about the outside of the vehicle including the vehicle control system 7000.
  • the outside-vehicle information detecting unit 7400 is connected with at least one of an imaging section 7410 and an outside-vehicle information detecting section 7420.
  • the imaging section 7410 includes at least one of a time-of-flight (ToF) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras.
  • the outside-vehicle information detecting section 7420 includes at least one of an environmental sensor for detecting current atmospheric conditions or weather conditions and a peripheral information detecting sensor for detecting another vehicle, an obstacle, a pedestrian, or the like on the periphery of the vehicle including the vehicle control system 7000.
  • the environmental sensor may be at least one of a rain drop sensor detecting rain, a fog sensor detecting a fog, a sunshine sensor detecting a degree of sunshine, and a snow sensor detecting a snowfall.
  • the peripheral information detecting sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR device (Light detection and Ranging device, or Laser imaging detection and ranging device).
  • Each of the imaging section 7410 and the outside-vehicle information detecting section 7420 may be provided as an independent sensor or device, or may be provided as a device in which a plurality of sensors or devices are integrated.
  • Fig. 22 depicts an example of installation positions of the imaging section 7410 and the outside-vehicle information detecting section 7420.
  • Imaging sections 7910, 7912, 7914, 7916, and 7918 are, for example, disposed at at least one of positions on a front nose, sideview mirrors, a rear bumper, and a back door of the vehicle 7900 and a position on an upper portion of a windshield within the interior of the vehicle.
  • the imaging section 7910 provided to the front nose and the imaging section 7918 provided to the upper portion of the windshield within the interior of the vehicle obtain mainly an image of the front of the vehicle 7900.
  • the imaging sections 7912 and 7914 provided to the sideview mirrors obtain mainly an image of the sides of the vehicle 7900.
  • the imaging section 7916 provided to the rear bumper or the back door obtains mainly an image of the rear of the vehicle 7900.
  • the imaging section 7918 provided to the upper portion of the windshield within the interior of the vehicle is used mainly to detect a preceding vehicle, a pedestrian, an obstacle, a signal, a traffic sign, a lane, or the like.
  • Fig. 22 depicts an example of photographing ranges of the respective imaging sections 7910, 7912, 7914, and 7916.
  • An imaging range a represents the imaging range of the imaging section 7910 provided to the front nose.
  • Imaging ranges b and c respectively represent the imaging ranges of the imaging sections 7912 and 7914 provided to the sideview mirrors.
  • An imaging range d represents the imaging range of the imaging section 7916 provided to the rear bumper or the back door.
  • a bird’s-eye image of the vehicle 7900 as viewed from above can be obtained by superimposing image data imaged by the imaging sections 7910, 7912, 7914, and 7916, for example.
  • Outside-vehicle information detecting sections 7920, 7922, 7924, 7926, 7928, and 7930 provided to the front, rear, sides, and corners of the vehicle 7900 and the upper portion of the windshield within the interior of the vehicle may be, for example, an ultrasonic sensor or a radar device.
  • the outside-vehicle information detecting sections 7920, 7926, and 7930 provided to the front nose of the vehicle 7900, the rear bumper, the back door of the vehicle 7900, and the upper portion of the windshield within the interior of the vehicle may be a LIDAR device, for example.
  • These outside-vehicle information detecting sections 7920 to 7930 are used mainly to detect a preceding vehicle, a pedestrian, an obstacle, or the like.
  • the outside-vehicle information detecting unit 7400 makes the imaging section 7410 image an image of the outside of the vehicle, and receives imaged image data.
  • the outside-vehicle information detecting unit 7400 receives detection information from the outside-vehicle information detecting section 7420 connected to the outside-vehicle information detecting unit 7400.
  • in a case where the outside-vehicle information detecting section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the outside-vehicle information detecting unit 7400 transmits an ultrasonic wave, an electromagnetic wave, or the like, and receives information of a received reflected wave.
  • the outside-vehicle information detecting unit 7400 may perform processing of detecting an object such as a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto.
  • the outside-vehicle information detecting unit 7400 may perform environment recognition processing of recognizing a rainfall, a fog, road surface conditions, or the like on the basis of the received information.
  • the outside-vehicle information detecting unit 7400 may calculate a distance to an object outside the vehicle on the basis of the received information.
  • the outside-vehicle information detecting unit 7400 may perform image recognition processing of recognizing a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto.
  • the outside-vehicle information detecting unit 7400 may subject the received image data to processing such as distortion correction, alignment, or the like, and combine the image data imaged by a plurality of different imaging sections 7410 to generate a bird’s-eye image or a panoramic image.
  • the outside-vehicle information detecting unit 7400 may perform viewpoint conversion processing using the image data imaged by the imaging section 7410 including the different imaging parts.
  • the in-vehicle information detecting unit 7500 detects information about the inside of the vehicle.
  • the in-vehicle information detecting unit 7500 is, for example, connected with a driver state detecting section 7510 that detects the state of a driver.
  • the driver state detecting section 7510 may include a camera that images the driver, a biosensor that detects biological information of the driver, a microphone that collects sound within the interior of the vehicle, or the like.
  • the biosensor is, for example, disposed in a seat surface, the steering wheel, or the like, and detects biological information of an occupant sitting in a seat or the driver holding the steering wheel.
  • the in-vehicle information detecting unit 7500 may calculate a degree of fatigue of the driver or a degree of concentration of the driver, or may determine whether the driver is dozing.
  • the in-vehicle information detecting unit 7500 may subject an audio signal obtained by the collection of the sound to processing such as noise canceling processing or the like.
  • the in-vehicle information detecting unit may include time-of-flight image sensor circuitry according to the present disclosure for performing ICM, as discussed herein.
  • the integrated control unit 7600 controls general operation within the vehicle control system 7000 in accordance with various kinds of programs.
  • the integrated control unit 7600 is connected with an input section 7800.
  • the input section 7800 is implemented by a device capable of input operation by an occupant, such, for example, as a touch panel, a button, a microphone, a switch, a lever, or the like.
  • the integrated control unit 7600 may be supplied with data obtained by voice recognition of voice input through the microphone.
  • the input section 7800 may, for example, be a remote control device using infrared rays or other radio waves, or an external connecting device such as a mobile telephone, a personal digital assistant (PDA), or the like that supports operation of the vehicle control system 7000.
  • the input section 7800 may be, for example, a camera.
  • an occupant can input information by gesture.
  • data may be input which is obtained by detecting the movement of a wearable device that an occupant wears.
  • the input section 7800 may, for example, include an input control circuit or the like that generates an input signal on the basis of information input by an occupant or the like using the above-described input section 7800, and which outputs the generated input signal to the integrated control unit 7600.
  • An occupant or the like inputs various kinds of data or gives an instruction for processing operation to the vehicle control system 7000 by operating the input section 7800.
  • the storage section 7690 may include a read only memory (ROM) that stores various kinds of programs executed by the microcomputer and a random access memory (RAM) that stores various kinds of parameters, operation results, sensor values, or the like.
  • the storage section 7690 may be implemented by a magnetic storage device such as a hard disc drive (HDD) or the like, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
  • the general-purpose communication I/F 7620 is a communication I/F used widely, which communication I/F mediates communication with various apparatuses present in an external environment 7750.
  • the general-purpose communication I/F 7620 may implement a cellular communication protocol such as global system for mobile communications (GSM (registered trademark)), worldwide interoperability for microwave access (WiMAX (registered trademark)), long term evolution (LTE (registered trademark)), LTE-advanced (LTE- A), or the like, or another wireless communication protocol such as wireless LAN (referred to also as wireless fidelity (Wi-Fi (registered trademark)), Bluetooth (registered trademark), or the like.
  • the general-purpose communication I/F 7620 may, for example, connect to an apparatus (for example, an application server or a control server) present on an external network (for example, the Internet, a cloud network, or a company-specific network) via a base station or an access point.
  • the general-purpose communication I/F 7620 may connect to a terminal present in the vicinity of the vehicle (which terminal is, for example, a terminal of the driver, a pedestrian, or a store, or a machine type communication (MTC) terminal) using a peer to peer (P2P) technology, for example.
  • the dedicated communication I/F 7630 is a communication I/F that supports a communication protocol developed for use in vehicles.
  • the dedicated communication I/F 7630 may implement a standard protocol such, for example, as wireless access in vehicle environment (WAVE), which is a combination of institute of electrical and electronic engineers (IEEE) 802.11p as a lower layer and IEEE 1609 as a higher layer, dedicated short range communications (DSRC), or a cellular communication protocol.
  • the dedicated communication I/F 7630 typically carries out V2X communication as a concept including one or more of communication between a vehicle and a vehicle (Vehicle to Vehicle), communication between a road and a vehicle (Vehicle to Infrastructure), communication between a vehicle and a home (Vehicle to Home), and communication between a pedestrian and a vehicle (Vehicle to Pedestrian).
  • the positioning section 7640 performs positioning by receiving a global navigation satellite system (GNSS) signal from a GNSS satellite (for example, a GPS signal from a global positioning system (GPS) satellite), and generates positional information including the latitude, longitude, and altitude of the vehicle.
  • the positioning section 7640 may identify a current position by exchanging signals with a wireless access point, or may obtain the positional information from a terminal such as a mobile telephone, a personal handyphone system (PHS), or a smart phone that has a positioning function.
  • the beacon receiving section 7650 receives a radio wave or an electromagnetic wave transmitted from a radio station installed on a road or the like, and thereby obtains information about the current position, congestion, a closed road, a necessary time, or the like.
  • the function of the beacon receiving section 7650 may be included in the dedicated communication I/F 7630 described above.
  • the in-vehicle device I/F 7660 is a communication interface that mediates connection between the microcomputer 7610 and various in-vehicle devices 7760 present within the vehicle.
  • the in-vehicle device I/F 7660 may establish wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), near field communication (NFC), or wireless universal serial bus (WUSB).
  • the in-vehicle device I/F 7660 may establish wired connection by universal serial bus (USB), high-definition multimedia interface (HDMI (registered trademark)), mobile high-definition link (MHL), or the like via a connection terminal (and a cable if necessary) not depicted in the figures.
  • the in-vehicle devices 7760 may, for example, include at least one of a mobile device and a wearable device possessed by an occupant and an information device carried into or attached to the vehicle.
  • the in-vehicle devices 7760 may also include a navigation device that searches for a path to an arbitrary destination.
  • the in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
  • the vehicle-mounted network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010.
  • the vehicle-mounted network I/F 7680 transmits and receives signals or the like in conformity with a predetermined protocol supported by the communication network 7010.
  • the microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 in accordance with various kinds of programs on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680.
  • the microcomputer 7610 may calculate a control target value for the driving force generating device, the steering mechanism, or the braking device on the basis of the obtained information about the inside and outside of the vehicle, and output a control command to the driving system control unit 7100.
  • the microcomputer 7610 may perform cooperative control intended to implement functions of an advanced driver assistance system (ADAS) which functions include collision avoidance or shock mitigation for the vehicle, following driving based on a following distance, vehicle speed maintaining driving, a warning of collision of the vehicle, a warning of deviation of the vehicle from a lane, or the like.
  • the microcomputer 7610 may perform cooperative control intended for automatic driving, which makes the vehicle travel autonomously without depending on the operation of the driver, or the like, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of the obtained information about the surroundings of the vehicle.
  • the microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object such as a surrounding structure, a person, or the like, and generate local map information including information about the surroundings of the current position of the vehicle, on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680.
  • the microcomputer 7610 may predict danger such as collision of the vehicle, approaching of a pedestrian or the like, an entry to a closed road, or the like on the basis of the obtained information, and generate a warning signal.
  • the warning signal may, for example, be a signal for producing a warning sound or lighting a warning lamp.
  • the sound/image output section 7670 transmits an output signal of at least one of a sound and an image to an output device capable of visually or auditorily notifying information to an occupant of the vehicle or the outside of the vehicle.
  • an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as the output device.
  • the display section 7720 may, for example, include at least one of an on-board display and a head-up display.
  • the display section 7720 may have an augmented reality (AR) display function.
  • the output device may be other than these devices, and may be another device such as headphones, a wearable device such as an eyeglass type display worn by an occupant or the like, a projector, a lamp, or the like.
  • in a case where the output device is a display device, the display device visually displays results obtained by various kinds of processing performed by the microcomputer 7610 or information received from another control unit in various forms such as text, an image, a table, a graph, or the like.
  • the audio output device converts an audio signal constituted of reproduced audio data or sound data or the like into an analog signal, and auditorily outputs the analog signal.
  • At least two control units connected to each other via the communication network 7010 in the example depicted in Fig. 21 may be integrated into one control unit.
  • each individual control unit may include a plurality of control units.
  • the vehicle control system 7000 may include another control unit not depicted in the figures.
  • part or the whole of the functions performed by one of the control units in the above description may be assigned to another control unit. That is, predetermined arithmetic processing may be performed by any of the control units as long as information is transmitted and received via the communication network 7010.
  • a sensor or a device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.
  • a computer program for realizing at least one of the time-of-flight sensor circuitry control methods according to the present disclosure can be implemented in one of the control units or the like.
  • a computer readable recording medium storing such a computer program can also be provided.
  • the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
  • the above-described computer program may be distributed via a network, for example, without the recording medium being used.
  • any time-of-flight image sensor circuitry according to the present disclosure can be applied to the integrated control unit 7600 in the application example depicted in Fig. 21.
  • any of the time-of-flight sensor circuitry according to the present disclosure may be implemented in a module (for example, an integrated circuit module formed with a single die) for the integrated control unit 7600 depicted in Fig. 21.
  • the time-of-flight sensor circuitry may be implemented by a plurality of control units of the vehicle control system 7000 depicted in Fig. 21.
  • control 147 could be implemented by a respective programmed processor, field programmable gate array (FPGA) and the like.
  • the method can also be implemented as a computer program causing a computer and/ or a processor, such as processor 147 discussed above, to perform the method, when being carried out on the computer and/or processor.
  • a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the method described to be performed.
  • a time-of-flight image sensor circuitry comprising: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
  • the time-of-flight image sensor circuitry of (4) further configured to: shift readout-phases of the first and the second imaging portions for applying the two phase-demodulations.
  • a time-of-flight image sensor circuitry control method for controlling a time-of-flight image sensor circuitry including an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the method further comprises: applying a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; applying a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generating resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
  • (21) A computer program comprising program code causing a computer to perform the method according to anyone of (11) to (20), when being carried out on a computer.
  • (22) A non-transitory computer-readable recording medium that stores therein a computer program product, which, when executed by a processor, causes the method according to anyone of (11) to (20) to be performed.

Abstract

Time-of-flight image sensor circuitry including: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.

Description

TIME-OF-FLIGHT IMAGE SENSOR CIRCUITRY AND TIME-OF-FLIGHT IMAGE SENSOR CIRCUITRY CONTROL METHOD
TECHNICAL FIELD
The present disclosure generally pertains to time-of-flight image sensor circuitry and a time-of-flight image sensor circuitry control method.
TECHNICAL BACKGROUND
Generally, time-of-flight (ToF) devices for measuring a depth of a scene are generally known. For example, it may be distinguished between indirect ToF (iToF), direct ToF (dToF), spot ToF, or the like.
In dToF, a roundtrip delay of light, which has been emitted from the dToF device is directly measured in terms of a time in which the light travels from the light source and gets reflected back on an image sensor of the dToF device. Based on the time, taking into account the speed of light, the depth may be estimated.
In spot ToF, a spotted light pattern may be emitted from a spotted light source. One application of spot ToF is to measure a displacement and/ or a deterioration of the light spots which may be indicative for the depth of the scene. In another application, spot ToF can be used combined with iToF (explained further below), i.e. a spotted light source is used (as spot ToF component) in combination with a CAPD (current assisted photonic demodulator) sensor (iToF component).
In iToF, modulated light may be emitted based on a modulation signal which is sent to a light source, wherein the same or a similar modulation signal may be applied to a pixel, such that a phaseshift of the modulated signal can be measured based on the reflected modulated light. This phaseshift may be indicative for the distance.
However, due to a cyclic nature of the modulation signal and due to the speed of light being a fixed value, a maximum measurement distance may be present, which is generally known as unambiguous distance and integer multiples or fractions of the unambiguous (real) distance may be measured, as well. For example, for a modulation frequency of a hundred mega-Hertz, the unambiguous distance may roughly correspond to one and a half meters, but if the scene (or object) is one meter away, also fifty centimeters or two meters may be a result of the measurement.
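As a rough illustration of the numbers mentioned above, the unambiguous distance can be computed directly from the modulation frequency. The following minimal Python sketch is purely illustrative and not part of the original disclosure; it only uses the round-trip relation between modulation frequency and range.

```python
# Illustrative sketch (not from the disclosure): unambiguous distance of an
# iToF measurement as a function of the modulation frequency.
C = 299_792_458.0  # speed of light in m/s

def unambiguous_distance(f_mod_hz: float) -> float:
    # Light travels to the scene and back, hence the factor of two.
    return C / (2.0 * f_mod_hz)

print(unambiguous_distance(100e6))  # ~1.5 m, as in the example above
print(unambiguous_distance(20e6))   # ~7.5 m for a lower modulation frequency
```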
Although there exist techniques for reducing an unambiguous distance measurement error, it is generally desirable to provide a time-of-flight image sensor circuitry and a time-of-flight image sensor circuitry control method.
SUMMARY
According to a first aspect, the disclosure provides time-of-flight image sensor circuitry comprising: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
According to a second aspect, the disclosure provides a time-of-flight image sensor circuitry control method for controlling a time-of-flight image sensor circuitry including an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the method further comprises: applying a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; applying a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generating resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
Further aspects are set forth in the dependent claims, the following description and the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are explained byway of example with respect to the accompanying drawings, in which:
Fig. 1 depicts a block diagram of a known iToF dual-frequency measurement;
Fig. 2 depicts an example of a phase measurement which is subject to a depth ambiguity;
Fig. 3 depicts an embodiment of an imaging unit according to the present disclosure;
Fig. 4 depicts a further embodiment of an imaging unit according to the present disclosure;
Fig. 5 depicts a further embodiment of an imaging unit according to the present disclosure;
Fig. 6 depicts a further embodiment of an imaging unit according to the present disclosure;
Fig. 7 depicts a further embodiment of an imaging unit according to the present disclosure;
Fig. 8 illustrates an embodiment of a two-shot measurement according to the present disclosure in a block diagram;
Fig. 9 illustrates an embodiment of a one-shot measurement according to the present disclosure in a block diagram;
Fig. 10 depicts an embodiment of a time-of-flight image sensor circuitry control method according to the present disclosure in block diagram;
Fig. 11 depicts a further embodiment of a time-of-flight image sensor circuitry control method according to the present disclosure in a block diagram;
Fig. 12 shows an illustration of index unwrapping according to the present disclosure;
Fig. 13 depicts a schematic illustration for explaining how to combine spot-ToF with an imaging unit according to the present disclosure;
Fig. 14 depicts a further schematic illustration for explaining how to combine spot-ToF with an imaging unit according to the present disclosure;
Fig. 15 depicts an embodiment of an illumination pattern according to the present disclosure;
Fig. 16 illustrates an embodiment of a ToF imaging apparatus;
Fig. 17 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram;
Fig. 18 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram;
Fig. 19 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram;
Fig. 20 depicts a further embodiment of a time-of-flight image sensor circuitry control method in a block diagram;
Fig. 21 is a block diagram depicting an example of schematic configuration of a vehicle control system; and
Fig. 22 is a diagram of assistance in explaining an example of installation positions of an outside-vehicle information detecting section and an imaging section.
DETAILED DESCRIPTION OF EMBODIMENTS
Before a detailed description of the embodiments under reference of Fig. 3 is given, general explanations are made.
As mentioned in the outset, indirect ToF cameras are generally known. However, these known cameras may be limited in a maximum measurement distance without ambiguity (also known as “unambiguous distance”, after which the depth “wraps” back to zero).
The cause for this wrapping may be a cyclic modulation signal which is applied to a pixel (for readout) and to a light source for emitting light according to the modulation signal (i.e. modulated light is emitted, as generally known). For reducing a noise component of the modulation signal, it may be desirable to increase the modulation frequency. However, by increasing the frequency, the unambiguous distance may decrease.
Hence, it has been recognized that a dual-frequency approach (for dealiasing) may be envisaged.
Generally, dual-frequency iToF systems are known. For example, known iToF may use a dual-frequency iToF method 1, as depicted in a block diagram of Fig. 1.
At 2, four captures are performed at a first frequency. As it is generally known, four demodulation signals may be applied to an iToF pixel which are phase-shifted with respect to each other. The first frequency may be higher or lower than a second frequency.
At 3, a depth is computed based on the four captures of 2 by determining a phase difference of the four captures of the first frequency.
At 4, four captures are performed at the second frequency, and at 5, the depth at the second frequency is computed.
In other words, at a dual-frequency capture, a scene is imaged sequentially while varying the modulation frequency. For example, a scene might be first imaged with a modulation frequency of 40 MHz and then by a frequency of 60 MHz.
Based on the computed depths of the first frequency and the second frequency, an unwrapped depth is computed using an unwrapping technique, such as an index dual or mixed dual unwrapping technique, for example combined unwrapping (for example the NCR (New Chinese Remainder) algorithm), indexing unwrapping, or the like.
In combined unwrapping, two (or more) measurements with a (relatively) high frequency each are carried out, such as at 60 MHz and 100 MHz. The measurements may be simultaneously unwrapped and combined, such that a weighted average may be the result, such that an optimum of a highest possible noise performance is achieved while an increased unambiguous distance may be obtained.
However, it has been recognized that this method may be sensitive to a measurement error (combined accuracy and noise). If the measurement error is too high (in one or both measurements), an unwrapping error may be the result, which may then cause a depth error which is larger than could be explained by noise alone. For example, a magnitude of error may be in the range of meters when centimeters would be expected due to noise alone.
In indexing unwrapping, one of the two measurements may use a (much) lower modulation frequency than the other. For example, the higher frequency may be at least twice as high as the lower frequency (e.g. 20 MHz and 100 MHz). The result from the lower frequency measurement may be used to “index” the high frequency measurement. In other words, the lower frequency measurement is not necessarily intended to improve the overall noise performance of the system, but to unwrap the higher frequency measurement. Indexing unwrapping may have a lower error-proneness than combined unwrapping (i.e. there is less possibility of “unwrapping errors”), however, the noise performance is reduced.
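For illustration only, indexing unwrapping can be sketched as follows in Python; the function name, the assumption that both measurements are already converted to per-pixel depths in metres, and the example values are all illustrative and not taken from the disclosure.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def unwrap_by_indexing(d_fine, d_coarse, f_high_hz):
    """Pick the wrap index of the fine (high-frequency) depth from the coarse
    (low-frequency) depth; both inputs are per-pixel depths in metres."""
    d_unamb = C / (2.0 * f_high_hz)              # unambiguous range of the fine measurement
    k = np.round((d_coarse - d_fine) / d_unamb)  # integer number of wraps per pixel
    return d_fine + k * d_unamb

# A true depth of ~4.2 m measured at 100 MHz wraps back to ~1.2 m; a noisy
# 20 MHz coarse depth of 4.15 m is enough to recover the correct wrap index.
print(unwrap_by_indexing(np.array([1.20]), np.array([4.15]), 100e6))  # ~4.2 m
```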
By using an unwrapping technique, an increased unambiguous distance may be determined, wherein the depth noise may be reduced, as well. However, it has been recognized that extra scene captures may be needed for performing a dual-frequency capture in known ToF devices which may lead to increased power consumption and motion blur.
It has been recognized that in known ToF devices, at least three sequential captures (which may be referred to as components, in some embodiments) of a scene may be required (with phase offsets of the modulation signal on the pixel(s) and/or the illumination), wherein it is desirable to reduce the number of captures for imaging a scene.
Moreover, in known devices, four captures (for example with sequential phase offsets of zero degrees, ninety degrees, one hundred and eighty degrees, and two hundred and seventy degrees) may be performed for each modulation frequency, resulting in eight captures for two different modulation frequencies.
However, in other known devices, fewer or more captures may be performed (e.g. three or five), possibly at the cost of confidence, or such approaches may lead to motion blur, or single captures from different frequencies may be interleaved. In other words, for example three captures may result in a reduced accuracy and/or may require more complicated computations to recover depth. Furthermore, it may be possible to acquire a depth with a measurement of only two components, e.g. with zero degrees and ninety degrees (other combinations of phase offsets may be possible, as well). However, such measurements may result in a low accuracy and may require extensive calibration.
It has been recognized that it is desirable to reduce motion blur and/ or power consumption.
For example, this may be the case when ToF is used for ICM (in-cabin monitoring) in which a (human) gesture may have to be detected at a high frame rate. In known devices, only a single frequency may be used in such cases since additional captures may need to be avoided in order to operate at a high frame rate since an additional capture may lead to motion blur.
Furthermore, in known ICM devices, the modulation frequency may be limited to around 60 MHz (to achieve an unambiguous range of ca. two and a half meters, hence for visualizing the whole cabin without depth unwrapping), such that the depth noise performance may be modulation frequency limited. However, it has been recognized that it may be desirable to increase the frequency for ICM.
Therefore, some embodiments pertain to a time-of-flight image sensor circuitry including: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
The time-of-flight image sensor circuitry may include an imaging unit (e.g. an image sensor), such as an iToF sensor which may include a plurality of CAPDs (current assisted photonic demodulators), or the like. Further, the circuitry may include components for controlling the imaging unit and each imaging element (e.g. pixel) of the image sensor, such as a processor (e.g. CPU (central processing unit)), a microcontroller, a FPGA (field-programmable gate array), and the like, or combinations thereof. For example, the imaging unit may be controlled row-wise, for example, such that each row may constitute an imaging portion. For each imaging portion, control circuitry may be envisaged, which may be synchronized (e.g. with synchronization circuitry) with one another, such that each imaging element of each imaging portion may be read out, for example simultaneously, consecutively, or the like. Furthermore, to each imaging portion, a demodulation signal may be applied by corresponding readout circuitry. The demodulation signal, as it is generally known to the skilled person, may include a repetitive signal (e.g. oscillating) which may be applied to a tap (e.g. a readout channel) of an imaging element in order to detect a charge which is generated in response to a detection of light.
As it is generally known in iToF, multiple (e.g. four) phase-shifted demodulation signals may be applied for obtaining information about a depth (e.g. a distance from a ToF camera to a scene (e.g. an object)).
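For reference, the commonly used four-component calculation that turns such phase-shifted samples into a depth value is sketched below in Python; this is a textbook formulation and is not asserted to be the exact processing performed by the circuitry described herein.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def four_component_depth(a0, a90, a180, a270, f_mod_hz):
    """Depth estimate from samples taken at demodulation phase offsets of
    0, 90, 180 and 270 degrees (textbook iToF formulation)."""
    i = a0 - a180                              # in-phase (I) component
    q = a90 - a270                             # quadrature (Q) component
    phase = np.arctan2(q, i) % (2 * np.pi)     # wrapped phase in [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod_hz)  # phase-to-depth conversion

# Example values for a single pixel at a 100 MHz modulation frequency:
print(four_component_depth(0.2, 0.9, 0.8, 0.1, 100e6))
```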
Hence, the sensor circuitry may further include demodulation circuitry configured to apply such demodulation signals, as will be discussed further below.
As already discussed, the sensor circuitry may include an imaging unit. In turn, the imaging unit may include a first imaging portion and a second imaging portion. The first imaging portion and the second imaging portion may include the same type of pixels. Which exact pixel belongs to which imaging portion may be configured spontaneously. In other words: the imaging portions may be understood functionally.
For example, the first and the second imaging portions may be distinguished in that different modulation signals may be applied. For example, generally the modulation signals may be the same, but may be phase-shifted in the second imaging portion with respect to the first imaging portion. For example, if a sine-function is applied for reading out the first imaging portion, a (corresponding) cosine-function may be applied to the second imaging portion.
In other words: the first and the second imaging portions are driven based on a predefined readout phase-shift. However, the respective demodulation signals are not limited to be the same (mathematical) function type. For example, a sine-like signal may be applied to the first imaging portion and a rectangle-like signal may be applied to the second imaging portion (without limiting the present disclosure to any signal type), wherein these signals may be phase-shifted.
The readout phase-shift may be predefined in that it may be fixed at every readout (e.g. ninety degrees, forty-five degrees, one hundred and eighty degrees, or the like) or it may be configured to be defined before every readout, e.g. at a first readout, a ninety degrees phase-shift may be applied and at a second readout, a forty-five degrees phase-shift may be applied.
Thereby, the first imaging portion and the second imaging portion may constitute a predefined imaging portion pattern on the imaging unit. For example, each row/column may be an imaging portion of its own, all odd rows/columns may constitute the first imaging portion and all even rows/columns may constitute the second imaging portion. Furthermore, a checkerboard pattern may be defined by the first and the second imaging portions in that each imaging element of the first imaging portion may be arranged alternating with each imaging element of the second imaging portion, or the like.
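Purely as an illustration of such portion patterns (the ninety-degree readout-phase-shift used here is only an example value), a small Python sketch generating per-pixel phase-offset masks for the arrangements just described:

```python
import numpy as np

def phase_offset_mask(height, width, layout="checkerboard"):
    """Per-pixel readout phase offset (in degrees) for a two-portion imaging
    unit; the layout names mirror the arrangements described above."""
    rows, cols = np.indices((height, width))
    if layout == "columns":        # odd/even columns form the two portions
        second = cols % 2 == 1
    elif layout == "rows":         # odd/even rows form the two portions
        second = rows % 2 == 1
    elif layout == "two_columns":  # two columns per portion, alternating
        second = (cols // 2) % 2 == 1
    else:                          # checkerboard of the two portions
        second = (rows + cols) % 2 == 1
    return np.where(second, 90, 0)  # example readout-phase-shift of 90 degrees

print(phase_offset_mask(4, 4))      # small checkerboard example
```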
In known iToF sensors, four consecutive readouts may be carried out at each pixel, e.g. at zero degrees, ninety degrees, one hundred and eighty degrees, and two hundred and seventy degrees.
However, according to the present disclosure, a demodulation pattern may be applied.
For example, a low-frequency demodulation pattern may be applied to the first and the second imaging portions. In such embodiments, a demodulation signal may be applied to each imaging portion of the first and the second imaging portions, wherein, as discussed above, the predefined readout-phase shift may be present, such that the demodulation pattern may include a first demodulation signal and a second demodulation signal with the predefined readout phase-shift, wherein the first and the second demodulation signals may include the same signal type (e.g. a rectangle signal).
Furthermore, the first and the second demodulation signals may include the same frequency (e.g. 20 MHz, 10 MHz, or the like), which may correspond to a low frequency compared with a frequency of a high-frequency demodulation pattern, which will be discussed further below.
Based on the low-frequency demodulation pattern, coarse-depth data may be generated. As it is generally known, the lower the demodulation frequency, the higher the depth measurement range may be. On the other hand, the higher the frequency, the shorter the range. This may lead to a depth ambiguity in that (roughly) integer multiples of the real depth may be measured since a double roundtrip of light (e.g. due to a multipath delay) may have the same demodulation result. Since a low-frequency demodulation pattern may be indicative for a higher depth, although there may also be a depth ambiguity present, the low-frequency demodulation pattern may be used to determine a coarse depth, which may have a larger measurement error than a fine depth from a high-frequency demodulation, but may be used for removing the depth ambiguity of the high-frequency measurement (e.g. it may be used for calibrating a high-frequency measurement).
Fig. 2 depicts an example of a phase measurement which is subject to a depth ambiguity, also known as phase wrapping.
In Fig. 2, there is depicted a diagram 10 which has a measured phase (in rad) on the ordinate, and a true phase (in rad) on the abscissa. The phase can be transformed into a distance, as it is generally known, and the higher the phase, the higher the distance. However, in a measurement, it may not be possible to distinguish a phase of 2π and 4π, for example, which is why the ordinate (i.e. the measured phase) only extends to 2π, but a maximum of the abscissa (the true phase) may theoretically be considered as infinite. However, a distance to an object is typically unique, such that there is only one unambiguous distance.
Hence, in some embodiments, a high-frequency demodulation pattern is applied to the first and the second imaging portions.
The high-frequency demodulation pattern may include the same or different demodulation signals than the low-frequency demodulation pattern. However, a frequency of the respective demodulation signals may be higher than the low-frequency demodulation signals.
Furthermore, the high-frequency demodulation pattern may include multiple measurements. For example, in a first measurement, the first imaging portion may have a phase of zero degrees and the second imaging portion may have a phase of ninety degrees, and in a second measurement, the first imaging portion may have a phase of one hundred and eighty degrees and the second imaging portion may have a phase of two hundred and seventy degrees. Thereby, fine-depth data may be generated which may be indicative for a more exact depth (i.e. a fine depth, i.e. with a lower measurement error) than the coarse depth.
Generally, also in the low-frequency demodulation pattern, multiple measurements may be performed. However, for determining a coarse depth, this may not always be necessary, such that one measurement may be sufficient for removing the depth ambiguity.
Hence, in some embodiments, one “shot” is performed at the low-frequency demodulation pattern and two shots are performed at the high-frequency demodulation pattern.
As already discussed herein, the coarse-depth data may be used to remove the ambiguity of the fine-depth data. Depth data in which the ambiguity is removed may be referred to as resulting-depth data herein.
Hence, in some embodiments, resulting-depth data are generated in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
According to the present disclosure, a similar noise performance is achieved as in a known four-component ToF measurement, since the integration time may be similar. For example, in the high-frequency demodulation pattern, two consecutive measurements may be carried out, as discussed herein, leading to a similar integration time, although the total amount of light on each pixel may be half of that in the four-component measurement.
At the low-frequency demodulation pattern, the total integration time may be halved, if a one-shot measurement is performed (as discussed herein), such that this measurement may be subject to a higher noise error. However, the integration time may be increased up to a value at which the fine-depth data may be successfully unwrapped, i.e. at which the resulting-depth data may be generated without an unwrapping error. Hence, it may not be necessary to double the integration time in the low-frequency measurement in order to compensate for the increased noise, such that in total, a measurement time may still be smaller than in a known four-component ToF measurement.
Generally, the present disclosure is not limited to only two demodulation patterns with two different frequencies since more demodulation patterns can be applied, as well. For example, a first, a second, and a third demodulation pattern can be applied, each having different demodulation frequencies (and/or signal types), wherein the principles of the present disclosure for determining a depth can be applied accordingly when more than two (three or even more) demodulation patterns are envisaged.
In some embodiments, the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit, as discussed herein.
Such imaging units may be referred to as IQ mosaic imaging units. In an IQ mosaic imaging unit, a spatial phase offset pattern may be applied to the imaging elements, as discussed herein. Thereby, instead of changing phase offsets in time (i.e. in sequential captures), the phase offsets can be arranged spatially.
In such embodiments, with only having one or two captures, a similar noise performance may be achieved as with four captures of a known iToF sensor.
In some embodiments, the low-frequency demodulation pattern includes one phase demodulation, as discussed herein.
In some embodiments, the high-frequency demodulation pattern includes two phase-demodulations, as discussed herein.
In some embodiments, the time-of-flight image sensor circuitry is further configured to: shift readout-phases of the first and the second imaging portions for applying the two phase-demodulations, as discussed herein. It should be noted that the shifting of the readout-phases is not limited to the embodiment described above. For example, in a first shot of the high-frequency demodulation pattern, the first imaging portion may have one hundred eighty degrees of phase and the second imaging portion may have a phase of ninety degrees and in a second shot, the first imaging portion may have a phase of zero degrees and the second imaging portion may have a phase of two hundred seventy degrees. Generally, any predetermined phase-shifting pattern (and high-frequency demodulation pattern) may be applied as it is needed for the specific use-case. For example, it may be envisaged to have a forty-five degrees phase-shift in a first shot and a ninety degrees phase-shift in a second shot. The present disclosure is also not limited to have two shots in the high-frequency demodulation pattern as more shots may be envisaged as well.
In some embodiments, in the high-frequency demodulation pattern, only one shot may be carried out. In such embodiments, the imaging unit may have a first to fourth imaging portion, which may have predefined phase-shifts to each other, such as zero degrees (of the first imaging portion), ninety degrees (of the second imaging portion), one hundred and eighty degrees (of the third imaging portion), and two hundred and seventy degrees (of the fourth imaging portion), or the like.
In some embodiments, the time-of-flight image sensor circuitry is further configured to: remove the ambiguity by comparing a coarse depth with a fine depth.
In such embodiments, the coarse depth and the fine depth may be determined based on the respective coarse and fine-depth data and the fine depth may be corrected based on the coarse depth.
In some embodiments, the time-of-flight image sensor circuitry is further configured to: determine the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data.
In such embodiments, only the coarse depth may be determined, and the fine-depth data may be corrected, marked, processed, or the like, such that the fine-depth data may not include the ambiguity anymore.
In some embodiments, the high-frequency of the high-frequency demodulation pattern is a multiple of a low-frequency of the low-frequency demodulation pattern.
For example, the low frequency may be 20 MHz and the high frequency may be 100 MHz, or the low frequency may be 10 MHz and the high frequency may be 80 MHz, without limiting the present disclosure in that regard.
When the high frequency is an (integer) multiple of the low frequency, an incidence of fine-depth data points may be the same multiple of an incidence of coarse-depth data points, such that a calibration of the fine-depth data may be simplified.
In some embodiments, the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data.
The artificial intelligence may be any type of artificial intelligence (strong or weak), such as a neural network, a support vector machine, a Bayesian network, a genetic algorithm, or the like. The artificial intelligence may adopt a machine learning algorithm, for example, including supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, feature learning, sparse dictionary learning, anomaly detection learning, decision tree learning, association rule learning, or the like.
The machine learning algorithm may further be based on at least one of the following: Feature extraction techniques, classifier techniques or deep-learning techniques. Feature extraction may be based on at least one of: Scale Invariant Feature Transform (SIFT), Gray Level Co-occurrence Matrix (GLCM), Gabor Features, Tubeness or the like. Classifiers may be based on at least one of: Random Forest; Support Vector Machine; Neural Net, Bayes Net or the like. Deep learning may be based on at least one of: Autoencoders, Generative Adversarial Network, Weakly Supervised Learning, Boot-Strapping or the like, without limiting the present disclosure in that regard.
In some embodiments, the time-of-flight image sensor circuitry is further configured to: determine the resulting-depth data based on a detection of a spotted light pattern on the imaging unit.
For example, a spotted light source may be combined with the time-of-flight image sensor circuitry according to the present disclosure. As it is generally known for spot-ToF, a deterioration of light spots may be indicative for the depth of the scene.
By using an imaging unit according to the present disclosure, the light spots may be read out with a phase-shifted demodulation pattern, as well, which will be discussed further below in more detail.
Some embodiments pertain to a time-of-flight image sensor circuitry control method for controlling a time-of-flight image sensor circuitry including an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the method further includes: applying a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; applying a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generating resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
The method may be carried out with ToF image sensor circuitry according to the present disclosure or with control circuitry (e.g. a processor or any external device), for example.
In some embodiments, the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit, as discussed herein. In some embodiments, the low-frequency demodulation pattern includes one phase demodulation as discussed herein. In some embodiments, the high-frequency demodulation pattern includes two phase-demodulations as discussed herein. In some embodiments, the time-of-flight image sensor circuitry control method further includes: shifting readout-phases of the first and the second imaging portions for applying the two phase-demodulations as discussed herein. In some embodiments, the time-of-flight image sensor circuitry control method further includes: removing the ambiguity by comparing a coarse depth with a fine depth as discussed herein. In some embodiments, the time-of-flight image sensor circuitry control method further includes: determining the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data as discussed herein. In some embodiments, a high-frequency of the high-frequency demodulation pattern is a multiple of a low-frequency of the low-frequency demodulation pattern as discussed herein. In some embodiments, the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data as discussed herein. In some embodiments, the time-of-flight image sensor circuitry control method further includes: determining the resulting-depth data based on a detection of a spotted light pattern on the imaging unit, as discussed herein.
The methods as described herein are also implemented in some embodiments as a computer program causing a computer and/ or a processor to perform the method, when being carried out on the computer and/or processor. In some embodiments, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the methods described herein to be performed.
Returning to Fig. 3, there is depicted an imaging unit 20 according to the present disclosure. The imaging unit 20 includes a first imaging portion 21, which has a plurality of imaging elements 22 having a phase of Θ. The imaging unit 20 further includes a second imaging portion 23, which has a plurality of imaging elements 24 having a phase of Θ+π/2. Hence, the predefined readout-phase shift between the first and the second imaging portions is π/2.
For example, in a first phase measurement, Θ is zero degrees and π is one hundred and eighty degrees. Since the phase offset is π/2, zero and ninety degrees are measured in the first phase measurement. In a second phase measurement, Θ may be one hundred and eighty degrees and π is still one hundred and eighty degrees (but only π/2 is added), such that the four phases zero, ninety, one hundred and eighty, and two hundred and seventy may be measured in two shots. As discussed herein, the two shots may be performed at a high frequency, and one shot, having for example Θ as zero degrees and π/2 as ninety degrees without changing Θ, may be performed at a low frequency, and the one-shot measurement may be used for unwrapping the two-shot measurement, i.e. for removing the ambiguity of the two-shot measurement.
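The resulting capture sequence can be summarized, for illustration, by the following Python sketch; the capture() driver call and the frequency values are hypothetical placeholders and not part of the disclosure.

```python
def capture(f_mod_hz, phase_first_deg, phase_second_deg):
    """Stand-in for a hypothetical sensor driver call that programs the
    demodulation frequency and the per-portion phase offsets and reads a frame."""
    print(f"capture at {f_mod_hz / 1e6:.0f} MHz, "
          f"portions at {phase_first_deg} deg / {phase_second_deg} deg")

HIGH_F, LOW_F = 100e6, 20e6  # example frequencies, not mandated by the text

# Two high-frequency shots (Θ = 0°, then Θ = 180°) yield the fine-depth data;
# one low-frequency shot with Θ kept at 0° yields the coarse-depth data.
schedule = [
    (HIGH_F, 0, 90),
    (HIGH_F, 180, 270),
    (LOW_F, 0, 90),
]

for f_mod, phase_a, phase_b in schedule:
    capture(f_mod, phase_a, phase_b)
```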
Such a technique may be carried out with each of the image sensors discussed under reference of Figs. 3 to 6, such that a repetitive description will be omitted in the following.
Generally, by using an imaging unit according to these embodiments of Figs. 3 to 6, by only performing two shots (i.e. a first shot with first phase offsets and a second shot with second phase offsets), four phases can be acquired, such that a noise performance may be achieved which is comparable to known ToF devices using four acquisitions.
Fig. 4 depicts an imaging unit 30 according to the present disclosure. Generally, the imaging unit 30 may be the same as imaging unit 20 as discussed under reference of Fig. 3, but the readout-phases may be defined differently. Hence, there is no need to manufacture a completely new imaging unit since it may be sufficient to amend the readout and apply the demodulation signals differently. This concept is generally applicable to any imaging unit as discussed herein, e.g. under reference of Figs. 3 to 7.
The imaging unit 30 of Fig. 4 is different from the imaging unit 20 of Fig. 3 in that the imaging portions are arranged row-wise instead of column-wise.
Fig. 5 depicts an imaging unit 40 having imaging portions arranged two-column-wise, i.e. two columns of a first imaging portion 41 are arranged alternating with two columns of a second imaging portion 42.
An imaging unit 50 of Fig. 6 depicts imaging elements 51 of a first imaging portion alternatingly arranged with imaging elements 52 of a second imaging portion, such that the first and the second imaging portions constitute a checkerboard pattern on the imaging unit 50.
Fig. 7 depicts an imaging unit 60 with four imaging portions, having imaging elements 61 to 64 which are arranged alternating and are phase-shifted with a predefined phase-shift of ninety degrees (π/2) to each other. Hence, the second imaging portion has a phase-shift of ninety degrees with respect to the first imaging portion. The third imaging portion has a phase-shift of one hundred and eighty degrees with respect to the first imaging portion. The fourth imaging portion has a phase-shift of two hundred and seventy degrees with respect to the first imaging portion. In this embodiment, a whole four-phase-measurement can be carried out in one shot. Hence, according to this embodiment, it is not necessary to carry out a two-shot measurement at the high frequency since one shot may be sufficient.
In Fig. 8, there is illustrated an embodiment of a two-shot measurement 70 with the imaging unit 20 which has been described under reference of Fig. 3.
In a first capture, the first imaging portion 21 has a phase of zero degrees and the second imaging portion 23 has a phase of ninety degrees. In a second capture, the first imaging portion 21 has a phase of one hundred and eighty degrees and the second imaging portion 23 has a phase of two hundred and seventy degrees. The readouts are subtracted from each other, resulting in an I-value on the first imaging portion 21 and in a Q-value on the second imaging portion 23. Thereby, mosaiced I-Q-values are obtained.
In other words: By doing a subtraction of the two captures, the two required differential measurements (known as I and Q) are obtained and can be combined to form a phase image.
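As a minimal sketch of this differential I/Q computation (assuming two raw captures and a boolean mask marking the first imaging portion; the function and argument names are illustrative and not taken from the disclosure):

```python
import numpy as np

def iq_from_two_shot(capture_0_90, capture_180_270, first_portion_mask):
    """Differential two-shot sketch: subtract the two captures and read the
    result as I on the first imaging portion and Q on the second one."""
    diff = capture_0_90 - capture_180_270               # differential measurement
    i_mosaic = np.where(first_portion_mask, diff, 0.0)  # I on the first portion
    q_mosaic = np.where(~first_portion_mask, diff, 0.0) # Q on the second portion
    return i_mosaic, q_mosaic

def wrapped_phase(i_value, q_value):
    # Wrapped phase in [0, 2π); this is what forms the phase image.
    return np.mod(np.arctan2(q_value, i_value), 2.0 * np.pi)
```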
Hence, applying an IQ mosaic pattern according to the present disclosure may reduce a total number of captures required, as already discussed herein, and further, power consumption may be reduced since a lower number of sensor reads is needed. Furthermore, motion blur may be reduced significantly.
A loss of spatial resolution resulting from such a method may be recovered by a demosaicking algorithm, for example for finding “missing” I and Q values.
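The disclosure does not prescribe a particular demosaicking algorithm; purely as a hedged example, missing I (or Q) columns of a column-wise mosaic could be filled by averaging the neighboring valid columns:

```python
import numpy as np

def demosaic_columns(mosaic, valid_columns):
    """Naive demosaicking sketch for a column-wise I/Q mosaic: a missing
    column is estimated from the mean of its valid horizontal neighbors.
    'valid_columns' is a 1-D numpy boolean array over the column index."""
    filled = mosaic.astype(float).copy()
    n_cols = mosaic.shape[1]
    for c in np.where(~valid_columns)[0]:
        neighbors = []
        if c > 0 and valid_columns[c - 1]:
            neighbors.append(mosaic[:, c - 1])
        if c + 1 < n_cols and valid_columns[c + 1]:
            neighbors.append(mosaic[:, c + 1])
        if neighbors:
            filled[:, c] = np.mean(neighbors, axis=0)
    return filled
```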
Further measurement errors which may be introduced based on the methods discussed herein (e.g. increased fixed pattern noise (FPN)) are considered as small and may be corrected by calibration.
In Fig. 9, a one-shot measurement 80 is illustrated which is carried out with the imaging unit 20 of Fig. 3.
As discussed, throughout a one-shot measurement, only one phase-measurement is performed according to the present disclosure. The first imaging portion 21 has a phase of zero degrees and the second imaging portion 23 has a phase of ninety degrees. The one-shot measurement 80 is indicative of mosaiced I-Q-values on the respective imaging portion. In particular, the readouts of the first imaging portion 21 are indicative of an I-value and the readouts of the second imaging portion 23 are indicative of a Q-value.
In other words: In the embodiment of Fig. 9, the columns of one capture are directly used as the I and Q values, such that there is no second capture and no differential is taken. Although this may reduce an accuracy (since the differential may reduce an impact of ambient light and may reduce an impact of a pixel-tap-mismatch), this measurement may be used to reduce the depth ambiguity of the two-shot measurement of Fig. 8, such that the resulting depth of this measurement may only be used as the coarse depth, as discussed herein.
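A corresponding one-shot sketch (again with illustrative names, assuming a column-wise pattern) simply reads the single capture as mosaiced I and Q without taking a differential:

```python
import numpy as np

def iq_from_one_shot(capture_0_90, first_portion_mask):
    """One-shot sketch: the single capture directly provides mosaiced I/Q.
    Ambient light and tap mismatch are not cancelled, so the derived depth
    is only used as the coarse depth."""
    i_mosaic = np.where(first_portion_mask, capture_0_90, 0.0)
    q_mosaic = np.where(~first_portion_mask, capture_0_90, 0.0)
    return i_mosaic, q_mosaic
```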
Furthermore, a reduced noise performance may be compensated with a double integration time (which is, however, not necessary), and motion blur is reduced since read time and idle time may be removed. Moreover, a power consumption may be reduced due to fewer readouts (i.e. one).
In some embodiments, the above-mentioned tap-mismatch may be calibrated.
The one-shot measurement 80 is carried out at a sampling frequency of 20 MHz and the two-shot measurement 70 is carried out at a sampling frequency of 60 MHz, such that the depth ambiguity of the two-shot measurement 70 can be removed based on the one-shot measurement.
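The relation between these two frequencies and their unambiguous ranges follows from the usual continuous-wave relation d_unamb = c / (2·f_mod); the following short check is only an illustration (the 100 MHz value is merely the example used further below, not a claimed operating point):

```python
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range_m(f_mod_hz):
    # d_unamb = c / (2 * f_mod) for a continuous-wave iToF measurement
    return C / (2.0 * f_mod_hz)

print(round(unambiguous_range_m(20e6), 2))   # ~7.49 m, low-frequency one-shot
print(round(unambiguous_range_m(60e6), 2))   # ~2.5 m, high-frequency two-shot
print(round(unambiguous_range_m(100e6), 2))  # ~1.5 m, example discussed below
```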
In Fig. 10, there is depicted a time-of-flight sensor circuitry control method 90 according to the present disclosure in a block diagram.
At 91, an IQ-one-shot is carried out at 20 MHz for obtaining coarse-depth data which are indicative of a coarse depth.
At 92, the coarse depth is computed based on the IQ-one-shot.
At 93, an IQ-two-shot is carried out at 60 MHz for obtaining fine-depth data which are indicative of a fine depth.
At 94, the fine depth is computed based on the IQ-two-shot.
At 95, an unwrapped depth is computed, i.e. the depth ambiguity of the fine-depth data is removed with an index dual unwrapping technique (which will be discussed under reference of Fig. 12).
At 96, resulting-depth data are output which are indicative of an unwrapped depth of the high-frequency measurement.
Fig. 11 depicts a block diagram of a time-of-flight sensor circuitry control method 100 according to the present disclosure.
The time-of-flight sensor circuitry control method 100 is different from the method 90 in that after 95, a software processing layer is applied, at 101, which is configured to determine if/how the first frequency can be combined with the second frequency, or in other words, if/how the low-frequency measurement can be combined with the high-frequency measurement, i.e. a mixed dual unwrapping technique is used in this method 100. The software processing layer includes an artificial intelligence which utilizes a machine learning algorithm which has been trained, based on a supervised training, to determine how to remove the depth ambiguity from the fine-depth data, i.e. how to output the resulting-depth data.
Furthermore, in the software processing layer, pixels in the low-frequency measurement are detected which have a high accuracy such that it is not detrimental to combine these pixels with the high-frequency measurement (i.e. for these pixels an unwrapping may not be needed since they are already correct within a predetermined threshold). These pixels may be detected based on an amplitude of the signal and/or due to smart filtering of the artificial intelligence (which includes a neural network in this embodiment).
Hence, at 102 resulting-depth data are output. In other words, the combined unwrapped depth of the high frequency is output with the software processed depth of the low frequency as the correct depth.
As indicated above, Fig. 12 shows an illustration of the index unwrapping principle 110.
In a low-frequency measurement at 20 MHz, a coarse depth 111 is determined which is subject to a coarse-depth measurement error 112. Furthermore, a maximum measurement error 113 is depicted for which a depth ambiguity of a high-frequency measurement can be removed successfully.
Furthermore, in a high-frequency measurement at 60 MHz, a fine depth 114 is determined, which is subject to a depth ambiguity, i.e. in fact three fine depth values 114 are determined in this embodiment. The fine depth 114 is subject to a fine-depth measurement error 115, which is however smaller than the coarse-depth measurement error 112.
In this embodiment, the measurement errors are considered to be Gaussian, but the present disclosure is not limited to such an error type and the error type may depend on the specific use-case.
In this embodiment, the depth noise is defined as:

$$\sigma_{\text{depth}} = \frac{c}{4\pi f_{\text{mod}}}\,\sigma_{\varphi},$$

wherein π is one hundred and eighty degrees in rad, c is the speed of light, f_mod is the (de)modulation frequency, and σ_φ is noise of a phase domain which is defined as constant between demodulation frequencies. However, the noise of the phase domain may increase with increasing modulation frequency, for example due to modulation contrast. However, such influences can be neglected since they may be sufficiently small compared to a change of depth noise due to the modulation frequency f_mod. In other words: the depth noise increases with decreasing modulation frequency.

Returning to Fig. 12, a resulting depth 116 is determined based on a comparison of the coarse depth with the fine depth. It is determined that one of the fine depth values lies within the measurement error range of the coarse depth 111. Hence, the coarse depth value 111 and the fine depth value 114 do not have to align exactly since it is sufficient when the fine depth value 114 lies within a predefined error range 113 of the coarse depth value 111.
This fine depth value 114 is then considered to be the true depth and is output accordingly as the resulting depth 116.
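As a minimal sketch of this comparison (an assumption-laden illustration, not the claimed procedure), the fine-depth candidate closest to the coarse depth can be selected by rounding to the nearest unwrapping index:

```python
C = 299_792_458.0  # speed of light in m/s

def unwrap_fine_depth(coarse_depth_m, fine_depth_m, f_high_hz):
    """Index unwrapping sketch: pick the candidate fine + k * d_unamb that
    lies closest to the coarse depth; thresholds and error handling are
    omitted in this illustration."""
    d_unamb = C / (2.0 * f_high_hz)
    k = round((coarse_depth_m - fine_depth_m) / d_unamb)
    return fine_depth_m + k * d_unamb

# Example with the frequencies used above: a wrapped 60 MHz fine depth of
# 0.8 m and a 20 MHz coarse depth of 5.7 m unwrap to roughly 5.8 m.
print(round(unwrap_fine_depth(5.7, 0.8, 60e6), 2))
```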
Note that also the embodiment of Fig. 11 can be applied to a measurement as shown in Fig. 12, i.e. a dual unwrapping method can be applied as well by adding a software processing layer, as discussed under reference of Fig. 11.
According to the present disclosure, for unwrapping the second capture (i.e. without an unwrapping error), the measurement error (accuracy and/or noise) of the first capture (i.e. of the coarse depth) should be (roughly) less than half of the unambiguous range of the second capture (i.e. the fine depth).
For example, in the embodiment of Fig. 12, the maximum measurement error 113 extends roughly to the half of the unambiguous range of the 60 MHz measurement. A measurement value on the extreme left of the error bars 113 may, however, result in an unwrapping error. A measurement value on the extreme right of the error bars 113 may be closer to the next bin of the 60 MHz measurement (since the phase is cyclic).
Hence, the above statement that the measurement error should be “less than half of the unambiguous range” should be considered, in some embodiments, as a rule of thumb and may depend on the specific use-case, in particular since the high-frequency measurement may also be impacted by its own combined accuracy and/or noise error, such that it may be possible to have an unwrapping error with less than half a wavelength accuracy and noise values.
Hence, it should be noted that the error bars of Fig. 12 are only for illustrational purposes and should not be considered as binding since the measurement error may depend on the measurement environment, as generally known.
However, as already discussed above, the maximum combined accuracy plus noise value may be considered as rather high (when the high frequency is still below a predetermined threshold), such that for the unwrapping embodiment of Fig. 12, IQ-one-shot may be used as the low-frequency measurement since it may be within requirements for combined accuracy and noise for unwrapping. For example, when the high frequency is 100 MHz (instead of 60 MHz as discussed above), the unambiguous distance is roughly one and a half meters, such that combined accuracy plus noise only needs to be less than roughly seventy-five centimeters which may be considered as being in a usable range.
Therefore, although IQ-one-shot may result in a reduced accuracy (e.g. based on tap-mismatch and ambient light impact), the accuracy plus noise of IQ-one-shot may lie within requirements for correctly unwrapping the high-frequency measurement (which may have a higher accuracy (e.g. IQ-two-shot)).
However, it should be noted that the present disclosure is not limited to a combination of IQ-one-shot and IQ-two-shot since a combination of IQ-one-shot and standard four component ToF may be envisaged, as well, as already discussed above.
In Fig. 13, a schematic illustration 120 is depicted for explaining how to combine spot-ToF with an imaging unit according to the present disclosure. This embodiment may be envisaged when performing a long-range outdoor measurement, for example when the object which should be measured is so far away that also the low-frequency measurement will have a depth ambiguity. This embodiment may additionally or alternatively be envisaged for mobile platforms (e.g. a smartphone) when power consumption is important.
Exemplarily, the imaging unit 20 of Fig. 3 is shown, but the present disclosure is not limited to using this specific embodiment for the combination with spot-ToF.
On the imaging unit 20, a light spot 121 is depicted which represents a reflected light spot from a light system. Based on the position of the spot and utilizing a ToF sensor circuitry control method according to the present disclosure, the power consumption may be further reduced (i.e. by reducing an amount of emitted light).
Furthermore, the shape of the light spots can be modified to a shape that fits a pattern of the imaging unit, as shown in an illustration 130 of Fig. 14. In this embodiment, an oval spot 131 is depicted on the imaging unit 20 such that only the imaging elements on which the spot lies have to be read out.
Fig. 15 depicts an embodiment of an illumination pattern 135 according to the present disclosure. As it is generally known, the illumination of an iToF system may roughly correspond to the demodulation signals. This may be achieved by delaying the illumination signal onto the imaging element for readout. First, for an IQ-one-shot, a low frequency is emitted. Second, after a time for reading out the IQ-one-shot, an IQ-two-shot is carried out at a high frequency, as discussed herein.
Referring to Fig. 16, there is illustrated an embodiment of a time-of-flight (ToF) imaging apparatus 140, which can be used for depth sensing or providing a distance measurement, in particular for the technology as discussed herein, wherein the ToF imaging apparatus 140 is configured as an iToF camera. The ToF imaging apparatus 140 has time-of-flight image sensor circuitry 147, which is configured to perform the methods as discussed herein and which forms a control of the ToF imaging apparatus 140 (and it includes, not shown, corresponding processors, memory and storage, as it is generally known to the skilled person).
The ToF imaging apparatus 140 has a modulated light source 141 and it includes light emitting elements (based on laser diodes), wherein in the present embodiment, the light emitting elements are narrow band laser elements.
The light source 141 emits light, i.e. modulated light, as discussed herein, to a scene 142 (region of interest or object), which reflects the light. The reflected light is focused by an optical stack 143 to a light detector 144.
The light detector 144 has a time-of-flight imaging portion 145, as discussed herein, which is implemented based on multiple CAPDs formed in an array of pixels, and a micro lens array 146 which focuses the light reflected from the scene 142 to the time-of-flight imaging portion 145 (to each pixel of the image sensor).
The light emission time and modulation information is fed to the time-of-flight image sensor circuitry or control 147 including a time-of-flight measurement unit 148, which also receives respective information from the time-of-flight imaging portion 145, when the light is detected which is reflected from the scene 142. On the basis of the modulated light received from the light source 141, the time-of-flight measurement unit 148 computes a phase shift of the received modulated light which has been emitted from the light source 141 and reflected by the scene 142 and, on the basis thereof, computes a distance d (depth information) between the image sensor 145 and the scene 142.
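The phase-to-distance relation used here corresponds to the usual iToF conversion d = c·Δφ/(4π·f_mod); a brief sketch (illustrative only, and valid only up to the unambiguous range):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase_m(phase_shift_rad, f_mod_hz):
    # d = c * delta_phi / (4 * pi * f_mod)
    return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

print(round(distance_from_phase_m(math.pi, 60e6), 3))  # ~1.249 m at 60 MHz
```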
The depth information is fed from the time-of-flight measurement unit 148 to a 3D image reconstruction unit 149 of the time-of-flight image sensor circuitry 147, which reconstructs (generates) a 3D image of the scene 142 based on the depth data. Moreover, object ROI detection, image labeling, applying a morphological operation, and mobile phone recognition, as discussed herein, are performed.

Fig. 17 depicts a block diagram of a further embodiment of a time-of-flight image sensor circuitry control method 150 according to the present disclosure.
At 151, a low-frequency demodulation pattern is applied, i.e. a one-shot measurement at 20 MHz is performed, as discussed herein.
At 152, a high-frequency demodulation pattern is applied, i.e. a two-shot measurement at 60 MHz is performed, as discussed herein.
At 153, resulting-depth data is generated, as discussed herein.
In Fig. 18, a further embodiment of a time-of-flight image sensor circuitry control method 160 is depicted in a block diagram.
The method 160 is different from the method 150 in that after 152, at 161, readout-phases are shifted for performing the two-shot measurement.
The resulting depth data is generated, at 162, by comparing depths, i.e. by using the technique as discussed under reference of Fig. 12.
Fig. 19 depicts a further embodiment of a time-of-flight image sensor circuitry control method 170 according to the present disclosure in a block diagram.
The method 170 is different from the method 160 in that, after 163, at 171, the resulting depth data is generated by calibrating the two-shot measurement based on the coarse depth, as discussed herein.
In Fig. 20, a further embodiment of a time-of-flight image sensor circuitry control method 180 according to the present disclosure is depicted in a block diagram.
The method 180 is different from the method 150 in that after 152, at 181, resulting-depth data are generated based on a spotted light pattern, as discussed under reference of Figs. 13 or 14.
The technology according to an embodiment of the present disclosure is applicable to various products. For example, the technology according to an embodiment of the present disclosure may be implemented as a device included in a mobile body that is any of kinds of automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobility vehicles, airplanes, drones, ships, robots, construction machinery, agricultural machinery (tractors), and the like.
Fig. 21 is a block diagram depicting an example of schematic configuration of a vehicle control system 7000 as an example of a mobile body control system to which the technology according to an embodiment of the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected to each other via a communication network 7010. In the example depicted in Fig. 21, the vehicle control system 7000 includes a driving system control unit 7100, a body system control unit 7200, a battery control unit 7300, an outside-vehicle information detecting unit 7400, an in-vehicle information detecting unit 7500, and an integrated control unit 7600. The communication network 7010 connecting the plurality of control units to each other may, for example, be a vehicle-mounted communication network compliant with an arbitrary standard such as controller area network (CAN), local interconnect network (LIN), local area network (LAN), FlexRay (registered trademark), or the like.
Each of the control units includes: a microcomputer that performs arithmetic processing according to various kinds of programs; a storage section that stores the programs executed by the microcomputer, parameters used for various kinds of operations, or the like; and a driving circuit that drives various kinds of control target devices. Each of the control units further includes: a network interface (I/F) for performing communication with other control units via the communication network 7010; and a communication I/F for performing communication with a device, a sensor, or the like within and without the vehicle by wire communication or radio communication. A functional configuration of the integrated control unit 7600 illustrated in Fig. 21 includes a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning section 7640, a beacon receiving section 7650, an in-vehicle device I/F 7660, a sound/image output section 7670, a vehicle-mounted network I/F 7680, and a storage section 7690. The other control units similarly include a microcomputer, a communication I/F, a storage section, and the like.
The driving system control unit 7100 controls the operation of devices related to the driving system of the vehicle in accordance with various kinds of programs. For example, the driving system control unit 7100 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine, a driving motor, or the like, a driving force transmitting mechanism for transmitting the driving force to wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like. The driving system control unit 7100 may have a function as a control device of an antilock brake system (ABS), electronic stability control (ESC), or the like.
The driving system control unit 7100 is connected with a vehicle state detecting section 7110. The vehicle state detecting section 7110, for example, includes at least one of a gyro sensor that detects the angular velocity of axial rotational movement of a vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting an amount of operation of an accelerator pedal, an amount of operation of a brake pedal, the steering angle of a steering wheel, an engine speed or the rotational speed of wheels, and the like. The driving system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detecting section 7110, and controls the internal combustion engine, the driving motor, an electric power steering device, the brake device, and the like.
The body system control unit 7200 controls the operation of various kinds of devices provided to the vehicle body in accordance with various kinds of programs. For example, the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various kinds of lamps such as a headlamp, a backup lamp, a brake lamp, a turn signal, a fog lamp, or the like. In this case, radio waves transmitted from a mobile device as an alternative to a key or signals of various kinds of switches can be input to the body system control unit 7200. The body system control unit 7200 receives these input radio waves or signals, and controls a door lock device, the power window device, the lamps, or the like of the vehicle.
The battery control unit 7300 controls a secondary battery 7310, which is a power supply source for the driving motor, in accordance with various kinds of programs. For example, the battery control unit 7300 is supplied with information about a battery temperature, a battery output voltage, an amount of charge remaining in the battery, or the like from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and performs control for regulating the temperature of the secondary battery 7310 or controls a cooling device provided to the battery device or the like.
The outside-vehicle information detecting unit 7400 detects information about the outside of the vehicle including the vehicle control system 7000. For example, the outside-vehicle information detecting unit 7400 is connected with at least one of an imaging section 7410 and an outside-vehicle information detecting section 7420. The imaging section 7410 includes at least one of a time-of-flight (ToF) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The outside-vehicle information detecting section 7420, for example, includes at least one of an environmental sensor for detecting current atmospheric conditions or weather conditions and a peripheral information detecting sensor for detecting another vehicle, an obstacle, a pedestrian, or the like on the periphery of the vehicle including the vehicle control system 7000.
The environmental sensor, for example, may be at least one of a rain drop sensor detecting rain, a fog sensor detecting a fog, a sunshine sensor detecting a degree of sunshine, and a snow sensor detecting a snowfall. The peripheral information detecting sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR device (Light detection and Ranging device, or Laser imaging detection and ranging device). Each of the imaging section 7410 and the outside-vehicle information detecting section 7420 may be provided as an independent sensor or device, or may be provided as a device in which a plurality of sensors or devices are integrated. Fig. 22 depicts an example of installation positions of the imaging section 7410 and the outside-vehicle information detecting section 7420. Imaging sections 7910, 7912, 7914, 7916, and 7918 are, for example, disposed at at least one of positions on a front nose, sideview mirrors, a rear bumper, and a back door of the vehicle 7900 and a position on an upper portion of a windshield within the interior of the vehicle. The imaging section 7910 provided to the front nose and the imaging section 7918 provided to the upper portion of the windshield within the interior of the vehicle obtain mainly an image of the front of the vehicle 7900. The imaging sections 7912 and 7914 provided to the sideview mirrors obtain mainly an image of the sides of the vehicle 7900. The imaging section 7916 provided to the rear bumper or the back door obtains mainly an image of the rear of the vehicle 7900. The imaging section 7918 provided to the upper portion of the windshield within the interior of the vehicle is used mainly to detect a preceding vehicle, a pedestrian, an obstacle, a signal, a traffic sign, a lane, or the like.
Incidentally, Fig. 22 depicts an example of photographing ranges of the respective imaging sections 7910, 7912, 7914, and 7916. An imaging range a represents the imaging range of the imaging section 7910 provided to the front nose. Imaging ranges b and c respectively represent the imaging ranges of the imaging sections 7912 and 7914 provided to the sideview mirrors. An imaging range d represents the imaging range of the imaging section 7916 provided to the rear bumper or the back door. A bird’s-eye image of the vehicle 7900 as viewed from above can be obtained by superimposing image data imaged by the imaging sections 7910, 7912, 7914, and 7916, for example.
Outside-vehicle information detecting sections 7920, 7922, 7924, 7926, 7928, and 7930 provided to the front, rear, sides, and corners of the vehicle 7900 and the upper portion of the windshield within the interior of the vehicle may be, for example, an ultrasonic sensor or a radar device. The outside-vehicle information detecting sections 7920, 7926, and 7930 provided to the front nose of the vehicle 7900, the rear bumper, the back door of the vehicle 7900, and the upper portion of the windshield within the interior of the vehicle may be a LIDAR device, for example. These outside-vehicle information detecting sections 7920 to 7930 are used mainly to detect a preceding vehicle, a pedestrian, an obstacle, or the like.
Returning to Fig. 21, the description will be continued. The outside-vehicle information detecting unit 7400 makes the imaging section 7410 image an image of the outside of the vehicle, and receives imaged image data. In addition, the outside-vehicle information detecting unit 7400 receives detection information from the outside-vehicle information detecting section 7420 connected to the outside-vehicle information detecting unit 7400. In a case where the outside-vehicle information detecting section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the outside-vehicle information detecting unit 7400 transmits an ultrasonic wave, an electromagnetic wave, or the like, and receives information of a received reflected wave. On the basis of the received information, the outside-vehicle information detecting unit 7400 may perform processing of detecting an object such as a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto. The outside-vehicle information detecting unit 7400 may perform environment recognition processing of recognizing a rainfall, a fog, road surface conditions, or the like on the basis of the received information. The outside-vehicle information detecting unit 7400 may calculate a distance to an object outside the vehicle on the basis of the received information.
In addition, on the basis of the received image data, the outside-vehicle information detecting unit 7400 may perform image recognition processing of recognizing a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto. The outside-vehicle information detecting unit 7400 may subject the received image data to processing such as distortion correction, alignment, or the like, and combine the image data imaged by a plurality of different imaging sections 7410 to generate a bird’s-eye image or a panoramic image. The outside-vehicle information detecting unit 7400 may perform viewpoint conversion processing using the image data imaged by the imaging section 7410 including the different imaging parts.
The in-vehicle information detecting unit 7500 detects information about the inside of the vehicle. The in-vehicle information detecting unit 7500 is, for example, connected with a driver state detecting section 7510 that detects the state of a driver. The driver state detecting section 7510 may include a camera that images the driver, a biosensor that detects biological information of the driver, a microphone that collects sound within the interior of the vehicle, or the like. The biosensor is, for example, disposed in a seat surface, the steering wheel, or the like, and detects biological information of an occupant sitting in a seat or the driver holding the steering wheel. On the basis of detection information input from the driver state detecting section 7510, the in-vehicle information detecting unit 7500 may calculate a degree of fatigue of the driver or a degree of concentration of the driver, or may determine whether the driver is dozing. The in-vehicle information detecting unit 7500 may subject an audio signal obtained by the collection of the sound to processing such as noise canceling processing or the like.
Furthermore, the in-vehicle information detecting unit may include time-of-flight image sensor circuitry according to the present disclosure for performing ICM, as discussed herein.
The integrated control unit 7600 controls general operation within the vehicle control system 7000 in accordance with various kinds of programs. The integrated control unit 7600 is connected with an input section 7800. The input section 7800 is implemented by a device capable of input operation by an occupant, such, for example, as a touch panel, a button, a microphone, a switch, a lever, or the like. The integrated control unit 7600 may be supplied with data obtained by voice recognition of voice input through the microphone. The input section 7800 may, for example, be a remote control device using infrared rays or other radio waves, or an external connecting device such as a mobile telephone, a personal digital assistant (PDA), or the like that supports operation of the vehicle control system 7000. The input section 7800 may be, for example, a camera. In that case, an occupant can input information by gesture. Alternatively, data may be input which is obtained by detecting the movement of a wearable device that an occupant wears. Further, the input section 7800 may, for example, include an input control circuit or the like that generates an input signal on the basis of information input by an occupant or the like using the above-described input section 7800, and which outputs the generated input signal to the integrated control unit 7600. An occupant or the like inputs various kinds of data or gives an instruction for processing operation to the vehicle control system 7000 by operating the input section 7800.
The storage section 7690 may include a read only memory (ROM) that stores various kinds of programs executed by the microcomputer and a random access memory (RAM) that stores various kinds of parameters, operation results, sensor values, or the like. In addition, the storage section 7690 may be implemented by a magnetic storage device such as a hard disc drive (HDD) or the like, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F 7620 is a communication I/F used widely, which communication I/F mediates communication with various apparatuses present in an external environment 7750. The general-purpose communication I/F 7620 may implement a cellular communication protocol such as global system for mobile communications (GSM (registered trademark)), worldwide interoperability for microwave access (WiMAX (registered trademark)), long term evolution (LTE (registered trademark)), LTE-advanced (LTE- A), or the like, or another wireless communication protocol such as wireless LAN (referred to also as wireless fidelity (Wi-Fi (registered trademark)), Bluetooth (registered trademark), or the like. The general-purpose communication I/F 7620 may, for example, connect to an apparatus (for example, an application server or a control server) present on an external network (for example, the Internet, a cloud network, or a company-specific network) via a base station or an access point. In addition, the general-purpose communication I/F 7620 may connect to a terminal present in the vicinity of the vehicle (which terminal is, for example, a terminal of the driver, a pedestrian, or a store, or a machine type communication (MTC) terminal) using a peer to peer (P2P) technology, for example.
The dedicated communication I/F 7630 is a communication I/F that supports a communication protocol developed for use in vehicles. The dedicated communication I/F 7630 may implement a standard protocol such, for example, as wireless access in vehicle environment (WAVE), which is a combination of institute of electrical and electronic engineers (IEEE) 802.11p as a lower layer and IEEE 1609 as a higher layer, dedicated short range communications (DSRC), or a cellular communication protocol. The dedicated communication I/F 7630 typically carries out V2X communication as a concept including one or more of communication between a vehicle and a vehicle (Vehicle to Vehicle), communication between a road and a vehicle (Vehicle to Infrastructure), communication between a vehicle and a home (Vehicle to Home), and communication between a pedestrian and a vehicle (Vehicle to Pedestrian).
The positioning section 7640, for example, performs positioning by receiving a global navigation satellite system (GNSS) signal from a GNSS satellite (for example, a GPS signal from a global positioning system (GPS) satellite), and generates positional information including the latitude, longitude, and altitude of the vehicle. Incidentally, the positioning section 7640 may identify a current position by exchanging signals with a wireless access point, or may obtain the positional information from a terminal such as a mobile telephone, a personal handyphone system (PHS), or a smart phone that has a positioning function.
The beacon receiving section 7650, for example, receives a radio wave or an electromagnetic wave transmitted from a radio station installed on a road or the like, and thereby obtains information about the current position, congestion, a closed road, a necessary time, or the like. Incidentally, the function of the beacon receiving section 7650 may be included in the dedicated communication I/F 7630 described above.
The in-vehicle device I/F 7660 is a communication interface that mediates connection between the microcomputer 7610 and various in-vehicle devices 7760 present within the vehicle. The in-vehicle device I/F 7660 may establish wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), near field communication (NFC), or wireless universal serial bus (WUSB). In addition, the in-vehicle device I/F 7660 may establish wired connection by universal serial bus (USB), high-definition multimedia interface (HDMI (registered trademark)), mobile high-definition link (MHL), or the like via a connection terminal (and a cable if necessary) not depicted in the figures. The in-vehicle devices 7760 may, for example, include at least one of a mobile device and a wearable device possessed by an occupant and an information device carried into or attached to the vehicle. The in-vehicle devices 7760 may also include a navigation device that searches for a path to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
The vehicle-mounted network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The vehicle-mounted network I/F 7680 transmits and receives signals or the like in conformity with a predetermined protocol supported by the communication network 7010.
The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 in accordance with various kinds of programs on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. For example, the microcomputer 7610 may calculate a control target value for the driving force generating device, the steering mechanism, or the braking device on the basis of the obtained information about the inside and outside of the vehicle, and output a control command to the driving system control unit 7100. For example, the microcomputer 7610 may perform cooperative control intended to implement functions of an advanced driver assistance system (ADAS) which functions include collision avoidance or shock mitigation for the vehicle, following driving based on a following distance, vehicle speed maintaining driving, a warning of collision of the vehicle, a warning of deviation of the vehicle from a lane, or the like. In addition, the microcomputer 7610 may perform cooperative control intended for automatic driving, which makes the vehicle to travel autonomously without depending on the operation of the driver, or the like, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of the obtained information about the surroundings of the vehicle.
The microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object such as a surrounding structure, a person, or the like, and generate local map information including information about the surroundings of the current position of the vehicle, on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. In addition, the microcomputer 7610 may predict danger such as collision of the vehicle, approaching of a pedestrian or the like, an entry to a closed road, or the like on the basis of the obtained information, and generate a warning signal. The warning signal may, for example, be a signal for producing a warning sound or lighting a warning lamp.
The sound/image output section 7670 transmits an output signal of at least one of a sound and an image to an output device capable of visually or auditorily notifying information to an occupant of the vehicle or the outside of the vehicle. In the example of Fig. 21, an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as the output device. The display section 7720 may, for example, include at least one of an on-board display and a head-up display. The display section 7720 may have an augmented reality (AR) display function. The output device may be other than these devices, and may be another device such as headphones, a wearable device such as an eyeglass type display worn by an occupant or the like, a projector, a lamp, or the like. In a case where the output device is a display device, the display device visually displays results obtained by various kinds of processing performed by the microcomputer 7610 or information received from another control unit in various forms such as text, an image, a table, a graph, or the like. In addition, in a case where the output device is an audio output device, the audio output device converts an audio signal constituted of reproduced audio data or sound data or the like into an analog signal, and auditorily outputs the analog signal.
Incidentally, at least two control units connected to each other via the communication network 7010 in the example depicted in Fig. 21 may be integrated into one control unit. Alternatively, each individual control unit may include a plurality of control units. Further, the vehicle control system 7000 may include another control unit not depicted in the figures. In addition, part or the whole of the functions performed by one of the control units in the above description may be assigned to another control unit. That is, predetermined arithmetic processing may be performed by any of the control units as long as information is transmitted and received via the communication network 7010. Similarly, a sensor or a device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.
Incidentally, a computer program for realizing at least one of the time-of-flight sensor circuitry control methods according to the present disclosure can be implemented in one of the control units or the like. In addition, a computer readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. In addition, the above-described computer program may be distributed via a network, for example, without the recording medium being used.
In the vehicle control system 7000 described above, any time-of-flight image sensor circuitry according to the present disclosure can be applied to the integrated control unit 7600 in the application example depicted in Fig. 21.
In addition or alternatively, at least part of any of the time-of-flight sensor circuitry according to the present disclosure may be implemented in a module (for example, an integrated circuit module formed with a single die) for the integrated control unit 7600 depicted in Fig. 21. Alternatively, the time-of-flight sensor circuitry may be implemented by a plurality of control units of the vehicle control system 7000 depicted in Fig. 21.
It should be recognized that the embodiments describe methods with an exemplary ordering of method steps. The specific ordering of method steps is however given for illustrative purposes only and should not be construed as binding. For example, the ordering of 151 and 152 in the embodiment of Fig. 17 may be exchanged. Also, the ordering of 92 and 93 in the embodiment of Fig. 10 or 11 may be exchanged. Further, also the ordering of 152 (and 161) and 171 in the embodiment of Fig. 19 may be exchanged. Other changes of the ordering of method steps may be apparent to the skilled person.
Please note that the division of the control 147 into units 148 and 149 is only made for illustration purposes and that the present disclosure is not limited to any specific division of functions in specific units. For instance, the control 147 could be implemented by a respective programmed processor, field programmable gate array (FPGA) and the like.
The method can also be implemented as a computer program causing a computer and/or a processor, such as processor 147 discussed above, to perform the method, when being carried out on the computer and/or processor. In some embodiments, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the method described to be performed.
All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software.
In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure.
Note that the present technology can also be configured as described below.
(1) A time-of-flight image sensor circuitry comprising: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
(2) The time-of-flight image sensor circuitry of (1), wherein the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit.
(3) The time-of-flight image sensor circuitry of anyone of (1) or (2), wherein the low-frequency demodulation pattern includes one phase demodulation.
(4) The time-of-flight image sensor circuitry of anyone of (1) to (3), wherein the high-frequency demodulation pattern includes two phase-demodulations.
(5) The time-of-flight image sensor circuitry of (4), further configured to: shift readout-phases of the first and the second imaging portions for applying the two phase-demodulations.
(6) The time-of-flight image sensor circuitry of anyone of (1) to (5), further configured to: remove the ambiguity by comparing a coarse depth with a fine depth.
(7) The time-of-flight image sensor circuitry of anyone of (1) to (6), further configured to: determine the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data.
(8) The time-of-flight image sensor circuitry of anyone of (1) to (7), wherein a high-frequency of the high-frequency demodulation pattern is a multiple of a low-frequency of the low-frequency demodulation pattern.
(9) The time-of-flight image sensor circuitry of anyone of (1) to (8), wherein the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data.
(10) The time-of-flight image sensor circuitry of anyone of (1) to (9), further configured to: determine the resulting-depth data based on a detection of a spotted light pattern on the imaging unit.
(11) A time-of-flight image sensor circuitry control method for controlling a time-of-flight image sensor circuitry including an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the method further comprises: applying a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; applying a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generating resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
(12) The time-of-flight image sensor circuitry control method of (11), wherein the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit.
(13) The time-of-flight image sensor circuitry control method of anyone of (11) or (12), wherein the low-frequency demodulation pattern includes one phase demodulation.
(14) The time-of-flight image sensor circuitry control method of anyone of (11) to (13), wherein the high-frequency demodulation pattern includes two phase-demodulations.
(15) The time-of-flight image sensor circuitry control method of (14), further comprising: shifting readout-phases of the first and the second imaging portions for applying the two phase-demodulations.
(16) The time-of-flight image sensor circuitry control method of anyone of (11) to (15), further comprising: removing the ambiguity by comparing a coarse depth with a fine depth.
(17) The time-of-flight image sensor circuitry control method of anyone of (11) to (16), further comprising: determining the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data.
(18) The time-of-flight image sensor circuitry control method of anyone of (11) to (17), wherein a high-frequency of the high-frequency demodulation pattern is a multiple of a low-frequency of the low-frequency demodulation pattern.
(19) The time-of-flight image sensor circuitry control method of anyone of (11) to (18), wherein the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data.
(20) The time-of-flight image sensor circuitry control method of anyone of (11) to (19), further comprising: determining the resulting-depth data based on a detection of a spotted light pattern on the imaging unit.
(21) A computer program comprising program code causing a computer to perform the method according to anyone of (11) to (20), when being carried out on a computer.
(22) A non-transitory computer-readable recording medium that stores therein a computer program product, which, when executed by a processor, causes the method according to anyone of (11) to (20) to be performed.

Claims

1. Time-of-flight image sensor circuitry comprising: an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the circuitry is further configured to: apply a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; apply a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generate resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
2. The time-of-flight image sensor circuitry of claim 1, wherein the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit.
3. The time-of-flight image sensor circuitry of claim 1, wherein the low-frequency demodulation pattern includes one phase demodulation.
4. The time-of-flight image sensor circuitry of claim 1, wherein the high-frequency demodulation pattern includes two phase-demodulations.
5. The time-of-flight image sensor circuitry of claim 4, further configured to: shift readout-phases of the first and the second imaging portions for applying the two phase-demodulations.
6. The time-of-flight image sensor circuitry of claim 1, further configured to: remove the ambiguity by comparing a coarse depth with a fine depth.
7. The time-of-flight image sensor circuitry of claim 1, further configured to: determine the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data.
8. The time-of-flight image sensor circuitry of claim 1, wherein a high-frequency of the high-frequency demodulation pattern is a multiple of a low-frequency of the low-frequency demodulation pattern.
9. The time-of-flight image sensor circuitry of claim 1, wherein the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data.
10. The time-of-flight image sensor circuitry of claim 1, further configured to: determine the resulting-depth data based on a detection of a spotted light pattern on the imaging unit.
11. A time-of-flight image sensor circuitry control method for controlling a time-of-flight image sensor circuitry including an imaging unit including a first imaging portion and a second imaging portion, wherein the first and the second imaging portions are driven based on a predefined readout-phase-shift, wherein the method further comprises: applying a low-frequency demodulation pattern to the first and the second imaging portions for generating coarse-depth data; applying a high-frequency demodulation pattern to the first and the second imaging portions for generating fine-depth data, wherein the fine-depth data is subject to a depth ambiguity; and generating resulting-depth data in which the ambiguity of the fine-depth data is removed based on the coarse-depth data.
12. The time-of-flight image sensor circuitry control method of claim 11, wherein the first imaging portion and the second imaging portion are arranged in a predefined pattern on the imaging unit.
13. The time-of-flight image sensor circuitry control method of claim 11, wherein the low-frequency demodulation pattern includes one phase demodulation.
14. The time-of-flight image sensor circuitry control method of claim 11, wherein the high-frequency demodulation pattern includes two phase-demodulations.
15. The time-of-flight image sensor circuitry control method of claim 14, further comprising: shifting readout-phases of the first and the second imaging portions for applying the two phase-demodulations.
16. The time-of-flight image sensor circuitry control method of claim 11, further comprising: removing the ambiguity by comparing a coarse depth with a fine depth.
17. The time-of-flight image sensor circuitry control method of claim 11, further comprising: determining the coarse depth based on the coarse-depth data for calibrating the fine-depth data for generating the resulting-depth data.
18. The time-of-flight image sensor circuitry control method of claim 11, wherein a high-frequency of the high-frequency demodulation pattern is a multiple of a low-frequency of the low-frequency demodulation pattern.
19. The time-of-flight image sensor circuitry control method of claim 11, wherein the resulting-depth data is generated based on an artificial intelligence which is configured to determine how to remove the ambiguity of the fine-depth data based on the coarse-depth data.
20. The time-of-flight image sensor circuitry control method of claim 11, further comprising: determining the resulting-depth data based on a detection of a spotted light pattern on the imaging unit.
PCT/EP2021/085600 2020-12-15 2021-12-14 Time-of-flight image sensor circuitry and time-of-flight image sensor circuitry control method WO2022128985A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180082922.XA CN116635748A (en) 2020-12-15 2021-12-14 Time-of-flight image sensor circuit and time-of-flight image sensor circuit control method
EP21836157.4A EP4264323A1 (en) 2020-12-15 2021-12-14 Time-of-flight image sensor circuitry and time-of-flight image sensor circuitry control method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP20214251.9 2020-12-15
EP20214251 2020-12-15

Publications (1)

Publication Number Publication Date
WO2022128985A1 true WO2022128985A1 (en) 2022-06-23

Family

ID=73854614

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/085600 WO2022128985A1 (en) 2020-12-15 2021-12-14 Time-of-flight image sensor circuitry and time-of-flight image sensor circuitry control method

Country Status (3)

Country Link
EP (1) EP4264323A1 (en)
CN (1) CN116635748A (en)
WO (1) WO2022128985A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110188028A1 (en) * 2007-10-02 2011-08-04 Microsoft Corporation Methods and systems for hierarchical de-aliasing time-of-flight (tof) systems
US20140327741A1 (en) * 2013-05-02 2014-11-06 Infineon Technologies Ag 3D Camera And Method Of Image Processing 3D Images
WO2020016075A1 (en) * 2018-07-17 2020-01-23 Sony Semiconductor Solutions Corporation Electronic device and method
US20210318443A1 (en) * 2018-07-17 2021-10-14 Sony Semiconductor Solutions Corporation Electronic device and method

Also Published As

Publication number Publication date
CN116635748A (en) 2023-08-22
EP4264323A1 (en) 2023-10-25

Similar Documents

Publication Publication Date Title
JP6834964B2 (en) Image processing equipment, image processing methods, and programs
WO2017159382A1 (en) Signal processing device and signal processing method
US10904503B2 (en) Image processing device, information generation device, and information generation method
JP6764573B2 (en) Image processing equipment, image processing methods, and programs
WO2017212928A1 (en) Image processing device, image processing method, and vehicle
US11255959B2 (en) Apparatus, method and computer program for computer vision
CN115244427A (en) Simulated laser radar apparatus and system
WO2020022137A1 (en) Photodetector and distance measurement apparatus
EP3585045B1 (en) Information processing device, information processing method, and program
CN113227836A (en) Distance measuring device and distance measuring method
WO2019163315A1 (en) Information processing device, imaging device, and imaging system
WO2022128985A1 (en) Time-of-flight image sensor circuitry and time-of-flight image sensor circuitry control method
EP4278330A1 (en) Object recognition method and time-of-flight object recognition circuitry
US11436706B2 (en) Image processing apparatus and image processing method for improving quality of images by removing weather elements
US20230161026A1 (en) Circuitry and method
US20230316546A1 (en) Camera-radar fusion using correspondences
US20230119187A1 (en) Circuitry and method
WO2022196316A1 (en) Information processing device, information processing method, and program
WO2023162734A1 (en) Distance measurement device
WO2023234033A1 (en) Ranging device
WO2022176532A1 (en) Light receiving device, ranging device, and signal processing method for light receiving device
CN117741575A (en) Generating 3D mesh map and point cloud using data fusion of mesh radar sensors
WO2024052392A1 (en) Circuitry and method
CN116457843A (en) Time-of-flight object detection circuit and time-of-flight object detection method
JP2022181125A (en) Information processing device, calibration system, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21836157

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180082922.X

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021836157

Country of ref document: EP

Effective date: 20230717