WO2021177045A1 - Signal processing device, signal processing method, and ranging module - Google Patents

Signal processing device, signal processing method, and ranging module

Info

Publication number
WO2021177045A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
signal processing
luminance
pixel
model
Prior art date
Application number
PCT/JP2021/006075
Other languages
English (en)
Japanese (ja)
Inventor
基 三原
俊 海津
優介 森内
Original Assignee
ソニーグループ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニーグループ株式会社
Publication of WO2021177045A1


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 - Measuring distances in line of sight; Optical rangefinders
    • G01C 3/02 - Details
    • G01C 3/06 - Use of electric means to obtain final indication
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 - Systems determining position data of a target
    • G01S 17/08 - Systems determining position data of a target for measuring distance only
    • G01S 17/32 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, with phase comparison between the received signal and the contemporaneously transmitted signal
    • G01S 17/88 - Lidar systems specially adapted for specific applications
    • G01S 17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S 7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/497 - Means for monitoring or calibrating

Definitions

  • the present technology relates to a signal processing device, a signal processing method, and a ranging module, and more particularly to a signal processing device, a signal processing method, and a ranging module capable of more appropriately correcting characteristic variations between taps.
  • In recent years, ranging modules have been mounted on mobile terminals such as smartphones. For example, smartphones are equipped with a ranging module that measures distance by the Indirect ToF (Time of Flight) method.
  • The Indirect ToF method emits irradiation light toward an object, receives the reflected light that returns after being reflected on the surface of the object, and calculates the distance to the object based on the flight time from the emission of the irradiation light to the reception of the reflected light.
  • In distance measurement by the Indirect ToF method, the light receiving sensor receives the reflected light at light receiving timings shifted by 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light.
  • A method of calculating the distance to an object using four phase images detected at four different phases with respect to the irradiation timing of the irradiation light is called the 4-Phase method.
  • One pixel has two charge storage units, a first tap and a second tap; by alternately distributing the received charge between the first tap and the second tap, two mutually inverted phases can be detected in one phase image. There is therefore also a 2-Phase method in which the distance to the object is calculated using two phase images.
  • The 2-Phase method has the advantage that the distance can be calculated with half the number of phase images of the 4-Phase method, but the characteristics of the first tap and the second tap, the two charge storage units of each pixel, vary from tap to tap (there is a sensitivity difference between taps), so this characteristic variation between taps must be corrected.
  • Patent Document 1 proposes a method of calculating the offset of each tap by pre-measurement to reduce the error of distance calculation.
  • However, the conventional correction formula for correcting the characteristic variation between taps divides by a value that approaches 0 when the amplitude of the received light waveform is small, so it could not be said to be sufficient in terms of calculation stability and noise immunity.
  • The present technology was devised in view of this situation, and makes it possible to more appropriately correct the characteristic variation between taps in a light receiving sensor having two charge storage units per pixel.
  • The signal processing device of the first aspect of the present technology is a signal processing device that performs processing for calculating the distance to an object based on pixel data obtained by a pixel that receives reflected light, that is, irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned. The signal processing device has a characteristic calculation unit that calculates correction parameters for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • In the signal processing method of the second aspect of the present technology, a signal processing device that performs processing for calculating the distance to an object based on pixel data obtained by a pixel that receives the reflected light calculates correction parameters that correct the characteristics of the first charge detection unit and the second charge detection unit of the pixel based on the luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • The ranging module of the third aspect of the present technology comprises a predetermined light emitting source and a ranging sensor having pixels that receive the reflected light, that is, the irradiation light emitted from the light emitting source, reflected by an object, and returned. The ranging sensor has a characteristic calculation unit that calculates correction parameters for correcting the characteristics of the first charge detection unit and the second charge detection unit of each pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • In the first to third aspects of the present technology, correction parameters that correct the characteristics of the first charge detection unit and the second charge detection unit of the pixel are calculated based on the luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • the signal processing device and the ranging module may be independent devices or may be modules incorporated in other devices.
  • It is a detailed flowchart of the fixed pattern noise evaluation process executed in step S105 of FIG. 23. It is a perspective view showing a chip configuration example of the distance measuring sensor. It is a block diagram showing a configuration example of a smartphone as an electronic device equipped with a ranging module. It is a block diagram showing an example of the schematic configuration of a vehicle control system. It is an explanatory diagram showing an example of the installation positions of the vehicle exterior information detection unit and the imaging unit.
  • FIG. 1 is a block diagram showing a schematic configuration example of a distance measuring module to which the present technology is applied.
  • the distance measurement module 11 shown in FIG. 1 is a distance measurement module that performs distance measurement by the Indirect ToF method, and has a light emitting unit 12 and a distance measurement sensor 13.
  • The ranging module 11 irradiates an object with irradiation light, receives the reflected light returned from the object, and generates and outputs a depth map as distance information to the object.
  • the distance measuring sensor 13 includes a control unit 14, a light receiving unit 15, and a signal processing unit 16.
  • The light emitting unit 12 includes, for example, a VCSEL array in which a plurality of VCSELs (Vertical Cavity Surface Emitting Lasers) are arranged in a plane as a light emitting source. The light emitting unit 12 emits light while being modulated at a timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light.
  • the control unit 14 controls the operation of the entire ranging module 11.
  • the control unit 14 controls the light emitting unit 12 by supplying a light emitting control signal of a predetermined frequency (for example, 200 MHz or the like) to the light emitting unit 12.
  • the control unit 14 also supplies a light emission control signal to the light receiving unit 15 in order to drive the light receiving unit 15 in accordance with the timing of light emission in the light emitting unit 12.
  • The light receiving unit 15 receives the reflected light from the object with a pixel array 32 in which a plurality of pixels 31 are two-dimensionally arranged, as will be described in detail later with reference to FIG. 2. The light receiving unit 15 then supplies pixel data, composed of detection signals corresponding to the amount of received reflected light, to the signal processing unit 16 in units of the pixels 31 of the pixel array 32.
  • The signal processing unit 16 calculates a depth value, which is the distance from the ranging module 11 to the object, based on the pixel data supplied from the light receiving unit 15 for each pixel 31 of the pixel array 32, generates a depth map in which the depth value is stored as the pixel value of each pixel 31, and outputs it to the outside of the module.
  • FIG. 2 is a block diagram showing a detailed configuration example of the light receiving unit 15.
  • The light receiving unit 15 has a pixel array 32 in which pixels 31, which generate a charge corresponding to the amount of received light and output a detection signal corresponding to that charge, are two-dimensionally arranged in a matrix in the row direction and the column direction, and a drive control circuit 33 arranged in the peripheral region of the pixel array 32.
  • The drive control circuit 33 outputs control signals for controlling the drive of the pixels 31 (for example, a distribution signal DIMIX, a selection signal ADDRESS DECODE, and a reset signal RST, which will be described later) based on the light emission control signal supplied from the control unit 14.
  • The pixel 31 has a photodiode 51 as a photoelectric conversion unit that generates a charge according to the amount of received light, a first tap 52A (first charge detection unit) that detects the charge generated by the photodiode 51, and a second tap 52B (second charge detection unit).
  • the electric charge generated by one photodiode 51 is distributed to the first tap 52A or the second tap 52B.
  • The charges distributed to the first tap 52A are output as a detection signal A from the signal line 53A, and the charges distributed to the second tap 52B are output as a detection signal B from the signal line 53B.
  • the first tap 52A is composed of a transfer transistor 41A, an FD (Floating Diffusion) unit 42A, a selection transistor 43A, and a reset transistor 44A.
  • the second tap 52B is composed of a transfer transistor 41B, an FD unit 42B, a selection transistor 43B, and a reset transistor 44B.
  • The reflected light is received by the photodiode 51 with a delay time ΔT. It is assumed that the waveform of the reflected light is the same as the emission waveform of the irradiation light, except that its phase is delayed (by the delay time ΔT) according to the distance to the object.
  • the distribution signal DIMIX_A controls the on / off of the transfer transistor 41A
  • the distribution signal DIMIX_B controls the on / off of the transfer transistor 41B.
  • the distribution signal DIMIX_A in FIG. 3 is a signal having the same phase as the irradiation light
  • the distribution signal DIMIX_B has a phase in which the distribution signal DIMIX_A is inverted.
  • The charge generated by the photodiode 51 receiving the reflected light is transferred to the FD unit 42A while the transfer transistor 41A is on according to the distribution signal DIMIX_A, and is transferred to the FD unit 42B while the transfer transistor 41B is on according to the distribution signal DIMIX_B.
  • During a predetermined period in which the irradiation light of irradiation time T is periodically emitted, the charges transferred via the transfer transistor 41A are sequentially accumulated in the FD unit 42A, and the charges transferred via the transfer transistor 41B are sequentially accumulated in the FD unit 42B.
  • When the selection transistor 43A is turned on according to the selection signal ADDRESS DECODE_A after the charge accumulation period, the charge accumulated in the FD unit 42A is read out via the signal line 53A, and the detection signal A corresponding to the amount of that charge is output from the light receiving unit 15.
  • Similarly, when the selection transistor 43B is turned on according to the selection signal ADDRESS DECODE_B, the charge accumulated in the FD unit 42B is read out via the signal line 53B, and the detection signal B corresponding to the amount of that charge is output from the light receiving unit 15.
  • The charge stored in the FD unit 42A is discharged when the reset transistor 44A is turned on according to the reset signal RST_A, and the charge stored in the FD unit 42B is discharged when the reset transistor 44B is turned on according to the reset signal RST_B.
  • In this way, the pixel 31 distributes the charge generated by the reflected light received by the photodiode 51 to the first tap 52A or the second tap 52B according to the delay time ΔT, and outputs the detection signal A and the detection signal B as pixel data.
  • the signal processing unit 16 calculates the depth value based on the detection signal A and the detection signal B supplied as pixel data from each pixel 31.
  • As methods of calculating the depth value, there are the 2-Phase method, which uses detection signals of two phases, and the 4-Phase method, which uses detection signals of four phases.
  • In the 4-Phase method, the light receiving unit 15 receives (is exposed to) the reflected light at light receiving timings shifted by 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light.
  • More specifically, the light receiving unit 15 receives light with the phase set to 0° with respect to the irradiation timing of the irradiation light in one frame period, receives light with the phase set to 90° in the next frame period, receives light with the phase set to 180° in the frame period after that, and receives light with the phase set to 270° in the frame period after that.
  • the phase of the light receiving timing that is preset with reference to the irradiation timing of the irradiation light is referred to as the set phase.
  • Unless otherwise specified, a set phase of 0°, 90°, 180°, or 270° represents the set phase of the first tap 52A of the pixel 31. Since the second tap 52B has a phase inverted from that of the first tap 52A, when the set phase of the first tap 52A is 0°, 90°, 180°, or 270°, the set phase of the second tap 52B is 180°, 270°, 0°, or 90°, respectively.
  • FIG. 4 is a diagram showing the exposure periods of the first tap 52A of the pixel 31 at each set phase of 0°, 90°, 180°, and 270° side by side, so that the difference between the set phases can be easily understood.
  • The reflected light is received by the light receiving unit 15 with its phase delayed from the irradiation timing of the irradiation light by the time ΔT corresponding to the distance to the object.
  • the phase difference generated according to the distance to the object is also referred to as a distance phase to distinguish it from the set phase.
  • FIG. 5 is a diagram illustrating the method of calculating the depth value d by the 2-Phase method and the 4-Phase method.
  • The depth value d can be obtained by the following equation (1):

    d = (c × ΔT) / 2 = (c / (4πf)) × φ ... (1)

  • In equation (1), c is the speed of light, ΔT is the delay time, and f is the modulation frequency of the light.
  • φ in equation (1) represents the phase shift amount [rad] of the reflected light, that is, the distance phase, and is represented by the following equation (2):

    φ = arctan(Q / I) ... (2)
  • I and Q in equation (2) are calculated by the following equation (3), using the detection signals I_A(0) to I_A(270) and the detection signals I_B(0) to I_B(270) obtained at the set phases 0°, 90°, 180°, and 270°.
  • I and Q are signals obtained by converting the phase of the sine wave from polar coordinates to a Cartesian coordinate system (IQ plane), assuming that the change in brightness of the irradiation light is a sine wave.
  • the depth value d to the object can be obtained by using only two set phases that are orthogonal to each other.
  • I and Q are given by the following equation (5).
  • In the 2-Phase method, the characteristic variation between taps existing in each pixel cannot be removed, but the depth value d to the object can be obtained from the detection signals of only two set phases, so distance measurement can be performed at twice the frame rate of the 4-Phase method.
  • If the characteristic variation between taps can be corrected appropriately, the 2-Phase method therefore allows the distance to the object to be measured accurately and at high speed.
  • As a method of correcting the characteristic variation between taps, there is a method of calculating the offset c_0 and the gain c_1 by the least squares method.
  • However, the value of equation (8) approaches zero when the amplitude of the emission waveform of the irradiation light, which governs the irradiation timing, is small. For example, when the amplitude of the emission waveform becomes 1/100, the value of equation (8) becomes about 1/8000. The calculation of the offset c_0 and the gain c_1 therefore becomes unstable when the reflectance is low, the object is far away, or the amount of emitted light is reduced to cut power consumption.
  • The signal processing unit 16 of the distance measuring sensor 13 in FIG. 1 therefore corrects the characteristic variation between taps more appropriately and calculates the distance to the object using the 2-Phase method.
  • FIG. 7 is a block diagram showing a first configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG.
  • the signal processing unit 16 of FIG. 7 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, and a distance calculation unit 74.
  • The luminance waveform detected by each pixel 31 of the light receiving unit 15 is the waveform obtained by convolving the emission waveform of the irradiation light output by the light emitting unit 12 with the exposure waveform with which each pixel 31 of the light receiving unit 15 is exposed (receives light).
  • The model determination unit 71 assumes (predicts) the shape of the emission waveform of the irradiation light output by the light emitting unit 12 and the shape of the exposure waveform with which each pixel 31 of the light receiving unit 15 is exposed (receives light), and determines a model (luminance model) of the luminance waveform observed by the light receiving unit 15.
  • FIG. 8 shows an example of a luminance model determined by the model determination unit 71.
  • As the first luminance model (model 1), the model determination unit 71 assumes a square wave as the shape of the emission waveform of the irradiation light and a square wave as the shape of the exposure waveform, and assumes that the luminance waveform observed by the light receiving unit 15 is therefore a triangular wave.
  • As the second luminance model (model 2), the model determination unit 71 assumes a sine wave as the shape of the emission waveform of the irradiation light and a square wave as the shape of the exposure waveform, and assumes that the luminance waveform observed by the light receiving unit 15 is a sine wave.
  • As the third luminance model (model 3), the model determination unit 71 assumes that the luminance waveform observed by the light receiving unit 15 is a harmonic (a sum of sinusoidal components).
  • The model determination unit 71 determines a luminance model, which is a model of the luminance waveform observed by the light receiving unit 15, from among a plurality of luminance models such as those shown in FIG. 8, and supplies it to the characteristic calculation unit 72. If the emission waveform is known from the user's initial settings, the luminance model may be determined based on the set values.
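  • The correspondence between the waveform assumptions and the resulting luminance models can be checked numerically. The following sketch (an illustration, not part of the patent) circularly correlates an emission waveform with an exposure waveform over one modulation period: two square waves yield a triangular wave (model 1), and a sine wave with a square wave yields a sine wave (model 2).

    import numpy as np

    N = 1000
    t = np.linspace(0.0, 1.0, N, endpoint=False)   # one modulation period

    square = (t < 0.5).astype(float)               # 50% duty square wave
    sine = 0.5 * (1.0 + np.sin(2 * np.pi * t))

    def correlate_periodic(emission, exposure):
        # Circular correlation of emission and exposure over one period:
        # the luminance waveform observed as the set phase is swept.
        return np.array([np.mean(emission * np.roll(exposure, k))
                         for k in range(len(emission))])

    tri_model = correlate_periodic(square, square)  # model 1: triangular wave
    sin_model = correlate_periodic(sine, square)    # model 2: sine wave (plus DC)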
  • The characteristic calculation unit 72 calculates the offset c_0 and the gain c_1, which are correction parameters for correcting the characteristic variation between the taps of each pixel 31, based on the luminance waveform (luminance model) corresponding to the shape of the emission waveform and the shape of the exposure waveform. The calculated offset c_0 and gain c_1 are supplied to the signal correction unit 73.
  • When the model determination unit 71 determines the first model as the luminance model, the characteristic calculation unit 72 calculates the offset c_0 and the gain c_1, which are the correction parameters, assuming that the luminance waveform is a triangular wave.
  • When the model determination unit 71 determines the second model as the luminance model, the characteristic calculation unit 72 calculates the offset c_0 and the gain c_1 assuming that the luminance waveform is a sine wave.
  • When the model determination unit 71 determines the third model as the luminance model, the characteristic calculation unit 72 calculates the offset c_0 and the gain c_1 assuming that the luminance waveform is a harmonic.
  • the detailed calculation method of the correction parameter when the luminance waveform is assumed to be a triangular wave, a sine wave, or a harmonic wave will be described later.
  • The signal correction unit 73 uses the offset c_0 and the gain c_1 calculated by the characteristic calculation unit 72 to correct either the detection signal I_A(θ) of the first tap 52A or the detection signal I_B(θ) of the second tap 52B.
  • For example, the signal correction unit 73 corrects the detection signal I_B(0) of the second tap 52B at the set phase 0° according to the above equation (6) to generate the detection signal of the first tap 52A at the set phase 180°, and corrects the detection signal I_B(90) of the second tap 52B at the set phase 90° to generate the detection signal of the first tap 52A at the set phase 270°.
  • A detection signal I(θ) converted using the correction parameters is written with a double prime, like the detection signal I″(θ):

    I″_A(180) = c_0 + c_1 × I_B(0)
    I″_A(270) = c_0 + c_1 × I_B(90)
  • The distance calculation unit 74 calculates the depth value, which is the distance to the object, by the 2-Phase method based on the phase images of two set phases in an orthogonal relationship, specifically the corrected pixel data supplied from the signal correction unit 73. The distance calculation unit 74 then generates a depth map in which the depth value is stored as the pixel value of each pixel 31, and outputs the depth map to the outside of the module.
  • The light receiving unit 15 of the distance measuring sensor 13 may receive light while sequentially changing the set phase to 0°, 90°, 180°, and 270°, or may, for example, receive light so that only the set phases 0° and 90° are alternately repeated.
  • In either case, the signal processing unit 16 can generate and output a depth map using two adjacent phase images (two set phases).
  • The characteristic calculation unit 72 is supplied from the light receiving unit 15 (FIG. 1) with four phase images obtained by sequentially setting the set phase θ to 0°, 90°, 180°, and 270°. Since the detection signals I_A(θ) and I_B(θ) of the reflected light received at the set phase θ by the first tap 52A and the second tap 52B of each pixel 31 of the pixel array 32 correspond to functions representing the luminance waveform, the detection signals I_A(θ) and I_B(θ) are also referred to as the luminance functions I_A(θ) and I_B(θ).
  • The characteristic calculation unit 72 calculates the offset c_0 and the gain c_1, the correction parameters that correct the characteristic variation between taps, based on the four luminance values I_A(0), I_A(90), I_A(180), and I_A(270) of the first tap 52A and the luminance values I_B(0), I_B(90), I_B(180), and I_B(270) of the second tap 52B of each pixel 31, detected by the light receiving unit 15 with the set phase sequentially set to 0°, 90°, 180°, and 270°.
  • The luminance function I_A(θ) detected by the first tap 52A follows a triangular wave defined by a center center_A, an amplitude amp_A, and an offset offset_A, as shown in FIG. 9.
  • Similarly, the luminance function I_B(θ) detected by the second tap 52B follows a triangular wave defined by a center center_B, an amplitude amp_B, and an offset offset_B, as shown in FIG. 9.
  • the offset c_0 and the gain c_1 can be calculated by the following equations (9) and (10).
  • The amplitude amp_A of the triangular wave of the luminance function I_A(θ) detected by the first tap 52A and the amplitude amp_B of the triangular wave of the luminance function I_B(θ) detected by the second tap 52B can be calculated by the following equations (11) and (12), as described, for example, in the literature: M. Schmidt, "Spatiotemporal Analysis of Range Imagery", Dissertation, Department of Physics and Astronomy, University of Heidelberg, 2008.
  • the center of the triangular wave can be obtained by Eq. (18).
  • offset_A = center_A - amp_A ... (21)
  • offset_B = center_B - amp_B ... (22)
  • Assuming a triangular wave as the luminance model, the characteristic calculation unit 72 calculates the amplitude amp_A of the triangular wave of the luminance function I_A(θ) detected by the first tap 52A and the amplitude amp_B of the triangular wave of the luminance function I_B(θ) detected by the second tap 52B by the above equations (11) and (12), and calculates the offset offset_A and the offset offset_B by the above equations (21) and (22).
  • Then, the characteristic calculation unit 72 calculates the offset c_0 and the gain c_1 by the above equations (9) and (10).
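  • A minimal sketch of this triangular-wave calculation follows. The exact bodies of equations (9) to (12) and (18) are not reproduced in this text, so standard estimators are assumed: the center as the mean of the four samples, the amplitude as (|I(0) - I(180)| + |I(90) - I(270)|) / 2 (exact for a triangular waveform sampled 90° apart), c_1 as the amplitude ratio, and c_0 from the offsets.

    import numpy as np

    def tap_params_triangle(i0, i90, i180, i270):
        # Center, amplitude, and offset of a triangular luminance waveform
        # sampled at set phases 0/90/180/270.
        center = (i0 + i90 + i180 + i270) / 4.0               # assumed form of eq. (18)
        amp = (np.abs(i0 - i180) + np.abs(i90 - i270)) / 2.0  # assumed eqs. (11)/(12)
        offset = center - amp                                 # eqs. (21)/(22)
        return amp, offset

    def correction_params(ia, ib):
        # ia, ib: per-tap luminance values {0: ..., 90: ..., 180: ..., 270: ...}
        amp_a, off_a = tap_params_triangle(ia[0], ia[90], ia[180], ia[270])
        amp_b, off_b = tap_params_triangle(ib[0], ib[90], ib[180], ib[270])
        c1 = amp_a / amp_b                                    # assumed form of eq. (9)
        c0 = off_a - c1 * off_b                               # assumed form of eq. (10)
        return c0, c1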
  • FIG. 10 shows a conceptual diagram of correction processing performed by the signal correction unit 73 when the luminance model is assumed to be a triangular wave of the first model.
  • The signal correction unit 73 corrects the detection signal I_B(0) of the second tap 52B at the set phase 0° using the offset c_0 and the gain c_1 to generate the detection signal I″_A(180) of the first tap 52A at the set phase 180°. Further, the signal correction unit 73 corrects the detection signal I_B(90) of the second tap 52B at the set phase 90° using the offset c_0 and the gain c_1 to generate the detection signal I″_A(270) of the first tap 52A at the set phase 270°.
  • As a result, the detection signals I(θ) of the four phases are available, so the depth value can be calculated using equations (1), (2), and (4).
  • When the luminance model is a sine wave, the arithmetic expressions for obtaining the amplitude amp_A and the amplitude amp_B differ from the above equations (11) and (12) for the triangular wave.
  • The amplitude amp_A and the amplitude amp_B when the luminance model is a sine wave are calculated by the following equations (23) and (24).
  • Assuming a sine wave as the luminance model, the characteristic calculation unit 72 calculates the amplitude amp_A of the sine wave of the luminance function I_A(θ) detected by the first tap 52A and the amplitude amp_B of the sine wave of the luminance function I_B(θ) detected by the second tap 52B by the above equations (23) and (24).
  • The characteristic calculation unit 72 then calculates the offset c_0 and the gain c_1 by the above equations (9) and (10).
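  • Under the sine-wave model only the amplitude estimator changes; the centers, offsets, and equations (9) and (10) are reused as in the triangular case. A sketch with the standard four-sample sine amplitude (an assumed form of equations (23) and (24)):

    import numpy as np

    def amp_sine(i0, i90, i180, i270):
        # Amplitude of a sinusoidal luminance waveform from the four
        # set-phase samples: sqrt(I^2 + Q^2) / 2.
        return np.sqrt((i0 - i180) ** 2 + (i90 - i270) ** 2) / 2.0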
  • FIG. 12 shows a conceptual diagram of correction processing performed by the signal correction unit 73 when the luminance model is assumed to be a sine wave of the second model.
  • The actual emission waveform may be distorted from a rectangle depending on the time constant of the emission circuit, the modulation frequencies of the emission waveform and the exposure waveform, and so on. In such a case, assuming the luminance model to be a sine wave rather than a triangular wave may be closer to the actual luminance waveform, and calculating the offset c_0 and the gain c_1 under the sine-wave assumption may give a better distance calculation result.
  • When a harmonic is assumed as the luminance model, the characteristic calculation unit 72 calculates the correction parameters using machine learning.
  • The ranging module 11 acquires four phase images with the set phases set to 0°, 90°, 180°, and 270° in various scenes (measurement targets).
  • the four phase images of the acquired various scenes are accumulated in the characteristic calculation unit 72.
  • the characteristic calculation unit 72 has a neural network learner having the same configuration for each of the first tap 52A and the second tap 52B.
  • FIG. 13 is a diagram illustrating a learning process for learning the luminance model of the harmonics detected by the first tap 52A.
  • As shown in equation (25), the luminance function I′_A(θ), assumed to be a harmonic, is represented by a center (DC component) c_A0, cosine terms from first to M-th order, and sine terms from first to M-th order, where the order M is a natural number.
  • The input v_A_in of the learner 81A is the set of luminance values of the first tap 52A of a predetermined pixel 31 in the four accumulated phase images of set phases 0°, 90°, 180°, and 270°, that is, v_A_in = {I_A(0), I_A(90), I_A(180), I_A(270)}.
  • The output v_A_out of the learner 81A restores the luminance function I′_A(θ) assumed to be the harmonic. If the harmonic luminance function I′_A(θ) represents the luminance function I_A(θ) well, its values at the set phases 0°, 90°, 180°, and 270° should coincide with the input luminance values I_A(0), I_A(90), I_A(180), and I_A(270).
  • The characteristic calculation unit 72 therefore performs learning such that the difference between the input v_A_in and the values of the luminance function I′_A(θ) in the output v_A_out of the learner 81A becomes small.
  • W′_A in equation (27) represents the weighting coefficients after updating, and η_A is a coefficient representing the learning rate.
  • By repeating the update of the weighting coefficients W_A a predetermined number of times using the luminance values I_A(0), I_A(90), I_A(180), and I_A(270) of the same pixel in the four phase images of the accumulated various scenes, the harmonic luminance function I′_A(θ) of equation (25) is obtained.
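  • The following sketch mimics the learner 81A with a plain least-squares gradient descent on the Fourier coefficients of equation (25). The order M, the learning rate, and the iteration scheme are assumptions for illustration; with only four distinct set phases, M = 1 keeps the fit well determined.

    import numpy as np

    M = 1                                     # harmonic order (assumption)
    ETA = 0.05                                # learning rate eta_A of eq. (27)
    PHASES = np.deg2rad([0.0, 90.0, 180.0, 270.0])

    def features(theta):
        # Basis [1, cos(m*theta)..., sin(m*theta)...] of the harmonic model, eq. (25).
        return np.concatenate(([1.0],
                               [np.cos(m * theta) for m in range(1, M + 1)],
                               [np.sin(m * theta) for m in range(1, M + 1)]))

    def fit_harmonic(samples_per_scene, n_iter=1000):
        # samples_per_scene: list of 4-vectors [I(0), I(90), I(180), I(270)]
        # for one tap of one pixel over the accumulated scenes.
        X = np.stack([features(t) for t in PHASES])   # 4 x (2M+1) design matrix
        w = np.zeros(2 * M + 1)                       # weighting coefficients W_A
        for _ in range(n_iter):
            for y in samples_per_scene:
                err = X @ w - np.asarray(y)           # first evaluation function L_A1
                w -= ETA * (X.T @ err) / len(y)       # update step, eq. (27)
        return w   # I'_A(theta) = w @ features(theta)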
  • the characteristic calculation unit 72 performs the same learning for the second tap 52B.
  • As shown in equation (28), the luminance function I′_B(θ), assumed to be a harmonic, is likewise represented by a center c_B0, cosine terms from first to M-th order, and sine terms from first to M-th order, where the order M is a natural number.
  • The input of the learner 81B is the set of luminance values I_B(0), I_B(90), I_B(180), and I_B(270) of the second tap 52B of the predetermined pixel 31, that is, v_B_in = {I_B(0), I_B(90), I_B(180), I_B(270)}.
  • The characteristic calculation unit 72 performs learning such that the difference between the input v_B_in and the values of the luminance function I′_B(θ) in the output v_B_out of the learner 81B becomes small.
  • W′_B in equation (30) represents the weighting coefficients after updating, and η_B is a coefficient representing the learning rate.
  • Since the restored luminance functions I′_A(θ) and I′_B(θ) share the same emission waveform and exposure waveform, while their centers and amplitudes are considered to differ according to the tap characteristics, the shapes of the functions obtained by whitening I′_A(θ) and I′_B(θ) are considered to be the same. Therefore, as a constraint condition when updating the weighting coefficients W of each node of the learners 81, a second evaluation function L_2 represented by the following equation (31) may be added to the first evaluation functions L_A1 and L_B1 described above, and the weighting coefficients W may be obtained such that the second evaluation function L_2 is also small. The update of the weighting coefficients W in this case is represented by equation (32).
  • FIG. 14 shows a conceptual diagram of the operation for obtaining the second evaluation function L_2.
  • From the luminance function I′_A(θ) of the reflected light detected by the first tap 52A, the luminance function I′_B(θ) of the reflected light detected by the second tap 52B is estimated.
  • The characteristic calculation unit 72 uses the luminance function I′_A(θ) of the first tap 52A and the estimated luminance function I′_B(θ) of the second tap 52B to calculate the gain c_1 by the following equation (33) and the offset c_0 by the following equation (34).
  • the characteristic calculation unit 72 can calculate the correction parameters using machine learning as described above.
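  • Given the fitted Fourier weights of the two taps from the sketch above, c_0 and c_1 can be derived as follows. Since the bodies of equations (33) and (34) are not reproduced in this text, it is assumed that the gain matches the fundamental amplitudes and the offset matches the DC components.

    import numpy as np

    def correction_from_harmonics(w_a, w_b, m=1):
        # w = [dc, cos_1..cos_M, sin_1..sin_M] as returned by fit_harmonic (M = m).
        amp_a = np.hypot(w_a[1], w_a[m + 1])  # fundamental amplitude of tap A
        amp_b = np.hypot(w_b[1], w_b[m + 1])  # fundamental amplitude of tap B
        c1 = amp_a / amp_b                    # assumed form of eq. (33)
        c0 = w_a[0] - c1 * w_b[0]             # assumed form of eq. (34)
        return c0, c1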
  • The method of calculating the harmonics when the luminance model is assumed to be a harmonic is not limited to machine learning, and other methods may be used. For example, a method such as that disclosed in the document Marvin Lindner and Andreas Kolb, "Lateral and Depth Calibration of PMD-Distance Sensors", International Symposium on Visual Computing, 2006, may be adopted. This document discloses a method of capturing multiple known depths and obtaining the harmonic component from the difference between the known depth and the actually measured depth, thereby generating a look-up table for depth correction.
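  • A sketch of such a look-up-table correction, in the spirit of the Lindner and Kolb method (the helper names and the linear interpolation are assumptions, not the document's procedure):

    import numpy as np

    def build_depth_lut(known_depths, measured_depths):
        # Table of systematic depth error vs. measured depth, built from a
        # calibration sweep over known target distances.
        measured = np.asarray(measured_depths, dtype=float)
        error = measured - np.asarray(known_depths, dtype=float)
        order = np.argsort(measured)
        return measured[order], error[order]

    def correct_depth(d, lut_measured, lut_error):
        # Subtract the interpolated systematic (harmonic) error from d.
        return d - np.interp(d, lut_measured, lut_error)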
  • the correction parameters can be calculated more accurately than with a simple model.
  • In step S1, the control unit 14 of the distance measuring sensor 13 determines whether to estimate the correction parameters for correcting the characteristic variation between the taps of each pixel 31 of the pixel array 32 of the light receiving unit 15.
  • The correction parameters are estimated, for example, without fail in the first measurement after the ranging module 11 is activated, or every predetermined number of measurements.
  • If it is determined in step S1 that the correction parameters are to be estimated, the process proceeds to step S2, and the ranging module 11 executes the correction parameter estimation process for estimating the offset c_0 and the gain c_1, which are the correction parameters of each pixel 31.
  • On the other hand, if it is determined in step S1 that the correction parameters are not to be estimated, the correction parameter estimation process in step S2 is skipped, and the process proceeds to step S3. When the correction parameters are not estimated, correction parameters have already been set.
  • In step S3, the ranging module 11 executes the distance measurement process of measuring the distance to the object by the 2-Phase method using the correction parameters that correct the characteristic variation between the taps of each pixel 31, and ends the process.
  • FIG. 16 is a detailed flowchart of the correction parameter estimation process executed in step S2 of the measurement process of FIG. 15.
  • In step S11, the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 0°, and supplies the light emission control signal to the light emitting unit 12 and the light receiving unit 15.
  • In step S12, the ranging module 11 emits the irradiation light and receives the reflected light reflected and returned by the object. That is, the light emitting unit 12 emits light while being modulated at a timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light, and the light receiving unit 15 receives the reflected light from the object.
  • The light receiving unit 15 supplies pixel data composed of detection signals corresponding to the amount of received reflected light to the signal processing unit 16 in units of the pixels 31 of the pixel array 32.
  • In step S13, the control unit 14 determines whether the four phase images with the set phases 0°, 90°, 180°, and 270° have been acquired.
  • If it is determined in step S13 that the four phase images have not yet been acquired, the process proceeds to step S14, and the control unit 14 updates the set phase to a value incremented by 90° from the current value. After step S14, the process returns to step S12, and steps S12 and S13 described above are repeated.
  • When it is determined in step S13 that the four phase images have been acquired, the process proceeds to step S15, and the characteristic calculation unit 72 of the signal processing unit 16 calculates the offset c_0 and the gain c_1, the correction parameters that correct the characteristic variation between the taps of each pixel 31, based on the luminance waveform corresponding to the shape of the emission waveform and the shape of the exposure waveform.
  • The luminance model is determined before the process of step S15 and is supplied from the model determination unit 71 to the characteristic calculation unit 72.
  • In step S15, the offset c_0 and the gain c_1 are calculated by the calculation methods described above for each of the cases where a triangular wave, a sine wave, or a harmonic is assumed as the luminance model.
  • The calculated offset c_0 and gain c_1 are supplied to the signal correction unit 73, the correction parameter estimation process of FIG. 16 ends, and the process returns to the measurement process of FIG. 15.
  • In the above description, the correction parameters (offset c_0 and gain c_1) were calculated using only the four phase images from which one depth map can be generated; however, in order to reduce the influence of noise, eight or more phase images corresponding to a plurality of depth maps may be used, and the average of the plurality of correction parameters obtained as a result may be used as the final correction parameters.
  • FIG. 17 is a detailed flowchart of the distance measurement process by the 2-Phase method executed in step S3 of the measurement process of FIG. 15.
  • In step S31, the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 0°, and supplies the light emission control signal to the light emitting unit 12 and the light receiving unit 15.
  • In step S32, the ranging module 11 emits the irradiation light and receives the reflected light reflected and returned by the object. That is, the light emitting unit 12 emits light while being modulated at a timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light, and the light receiving unit 15 receives the reflected light from the object.
  • The light receiving unit 15 supplies pixel data composed of detection signals corresponding to the amount of received reflected light to the signal processing unit 16 in units of the pixels 31 of the pixel array 32.
  • In step S33, the control unit 14 determines whether the two phase images with the set phases 0° and 90° have been acquired.
  • If it is determined in step S33 that the two phase images have not yet been acquired, the process proceeds to step S34, and the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 90°. After step S34, the process returns to step S32, and steps S32 and S33 described above are repeated.
  • When it is determined in step S33 that the two phase images have been acquired, the process proceeds to step S35, and the signal correction unit 73 of the signal processing unit 16 corrects the detection signals I_B(θ) of the second tap 52B according to equation (6), using the offset c_0 and the gain c_1 calculated by the characteristic calculation unit 72.
  • Specifically, the signal correction unit 73 uses the detection signals I_B(0) and I_B(90) of the second tap 52B of each pixel 31, acquired at the set phases 0° and 90°, to calculate the corrected I″_A(180) and I″_A(270) as follows:

    I″_A(180) = c_0 + c_1 × I_B(0)
    I″_A(270) = c_0 + c_1 × I_B(90)
  • In step S36, the distance calculation unit 74 uses the two corrected phase images of set phases 0° and 90° supplied from the signal correction unit 73 to calculate, for each pixel 31 of the pixel array 32, the depth value, which is the distance to the object, by the 2-Phase method. The distance calculation unit 74 then generates a depth map in which the depth value is stored as the pixel value of each pixel 31, and outputs the depth map to the outside of the module.
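  • Putting steps S35 and S36 together, a sketch of the corrected 2-Phase depth calculation (the differential I/Q form of equation (4) is assumed, as before):

    import numpy as np

    C = 299_792_458.0  # speed of light c [m/s]

    def depth_2phase_corrected(ia, ib, c0, c1, f_mod):
        # ia, ib: set-phase images {0: ..., 90: ...} of the two taps.
        i_a180 = c0 + c1 * ib[0]      # corrected I''_A(180), equation (6)
        i_a270 = c0 + c1 * ib[90]     # corrected I''_A(270), equation (6)
        i = ia[0] - i_a180
        q = ia[90] - i_a270
        phi = np.arctan2(q, i) % (2 * np.pi)   # equation (2)
        return C * phi / (4 * np.pi * f_mod)   # equation (1)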
  • Step S15 of FIG. 16, which is the process of calculating the offset c_0 and the gain c_1 using machine learning when harmonics are assumed as the luminance model, will be described in more detail with reference to the flowchart of FIG. 18.
  • In step S51, the characteristic calculation unit 72 sets the weighting coefficients W_A of the learner 81A corresponding to the first tap 52A to predetermined initial values.
  • In step S52, the characteristic calculation unit 72 extracts the luminance values I_A(0), I_A(90), I_A(180), and I_A(270) of the first tap 52A of a predetermined pixel 31 from the four phase images of set phases 0°, 90°, 180°, and 270° of a predetermined scene among the accumulated phase images of the plurality of scenes, and sets them as the input v_A_in = {I_A(0), I_A(90), I_A(180), I_A(270)} of the learner 81A.
  • In the subsequent steps, the luminance function I′_A(θ) assumed to be a harmonic is evaluated, and the updated weighting coefficients W′_A are calculated.
  • In step S56, the characteristic calculation unit 72 determines whether the weighting coefficients W_A have been updated a predetermined number of times.
  • If it is determined in step S56 that the weighting coefficients W_A have not yet been updated the predetermined number of times, the process returns to step S52, and the characteristic calculation unit 72 acquires the four phase images of set phases 0°, 90°, 180°, and 270° of a predetermined scene that has not yet been selected from the accumulated phase images of the plurality of scenes, and repeats the above-described processes of steps S52 to S56.
  • When it is determined in step S56 that the weighting coefficients W_A have been updated the predetermined number of times, the process proceeds to step S57.
  • the processes of steps S51 to S56 are also executed for the learner 81B that learns the luminance model of the harmonics detected by the second tap 52B, in the same manner as the process of the learner 81A.
  • In step S57, the characteristic calculation unit 72 uses the calculated luminance function I′_A(θ) of the first tap 52A and luminance function I′_B(θ) of the second tap 52B to calculate the gain c_1 by the above equation (33) and the offset c_0 by the above equation (34).
  • As described above, in the first configuration example of the signal processing unit 16, the correction parameters that correct the characteristic variation between the taps of each pixel 31 are calculated based on the determined luminance model (luminance waveform), so the characteristic variation between taps can be corrected more appropriately.
  • FIG. 19 is a block diagram showing a second configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG.
  • In FIG. 19, the parts corresponding to the first configuration example shown in FIG. 7 are designated by the same reference numerals, and their description is omitted as appropriate.
  • The signal processing unit 16 of FIG. 19 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, a distance calculation unit 74, and an optimum model selection unit 91.
  • That is, the optimum model selection unit 91 is added to the first configuration example of the signal processing unit 16 shown in FIG. 7.
  • In the first configuration example, the model determination unit 71 determined one predetermined luminance model from a plurality of selectable luminance models, and the characteristic calculation unit 72 calculated the correction parameters (offset c_0 and gain c_1) based on the one luminance model (luminance waveform) supplied from the model determination unit 71.
  • In the second configuration example, by contrast, the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72, and the characteristic calculation unit 72 calculates the correction parameters (offset c_0 and gain c_1) for each of the supplied luminance models (luminance waveforms).
  • Suppose that N luminance models are stored in the model determination unit 71, and let the offset c_0 and the gain c_1 calculated for each of the N luminance models be (c_0^1, c_1^1), (c_0^2, c_1^2), ..., (c_0^N, c_1^N).
  • The characteristic calculation unit 72 supplies the offset c_0 and the gain c_1 calculated for each of the N luminance models to the optimum model selection unit 91.
  • The optimum model selection unit 91 first calculates the phase shift amount φ_ref of each pixel 31 by the 4-Phase method, that is, by the above equations (3) and (2).
  • Next, using each of the offset and gain pairs (c_0^1, c_1^1), (c_0^2, c_1^2), ..., (c_0^N, c_1^N) of the N luminance models, the optimum model selection unit 91 generates, according to equation (6), the detection signal I″_A(180) of the first tap 52A at the set phase 180° and the detection signal I″_A(270) of the first tap 52A at the set phase 270°:

    I″_A(180) = c_0 + c_1 × I_B(0)
    I″_A(270) = c_0 + c_1 × I_B(90)
  • The optimum model selection unit 91 then calculates the phase shift amounts φ(c_0^1, c_1^1), φ(c_0^2, c_1^2), ..., φ(c_0^N, c_1^N) of each pixel 31 by the 2-Phase method, that is, according to the above equations (4) and (2).
  • The optimum model selection unit 91 selects the optimum offset and gain pair (c_0^opt, c_1^opt) from the phase shift amount φ_ref and the phase shift amounts φ(c_0^1, c_1^1), φ(c_0^2, c_1^2), ..., φ(c_0^N, c_1^N). That is, the optimum model selection unit 91 calculates the following equation (35):

    (c_0^opt, c_1^opt) = argmin_(c_0^n, c_1^n) MSE[ φ_ref - φ(c_0^n, c_1^n) ] ... (35)
  • MSE[] in equation (35) is a function that calculates the mean square error of the expression in brackets, and argmin is a function that finds the (c_0^n, c_1^n) that minimizes MSE[]. Equation (35) therefore indicates that (c_0^opt, c_1^opt) is determined as the (c_0^n, c_1^n) that minimizes the mean square error between φ_ref and φ(c_0^n, c_1^n).
  • the function for evaluating the error is not limited to the mean square error, and other functions may be used.
  • The optimum model selection unit 91 supplies the selected offset c_0^opt and gain c_1^opt to the signal correction unit 73.
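  • A sketch of this selection, with the 2-Phase phase computation inlined (the MSE criterion follows equation (35); the I/Q form is assumed, as before):

    import numpy as np

    def select_optimum_params(phi_ref, ia, ib, params):
        # phi_ref: 4-Phase reference phase map; params: list of (c0, c1)
        # pairs, one per luminance model; ia/ib: set-phase images {0:, 90:}.
        best, best_err = None, np.inf
        for c0, c1 in params:
            i = ia[0] - (c0 + c1 * ib[0])      # corrected I''_A(180)
            q = ia[90] - (c0 + c1 * ib[90])    # corrected I''_A(270)
            phi = np.arctan2(q, i) % (2 * np.pi)
            err = np.mean((phi_ref - phi) ** 2)   # MSE[phi_ref - phi], eq. (35)
            if err < best_err:
                best, best_err = (c0, c1), err
        return best   # (c0_opt, c1_opt)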
  • the processing of the signal correction unit 73 and the distance calculation unit 74 is the same as that of the first configuration example.
  • FIG. 20 is a flowchart of the measurement process of the distance measuring module 11 of FIG. 1 when the second configuration example of the signal processing unit 16 is adopted. This process is started, for example, when the control unit of the host device in which the distance measuring module 11 is incorporated instructs the start of measurement.
  • In step S71, the control unit 14 of the distance measuring sensor 13 determines whether to estimate the correction parameters for correcting the characteristic variation between the taps of each pixel 31 of the pixel array 32 of the light receiving unit 15.
  • The process of step S71 is the same as step S1 of the measurement process in the first configuration example described with reference to the flowchart of FIG. 15.
  • If it is determined in step S71 that the correction parameters are to be estimated, the process proceeds to step S72, and the ranging module 11 executes the correction parameter estimation process for estimating the correction parameters for all the luminance models.
  • Whereas the process of step S2 in the first configuration example estimated the correction parameters of the one predetermined luminance model determined by the model determination unit 71, the process of step S72 of the second configuration example differs in that it estimates the correction parameters of all the luminance models stored in the model determination unit 71.
  • The characteristic calculation unit 72 supplies the offset c_0 and the gain c_1 calculated for each of the N luminance models to the optimum model selection unit 91.
  • After step S72, the process proceeds to step S73, and the optimum model selection unit 91 selects the optimum luminance model from the N luminance models. That is, by calculating equation (35), the optimum offset c_0^opt and gain c_1^opt are selected from the correction parameters (offset c_0^n and gain c_1^n) of the N luminance models.
  • On the other hand, if it is determined in step S71 that the correction parameters are not to be estimated, the processes of steps S72 and S73 are skipped, and the process proceeds to step S74.
  • In step S74, the ranging module 11 executes the distance measurement process of measuring the distance to the object by the 2-Phase method using the correction parameters that correct the characteristic variation between the taps of each pixel 31, and ends the process.
  • As described above, in the second configuration example of the signal processing unit 16, the optimum luminance model is selected from various luminance models (luminance waveforms) and the correction parameters that correct the characteristic variation between the taps of each pixel 31 are calculated, so the characteristic variation between taps can be corrected more appropriately.
  • FIG. 21 is a block diagram showing a third configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG.
  • In FIG. 21, the parts corresponding to the second configuration example shown in FIG. 19 are designated by the same reference numerals, and their description is omitted as appropriate.
  • the signal processing unit 16 of FIG. 21 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, a distance calculation unit 74, an optimum model selection unit 91, and an evaluation unit 101.
  • That is, the evaluation unit 101 is added to the second configuration example of the signal processing unit 16 shown in FIG. 19.
  • the depth map calculated by the distance calculation unit 74 is supplied to the evaluation unit 101.
  • the evaluation unit 101 evaluates whether or not the correction parameter currently used is appropriate by using two depth maps that are continuous in the time direction. If the correction parameter is not a value obtained by appropriately correcting the characteristic variation between taps of the pixel 31, the characteristic variation appears in the depth map as fixed pattern noise. The evaluation unit 101 determines whether or not fixed pattern noise appears in the depth map.
  • When the evaluation unit 101 determines that fixed pattern noise appears in the depth map, it instructs the model determination unit 71 to recalculate the correction parameters.
  • In response, the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72.
  • the fixed pattern noise evaluation process performed by the evaluation unit 101 will be described with reference to FIG. 22.
  • The depth map D_{t-1} at time t-1 and the depth map D_t at time t are input to the evaluation unit 101.
  • The evaluation unit 101 first applies a spatial filter f for smoothing the image to each of the two temporally consecutive depth maps D_{t-1} and D_t, and generates the filtered depth maps D′_{t-1} and D′_t.
  • The filtered depth maps D′_{t-1} and D′_t are referred to as the processed depth maps D′_{t-1} and D′_t:
    D′_{t-1} = f(D_{t-1})
    D′_t = f(D_t)
  • As the spatial filter f, for example, a Gaussian filter or an edge-preserving filter such as a bilateral filter can be adopted.
  • Next, the evaluation unit 101 sequentially slides small regions g_{t-1} and g_t, whose region size is set in advance, over the same region positions in the processed depth maps D′_{t-1} and D′_t, and selects as the representative small region gs′_t the small region g_t that minimizes both the variance of the depth values d detected at each pixel in the small region g and the difference in depth values d between the two processed depth maps. The representative small region gs′_t is used for the evaluation of the fixed pattern noise.
  • This process can be expressed as equation (36), where V() represents the variance operator.
  • The evaluation unit 101 extracts the small regions gs at the same region position as the representative small region gs′_t from each of the two depth maps D_{t-1} and D_t, and sets them as the small regions gs_{t-1} and gs_t.
  • The evaluation unit 101 then calculates the following equation (37) using the small regions gs_{t-1} and gs_t extracted from the two depth maps D_{t-1} and D_t, and determines whether the sum of the difference between gs_{t-1} and gs_t and the variance of the small region gs_t is larger than a predetermined threshold value Th.
  • When the sum is larger than the threshold value Th, the evaluation unit 101 determines that the correction parameters are not values that appropriately correct the characteristic variation between taps.
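  • A sketch of the whole evaluation follows, assuming a Gaussian filter as f, a mean-squared difference for the "difference between gs_{t-1} and gs_t", and an illustrative threshold; equations (36) and (37) are paraphrased, not reproduced.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def evaluate_fpn(d_prev, d_curr, win=16, th=0.01):
        # Spatial filter f applied to both depth maps (D' = f(D)).
        dp = gaussian_filter(d_prev, sigma=2.0)
        dc = gaussian_filter(d_curr, sigma=2.0)
        best, best_score = None, np.inf
        h, w = dc.shape
        for y in range(0, h - win + 1, win):       # slide the small region g
            for x in range(0, w - win + 1, win):
                g_prev = dp[y:y+win, x:x+win]
                g_curr = dc[y:y+win, x:x+win]
                score = np.var(g_curr) + np.mean((g_curr - g_prev) ** 2)  # eq. (36), assumed
                if score < best_score:
                    best, best_score = (y, x), score
        y, x = best                                # representative small region gs'_t
        gs_prev = d_prev[y:y+win, x:x+win]         # gs_{t-1} from the unfiltered maps
        gs_curr = d_curr[y:y+win, x:x+win]
        lhs = np.mean((gs_curr - gs_prev) ** 2) + np.var(gs_curr)  # left side of eq. (37), assumed
        return lhs > th   # True: fixed pattern noise is considered present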
  • FIG. 23 is a flowchart of the measurement process of the distance measuring module 11 of FIG. 1 when the third configuration example of the signal processing unit 16 is adopted. This process is started, for example, when the control unit of the host device in which the distance measuring module 11 is incorporated instructs the start of measurement.
  • Since steps S101 to S104 of FIG. 23 are the same as the processes of steps S71 to S74 described with reference to FIG. 20, their description will be omitted.
  • The depth map calculated in step S104 is also supplied to the evaluation unit 101, and the evaluation unit 101 stores the two depth maps Dt-1 and Dt that are consecutive in the time direction.
  • In step S105, the evaluation unit 101 executes a fixed pattern noise evaluation process for evaluating whether or not the correction parameter currently used is appropriate, using the two depth maps that are consecutive in the time direction. Details of the fixed pattern noise evaluation process in step S105 will be described later with reference to the flowchart of FIG. 24.
  • In step S106, the evaluation unit 101 uses the result of the fixed pattern noise evaluation process to determine whether fixed pattern noise has occurred in the depth maps Dt-1 and Dt. Specifically, the evaluation unit 101 determines whether or not the conditional expression of equation (37) is satisfied.
  • If the conditional expression of equation (37) is satisfied in step S106 and it is determined that fixed pattern noise has occurred, the process proceeds to step S107, and the evaluation unit 101 instructs the model determination unit 71 to recalculate the correction parameters.
  • the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72.
  • the characteristic calculation unit 72 executes a correction parameter estimation process for estimating correction parameters for all luminance models.
  • The process of step S107 after the recalculation of the correction parameters is instructed is the same as that of step S102, that is, step S72 of FIG. 20.
  • In step S108, the optimum model selection unit 91 selects the optimum luminance model from the N luminance models.
  • The process of step S108 is the same as that of step S103, that is, step S73 of FIG. 20.
  • On the other hand, if it is determined in step S106 that fixed pattern noise has not occurred, the processes of steps S107 and S108 are skipped, and the measurement process ends.
  • FIG. 24 is a detailed flowchart of the fixed pattern noise evaluation process executed in step S105 of FIG. 23.
  • In step S121, the evaluation unit 101 applies the spatial filter f for smoothing an image to each of the two depth maps Dt-1 and Dt that are consecutive in the time direction, generating the processed depth maps D't-1 and D't.
  • In step S122, the evaluation unit 101 slides the small regions gt-1 and gt within the processed depth maps D't-1 and D't, respectively, and sets the representative small region gs't.
  • As expressed by equation (36), the representative small region gs't is the small region gt for which both the variance of the depth values d within the small region g and the difference in depth values d between the two processed depth maps are smallest.
  • In step S123, the evaluation unit 101 extracts, from each of the two depth maps Dt-1 and Dt, the small region gs at the same region position as the representative small region gs't, and takes the extracted regions as the small regions gst-1 and gst.
  • In step S124, the evaluation unit 101 uses the small regions gst-1 and gst extracted from the two depth maps Dt-1 and Dt, respectively, to calculate the left side of equation (37), that is, the sum of the difference between gst-1 and gst and the variance of the small region gst.
  • After step S124, the process proceeds to step S106 of FIG. 23, where it is determined whether or not the conditional expression of equation (37) is satisfied.
  • The signal processing unit 16 may be configured by any one of the first to third configuration examples described above, or may be configured so that the first to third configuration examples can be selectively executed.
  • With these configurations, stable estimation of the offset c0 and the gain c1 can be expected even when the signal-to-noise ratio of the signal is not sufficiently high at the timing of correcting the characteristic variation between taps.
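  • As context for this remark, the sketch below shows how per-tap pixel data might be corrected once the offset c0 and the gain c1 have been estimated; the linear gain/offset form and all names are assumptions for illustration, not the definitive formulation of the present technology.

```python
import numpy as np

def correct_tap_b(raw_tap_b: np.ndarray, c0: float, c1: float) -> np.ndarray:
    """Align tap B's response to tap A with an assumed linear model,
    corrected_B = c1 * raw_B + c0, so that the characteristic variation
    between the two charge detection units cancels in the distance math."""
    return c1 * raw_tap_b + c0
```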
  • FIG. 25 is a perspective view showing a chip configuration example of the distance measuring sensor 13.
  • The distance measuring sensor 13 can be composed of one chip in which a plurality of dies (substrates), namely a sensor die 151 and a logic die 152, are stacked.
  • the sensor die 151 is configured with a sensor unit 161 (as a circuit), and the logic die 152 is configured with a logic unit 162.
  • a light receiving unit 15 is formed in the sensor unit 161.
  • the logic unit 162 is formed with, for example, a control unit 14, a signal processing unit 16, input / output terminals, and the like.
  • The distance measuring sensor 13 may also be composed of three layers in which another logic die is stacked in addition to the sensor die 151 and the logic die 152, or, of course, of a stack of four or more dies (substrates).
  • Alternatively, as shown in B of FIG. 25, the distance measuring sensor 13 may be composed of a first chip 171 and a second chip 172, and a relay board (interposer) 173 on which they are mounted.
  • a light receiving portion 15 is formed on the first chip 171.
  • a control unit 14, a signal processing unit 16, and the like are formed on the second chip 172.
  • The circuit arrangement of the sensor die 151 and the logic die 152 in A of FIG. 25 and the circuit arrangement of the first chip 171 and the second chip 172 in B of FIG. 25 are merely examples, and the circuit arrangement is not limited to these.
  • For example, the signal processing unit 16 that performs the depth map generation processing and the like may be provided outside the distance measuring sensor 13 (on a separate chip) as a signal processing device.
  • the distance measuring module 11 described above can be mounted on an electronic device such as a smartphone, a tablet terminal, a mobile phone, a personal computer, a game machine, a television receiver, a wearable terminal, a digital still camera, or a digital video camera.
  • FIG. 26 is a block diagram showing a configuration example of a smartphone as an electronic device equipped with a ranging module.
  • The smartphone 201 is configured by connecting a distance measuring module 202, an image pickup device 203, a display 204, a speaker 205, a microphone 206, a communication module 207, a sensor unit 208, a touch panel 209, and a control unit 210 via a bus 211. Further, the control unit 210 functions as an application processing unit 221 and an operating system processing unit 222 through the CPU executing a program.
  • the distance measuring module 11 of FIG. 1 is applied to the distance measuring module 202.
  • The distance measuring module 202 is arranged on the front of the smartphone 201, and by performing distance measurement for the user of the smartphone 201, it can output the depth values of the surface shape of the user's face, hand, fingers, and the like as a distance measurement result. The distance measurement result of the distance measuring module 202 can also be used to recognize the user's gestures.
  • The image pickup device 203 is arranged on the front of the smartphone 201, and acquires an image of the user of the smartphone 201 by imaging the user as a subject. Although not shown, the image pickup device 203 may also be arranged on the back surface of the smartphone 201.
  • The display 204 displays an operation screen for processing by the application processing unit 221 and the operating system processing unit 222, an image captured by the image pickup device 203, and the like.
  • The speaker 205 and the microphone 206, for example, output the voice of the other party and pick up the voice of the user when a call is made on the smartphone 201.
  • the communication module 207 communicates via the communication network.
  • the sensor unit 208 senses speed, acceleration, proximity, etc., and the touch panel 209 acquires a touch operation by the user on the operation screen displayed on the display 204.
  • the application processing unit 221 performs processing for providing various services by the smartphone 201.
  • the application processing unit 221 can create a face by computer graphics that virtually reproduces the user's facial expression based on the depth supplied from the distance measuring module 202, and can perform a process of displaying the face on the display 204.
  • the application processing unit 221 can perform a process of creating, for example, three-dimensional shape data of an arbitrary three-dimensional object based on the depth supplied from the distance measuring module 202.
  • The operating system processing unit 222 performs processing for realizing the basic functions and operations of the smartphone 201.
  • For example, the operating system processing unit 222 can perform a process of authenticating the user's face based on the depth values supplied from the distance measuring module 202 and unlocking the smartphone 201.
  • The operating system processing unit 222 can also perform a process of recognizing the user's gestures based on the depth values supplied from the distance measuring module 202 and inputting various operations according to the gestures.
  • In the smartphone 201 configured in this way, applying the distance measuring module 11 described above makes it possible, for example, to generate a depth map with high accuracy and at high speed. As a result, the smartphone 201 can detect distance measurement information more accurately.
  • the technology according to the present disclosure can be applied to various products.
  • For example, the technology according to the present disclosure may be realized as a device mounted on any kind of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 27 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
  • Radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
  • the body system control unit 12020 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the image pickup unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
  • The vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for a person, a vehicle, an obstacle, a sign, a character on the road surface, or the like, based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
  • the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects the in-vehicle information.
  • a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
  • The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing.
  • The microcomputer 12051 can calculate control target values for the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including collision avoidance or impact mitigation of the vehicle, follow-up driving based on inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane departure warning, and the like.
  • Further, the microcomputer 12051 can perform cooperative control for the purpose of automatic driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of anti-glare, such as switching from high beam to low beam, by controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030.
  • the audio image output unit 12052 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger of the vehicle or the outside of the vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
  • the display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
  • FIG. 28 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has image pickup units 12101, 12102, 12103, 12104, 12105 as the image pickup unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100, for example.
  • The image pickup unit 12101 provided on the front nose and the image pickup unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire images of the area in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the images in front acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 28 shows an example of the photographing range of the imaging units 12101 to 12104.
  • The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door.
  • For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.
  • The microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative velocity with respect to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104. Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for the purpose of automatic driving or the like, in which the vehicle travels autonomously without depending on the driver's operation.
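  • As a small worked illustration (an assumption added for clarity, not part of the vehicle control system description), the temporal change of distance mentioned here reduces to a finite difference between two distance samples:

```python
def relative_velocity(dist_prev_m: float, dist_curr_m: float,
                      dt_s: float) -> float:
    """Approximate relative velocity (m/s) from two distance samples taken
    dt_s seconds apart; a positive value means the object is receding."""
    return (dist_curr_m - dist_prev_m) / dt_s
```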
  • For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can output an alarm to the driver via the audio speaker 12061 or the display unit 12062, or perform forced deceleration and avoidance steering via the drive system control unit 12010, thereby providing driving support for collision avoidance.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104.
  • Such pedestrian recognition is performed by, for example, a procedure for extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not the object is a pedestrian.
  • When the microcomputer 12051 determines that a pedestrian is present in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a square contour line for emphasizing the recognized pedestrian. Further, the audio image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
  • the above is an example of a vehicle control system to which the technology according to the present disclosure can be applied.
  • The technology according to the present disclosure can be applied to the vehicle exterior information detection unit 12030 and the in-vehicle information detection unit 12040 among the configurations described above. Specifically, by using the distance measurement by the distance measuring module 11 in the vehicle exterior information detection unit 12030 and the in-vehicle information detection unit 12040, it is possible to perform processing for recognizing the driver's gestures, to input various operations (for example, on the audio system, the navigation system, or the air-conditioning system) according to the gestures, and to detect the driver's condition more accurately. Further, the distance measurement by the distance measuring module 11 can be used to recognize the unevenness of the road surface and reflect it in the control of the suspension.
  • As for the structure of the photodiode 51 of the light receiving unit 15, the present technology can be applied to a distance measuring sensor having a structure that distributes charges to two charge storage units, such as a distance measuring sensor having a CAPD (Current Assisted Photonic Demodulator) structure or a gate-type distance measuring sensor that alternately applies the charge of the photodiode to two gates.
  • the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
  • the configurations described above as a plurality of devices (or processing units) may be collectively configured as one device (or processing unit).
  • a configuration other than the above may be added to the configuration of each device (or each processing unit).
  • A part of the configuration of one device (or processing unit) may be included in the configuration of another device (or another processing unit).
  • In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • the present technology can have the following configurations.
  • (1) A signal processing device including a signal processing unit that performs a process of calculating the distance to an object based on pixel data obtained by a pixel that receives reflected light, the reflected light being irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned,
  • in which the signal processing unit includes a characteristic calculation unit that calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model according to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • (2) The signal processing device according to (1), in which the correction parameter is calculated based on a predetermined luminance model selected from a plurality of luminance models.
  • (3) The signal processing device according to (2), in which the predetermined luminance model is a triangular wave.
  • (4) The signal processing device according to (2), in which the predetermined luminance model is a sine wave.
  • (5) The signal processing device according to (2), in which the predetermined luminance model is a harmonic.
  • (6) The signal processing device according to (5), in which the characteristic calculation unit estimates, by machine learning, a luminance function representing the luminance waveform assumed to be a harmonic for each of the first charge detection unit and the second charge detection unit, and calculates the correction parameter.
  • (7) The signal processing device according to (6), in which a learner of the machine learning learns the luminance function corresponding to the first charge detection unit or the second charge detection unit, and the characteristic calculation unit trains the learner so that the difference between the input of the learner and the value of the luminance function obtained by learning becomes small.
  • (8) The signal processing device according to (7), in which the characteristic calculation unit learns based on the condition that a first luminance function corresponding to the first charge detection unit and a second luminance function corresponding to the second charge detection unit are the same.
  • (9) The signal processing device according to any one of (1) to (8), in which the characteristic calculation unit calculates the correction parameters for a plurality of luminance models, the signal processing device further including a selection unit that selects, from among the correction parameters of the plurality of luminance models, the correction parameter having a small error with respect to the phase shift amount calculated from the input image.
  • (10) The signal processing device according to any one of (1) to (9), further including an evaluation unit that evaluates the correction parameter calculated by the characteristic calculation unit.
  • (11) The signal processing device according to any one of (1) to (10), in which the signal processing unit further includes a distance calculation unit that calculates the distance to the object based on the pixel data corrected by the correction parameter calculated by the characteristic calculation unit.
  • (12) A signal processing method in which a signal processing device that calculates the distance to an object based on pixel data obtained by a pixel that receives reflected light, the reflected light being irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned, calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model according to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • (13) A distance measuring module including a predetermined light emitting source, and a distance measuring sensor having a pixel that receives reflected light, which is irradiation light emitted from the light emitting source, reflected by an object, and returned,
  • in which the distance measuring sensor includes a characteristic calculation unit that calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model according to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.

Abstract

The present technology relates to a signal processing device, a signal processing method, and a distance measuring module that make it possible to correct characteristic variations between taps more appropriately. The signal processing device includes a signal processing unit that performs a process of calculating the distance to an object based on pixel data obtained from a pixel that receives reflected light, which is irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned. The signal processing unit is provided with a characteristic calculation unit that calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model according to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel. The present technology can be applied, for example, to a distance measuring module that measures the distance to a subject.
PCT/JP2021/006075 2020-03-04 2021-02-18 Signal processing device, signal processing method, and distance measuring module WO2021177045A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-036524 2020-03-04
JP2020036524 2020-03-04

Publications (1)

Publication Number Publication Date
WO2021177045A1 (fr)

Family

ID=77612616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/006075 WO2021177045A1 (fr) 2020-03-04 2021-02-18 Signal processing device, signal processing method, and distance measuring module

Country Status (1)

Country Link
WO (1) WO2021177045A1 (fr)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21765493; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21765493; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)