WO2021177045A1 - Signal processing device, signal processing method, and range-finding module - Google Patents

Signal processing device, signal processing method, and range-finding module

Info

Publication number
WO2021177045A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
signal processing
luminance
pixel
model
Prior art date
Application number
PCT/JP2021/006075
Other languages
French (fr)
Japanese (ja)
Inventor
基 三原
俊 海津
優介 森内
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2021177045A1 publication Critical patent/WO2021177045A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/02 Details
    • G01C3/06 Use of electric means to obtain final indication
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/32 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/36 Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497 Means for monitoring or calibrating

Definitions

  • the present technology relates to a signal processing device, a signal processing method, and a ranging module, and more particularly to a signal processing device, a signal processing method, and a ranging module capable of more appropriately correcting characteristic variations between taps.
  • a distance measuring module is mounted on a mobile terminal such as a so-called smartphone.
  • smartphones are equipped with a distance measurement module that measures distance by the Indirect ToF (Time of Flight) method.
  • in the Indirect ToF method, irradiation light is emitted toward an object, the light reflected by the surface of the object returns to the sensor, and the flight time from the emission of the irradiation light to the reception of the reflected light is detected; the distance to the object is calculated from this flight time.
  • the light receiving sensor side receives the reflected light at the light receiving timings shifted by 0 °, 90 °, 180 °, and 270 ° based on the irradiation timing of the irradiation light.
  • a method of calculating the distance to an object by using four phase images detected in four different phases with respect to the irradiation timing of the irradiation light is called a 4 Phase method or the like.
  • one pixel has two charge storage portions, a first tap and a second tap; by alternately distributing the received charge to the first tap and the second tap, two detection timings whose phases are inverted can be obtained in one phase image, so there is also a 2Phase method in which the distance to the object is calculated using only two phase images.
  • the 2Phase method has the advantage that the distance can be calculated with half the number of phase images of the 4Phase method, but the characteristics of the first tap and the second tap, the two charge storage units of each pixel, vary between taps (there is a sensitivity difference between taps), so the characteristic variation between taps must be corrected.
  • Patent Document 1 proposes a method of calculating the offset of each tap by pre-measurement to reduce the error of distance calculation.
  • however, the correction formula for correcting the characteristic variation between taps involves division by a value close to 0 when the amplitude of the received light waveform is small, so its calculation stability and noise immunity could not be said to be sufficient.
  • This technology was made in view of such a situation, and makes it possible to more appropriately correct the characteristic variation between taps in a light receiving sensor having two charge storage units in one pixel.
  • the signal processing device of the first aspect of the present technology performs processing for calculating the distance to an object based on pixel data obtained by pixels that receive reflected light, which is irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned.
  • it has a characteristic calculation unit that calculates correction parameters for correcting the characteristics of the first charge detection unit and the second charge detection unit of the pixel, based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • in the signal processing method of the second aspect of the present technology, a signal processing device that performs processing for calculating the distance to an object based on pixel data obtained by a pixel receiving the reflected light calculates correction parameters that correct the characteristics of the first charge detection unit and the second charge detection unit of the pixel, based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • the distance measuring module of the third aspect of the present technology comprises a predetermined light emitting source and a distance measuring sensor having pixels that receive the reflected light, which is the irradiation light emitted from the light emitting source, reflected by an object, and returned.
  • the distance measuring sensor includes a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel. It has a characteristic calculation unit that calculates correction parameters for correcting characteristics.
  • in the first to third aspects of the present technology, correction parameters that correct the characteristics of the first charge detection unit and the second charge detection unit of the pixel are calculated based on the luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • the signal processing device and the ranging module may be independent devices or may be modules incorporated in other devices.
  • FIG. 24 is a detailed flowchart of the fixed pattern noise evaluation process executed in step S105 of FIG. 23. The remaining drawings are a perspective view showing a chip configuration example of the distance measuring sensor, a block diagram showing a configuration example of a smartphone as an electronic device equipped with the distance measuring module, a block diagram showing an example of the schematic configuration of a vehicle control system, and an explanatory diagram showing an example of the installation positions of the vehicle exterior information detection unit and the image pickup unit.
  • FIG. 1 is a block diagram showing a schematic configuration example of a distance measuring module to which the present technology is applied.
  • the distance measurement module 11 shown in FIG. 1 is a distance measurement module that performs distance measurement by the Indirect ToF method, and has a light emitting unit 12 and a distance measurement sensor 13.
  • the ranging module 11 irradiates an object with light (irradiation light), receives the light reflected by the object (reflected light), and generates and outputs a depth map as distance information to the object.
  • the distance measuring sensor 13 includes a control unit 14, a light receiving unit 15, and a signal processing unit 16.
  • the light emitting unit 12 includes, for example, a VCSEL array in which a plurality of VCSELs (Vertical Cavity Surface Emitting Lasers) are arranged in a plane as a light emitting source; it emits light while being modulated at the timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light.
  • the control unit 14 controls the operation of the entire ranging module 11.
  • the control unit 14 controls the light emitting unit 12 by supplying a light emitting control signal of a predetermined frequency (for example, 200 MHz or the like) to the light emitting unit 12.
  • the control unit 14 also supplies a light emission control signal to the light receiving unit 15 in order to drive the light receiving unit 15 in accordance with the timing of light emission in the light emitting unit 12.
  • the light receiving unit 15 receives the reflected light from the object with the pixel array 32, in which a plurality of pixels 31 are two-dimensionally arranged, as described in detail later with reference to FIG. 2. Then, the light receiving unit 15 supplies pixel data composed of detection signals corresponding to the received amount of reflected light to the signal processing unit 16 in units of pixels 31 of the pixel array 32.
  • the signal processing unit 16 calculates a depth value, which is the distance from the distance measuring module 11 to the object, for each pixel 31 of the pixel array 32 based on the pixel data supplied from the light receiving unit 15, generates a depth map in which the depth value is stored as the pixel value of each pixel 31, and outputs it to the outside of the module.
  • FIG. 2 is a block diagram showing a detailed configuration example of the light receiving unit 15.
  • the light receiving unit 15 has a pixel array 32 in which pixels 31, each of which generates a charge corresponding to the amount of received light and outputs a detection signal corresponding to that charge, are two-dimensionally arranged in a matrix in the row and column directions, and a drive control circuit 33 arranged in a peripheral region of the pixel array 32.
  • the drive control circuit 33 outputs control signals for controlling the drive of the pixels 31 (for example, a distribution signal DIMIX, a selection signal ADDRESS DECODE, and a reset signal RST, which will be described later) based on the light emission control signal supplied from the control unit 14.
  • the pixel 31 includes a photodiode 51 as a photoelectric conversion unit that generates a charge corresponding to the amount of received light, a first tap 52A (first charge detection unit) that detects the charge generated by the photodiode 51, and a second tap 52B (second charge detection unit).
  • the electric charge generated by one photodiode 51 is distributed to the first tap 52A or the second tap 52B.
  • the charge distributed to the first tap 52A is output from the signal line 53A as a detection signal A, and the charge distributed to the second tap 52B is output from the signal line 53B as a detection signal B.
  • the first tap 52A is composed of a transfer transistor 41A, an FD (Floating Diffusion) unit 42A, a selection transistor 43A, and a reset transistor 44A.
  • the second tap 52B is composed of a transfer transistor 41B, an FD unit 42B, a selection transistor 43B, and a reset transistor 44B.
  • the reflected light is received by the photodiode 51 with a delay time ΔT. It is assumed that the waveform of the reflected light is the same as the emission waveform of the irradiation light except for the delay of the phase (delay time ΔT) according to the distance to the object.
  • the distribution signal DIMIX_A controls the on / off of the transfer transistor 41A
  • the distribution signal DIMIX_B controls the on / off of the transfer transistor 41B.
  • the distribution signal DIMIX_A in FIG. 3 is a signal having the same phase as the irradiation light
  • the distribution signal DIMIX_B has a phase in which the distribution signal DIMIX_A is inverted.
  • the charge generated by the photodiode 51 receiving the reflected light is transferred to the FD unit 42A according to the distribution signal DIMIX_A while the transfer transistor 41A is on, and is transferred to the FD unit 42B according to the distribution signal DIMIX_B while the transfer transistor 41B is on.
  • during a predetermined period in which irradiation light of irradiation time T is periodically emitted, the charges transferred via the transfer transistor 41A are sequentially accumulated in the FD section 42A, and the charges transferred via the transfer transistor 41B are sequentially accumulated in the FD section 42B.
  • after the charge accumulation period, when the selection transistor 43A is turned on according to the selection signal ADDRESS DECODE_A, the charge accumulated in the FD section 42A is read out via the signal line 53A, and the detection signal A corresponding to the amount of that charge is output from the light receiving unit 15.
  • similarly, when the selection transistor 43B is turned on according to the selection signal ADDRESS DECODE_B, the charge accumulated in the FD section 42B is read out via the signal line 53B, and the detection signal B corresponding to the amount of that charge is output from the light receiving unit 15.
  • the charge stored in the FD section 42A is discharged when the reset transistor 44A is turned on according to the reset signal RST_A, and the charge stored in the FD section 42B is discharged when the reset transistor 44B is turned on according to the reset signal RST_B.
  • in this way, the pixel 31 distributes the charge generated by the reflected light received by the photodiode 51 to the first tap 52A or the second tap 52B according to the delay time ΔT, and outputs the detection signal A and the detection signal B as pixel data.
  • the signal processing unit 16 calculates the depth value based on the detection signal A and the detection signal B supplied as pixel data from each pixel 31.
  • as methods for calculating the depth value, there are the 2Phase method using detection signals of two phases and the 4Phase method using detection signals of four phases.
  • in the 4Phase method, the light receiving unit 15 receives (exposes to) the reflected light at light receiving timings shifted by 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light.
  • specifically, the light receiving unit 15 receives light with the phase set to 0° with respect to the irradiation timing in one frame period, 90° in the next frame period, 180° in the frame period after that, and 270° in the following frame period.
  • the phase of the light receiving timing that is preset with reference to the irradiation timing of the irradiation light is referred to as the set phase.
  • the set phase of 0°, 90°, 180°, or 270° represents the set phase of the pixel 31 at the first tap 52A unless otherwise specified. Since the second tap 52B has a phase inverted from that of the first tap 52A, when the first tap 52A has a set phase of 0°, 90°, 180°, or 270°, the second tap 52B has a set phase of 180°, 270°, 0°, or 90°, respectively.
  • FIG. 5 is a diagram showing the exposure periods of the first tap 52A of the pixel 31 in each set phase of 0 °, 90 °, 180 °, and 270 ° side by side so that the difference in the set phase can be easily understood.
  • the reflected light is received by the light receiving unit 15 in a state in which the phase is delayed from the irradiation timing of the irradiation light by the time ΔT corresponding to the distance to the object.
  • the phase difference generated according to the distance to the object is also referred to as a distance phase to distinguish it from the set phase.
  • FIG. 5 is a diagram illustrating a method of calculating the depth value d by the 2Phase method and the 4Phase method.
  • the depth value d can be obtained by the following equation (1).
  • in equation (1), c is the speed of light, ΔT is the delay time, and f is the modulation frequency of the light.
  • φ in equation (1) represents the phase shift amount [rad] of the reflected light, and is expressed by the following equation (2).
  • I and Q in equation (2) are calculated by the following equation (3), using the detection signals I A (0) to I A (270) and I B (0) to I B (270) obtained at the set phases 0°, 90°, 180°, and 270°.
  • I and Q are signals obtained by converting the phase of the sine wave from polar coordinates to a Cartesian coordinate system (IQ plane), assuming that the change in brightness of the irradiation light is a sine wave.
  • the depth value d to the object can be obtained by using only two set phases that are orthogonal to each other.
  • I and Q are given by the following equation (5).
  • in the 2Phase method, the characteristic variation between taps existing in each pixel cannot be removed as it is, but the depth value d to the object can be obtained from the detection signals of only two set phases, so distance measurement can be performed at twice the frame rate of the 4Phase method.
  • therefore, if the characteristic variation between taps is appropriately corrected, the distance to the object can be measured accurately and at high speed by adopting the 2Phase method.
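  • As a concrete illustration of the depth calculation, the sketch below computes the depth value d from I and Q. The bodies of equations (1) to (5) are not reproduced in this text, so the sketch assumes the standard Indirect ToF relations, φ = arctan(Q / I) and d = c·φ / (4πf), with the 4Phase I/Q formed from the first-tap signals of the four set phases and the 2Phase I/Q formed from both taps of two set phases; function and variable names are illustrative.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def depth_from_iq(i, q, f_mod):
    """Depth value d from the I/Q components (assumed forms of equations (1) and (2))."""
    phi = np.mod(np.arctan2(q, i), 2.0 * np.pi)   # phase shift amount [rad], wrapped to [0, 2*pi)
    return C * phi / (4.0 * np.pi * f_mod)        # d = c * phi / (4 * pi * f)

def depth_4phase(ia0, ia90, ia180, ia270, f_mod):
    """4Phase method: I/Q from the first-tap signals of the four set phases (assumed eq. (3))."""
    return depth_from_iq(ia0 - ia180, ia90 - ia270, f_mod)

def depth_2phase(ia0, ib0, ia90, ib90, f_mod):
    """2Phase method: I/Q from both taps of only two set phases (assumed eq. (5)).
    Tap variation is not corrected here; see the correction sketch later in this text."""
    return depth_from_iq(ia0 - ib0, ia90 - ib90, f_mod)
```

  • Because φ is wrapped to one period, the unambiguous range is c / (2f); with the 200 MHz modulation mentioned above this is roughly 0.75 m per period.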
  • as a method of correcting the characteristic variation between taps, there is a method of calculating the offset c 0 and the gain c 1 by the least squares method.
  • however, the value of equation (8) becomes a value close to zero when the amplitude of the emission waveform of the irradiation light that controls the irradiation timing is small. For example, when the amplitude of the emission waveform becomes 1/100, the value of equation (8) becomes about 1/8000. Therefore, the calculation of the offset c 0 and the gain c 1 becomes unstable when the reflectance is low, the object is far away, or the amount of emitted light is reduced to save power.
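  • For reference, one plausible reading of the least-squares approach referenced here (the bodies of equations (6) to (8) are not reproduced in this text) is to fit I A (θ + 180°) ≈ c 0 + c 1 · I B (θ) over the four set phases of one pixel. In such a fit the slope denominator shrinks with the square of the signal amplitude, which illustrates the instability described above; the sketch is an assumption-labeled illustration, not the patent's exact formulas.

```python
import numpy as np

def fit_offset_gain_least_squares(i_a, i_b):
    """Fit I_A(theta + 180 deg) = c0 + c1 * I_B(theta) by ordinary least squares.
    i_a, i_b: dicts {0: value, 90: ..., 180: ..., 270: ...} for one pixel.
    Illustrative reading only, not the patent's exact equations (7)/(8)."""
    x = np.array([i_b[0], i_b[90], i_b[180], i_b[270]], dtype=float)
    y = np.array([i_a[180], i_a[270], i_a[0], i_a[90]], dtype=float)  # opposite set phases
    n = len(x)
    denom = n * np.sum(x * x) - np.sum(x) ** 2   # shrinks with the square of the amplitude
    c1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / denom
    c0 = (np.sum(y) - c1 * np.sum(x)) / n
    return c0, c1
```

  • When the received amplitude is small, denom approaches zero, so noise in the detection signals is strongly amplified in c 1; this is the instability that the present technology addresses.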
  • the signal processing unit 16 of the distance measuring sensor 13 in FIG. 1 more appropriately corrects the characteristic variation between taps and calculates the distance to the object using the 2Phase method.
  • FIG. 7 is a block diagram showing a first configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG.
  • the signal processing unit 16 of FIG. 7 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, and a distance calculation unit 74.
  • the luminance waveform detected by each pixel 31 of the light receiving unit 15 is a waveform obtained by convolving the emission waveform of the irradiation light output by the light emitting unit 12 with the exposure waveform with which each pixel 31 of the light receiving unit 15 performs exposure (light reception).
  • the model determination unit 71 assumes (predicts) the shape of the emission waveform of the irradiation light output by the light emitting unit 12 and the shape of the exposure waveform when each pixel 31 of the light receiving unit 15 exposes (receives light). Then, a model (luminance model) of the brightness waveform observed by the light receiving unit 15 is determined.
  • FIG. 8 shows an example of a luminance model determined by the model determination unit 71.
  • the model determination unit 71 assumes a square wave as the shape of the emission waveform of the irradiation light and assumes a square wave as the shape of the exposure waveform as the first luminance model (model 1). Then, it is assumed that the luminance waveform observed by the light receiving unit 15 is a triangular wave.
  • as the second luminance model (model 2), the model determination unit 71 assumes a sine wave as the shape of the emission waveform of the irradiation light and a square wave as the shape of the exposure waveform, and assumes that the luminance waveform observed by the light receiving unit 15 is a sine wave.
  • as the third luminance model (model 3), the luminance waveform observed by the light receiving unit 15 is assumed to be a waveform containing harmonics.
  • the model determination unit 71 determines a luminance model, which is a model of the luminance waveform observed by the light receiving unit 15, from among a plurality of luminance models such as those shown in FIG. 8, and supplies the luminance model to the characteristic calculation unit 72. If the emission waveform is known from the initial setting by the user, the luminance model may be determined based on that set value.
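  • The three luminance models can be pictured as in the following sketch, which illustrates the waveform shapes only: a rectangular emission waveform convolved with a rectangular exposure waveform gives a triangular correlation waveform (model 1), a sinusoidal emission gives a sinusoidal one (model 2), and model 3 adds harmonic terms. The exact parameterization of each model is not reproduced in this text, and the coefficient names for model 3 are illustrative.

```python
import numpy as np

def model1_triangular(theta):
    """Model 1: square emission convolved with square exposure -> triangular waveform."""
    t = np.mod(theta, 2 * np.pi) / (2 * np.pi)
    return 1.0 - 2.0 * np.abs(t - 0.5)

def model2_sine(theta):
    """Model 2: sinusoidal emission with square exposure -> sinusoidal waveform."""
    return 0.5 + 0.5 * np.cos(theta)

def model3_harmonics(theta, c0, cos_coeffs, sin_coeffs):
    """Model 3: centre plus cosine/sine harmonics up to order M (coefficients illustrative)."""
    out = np.full_like(np.asarray(theta, dtype=float), c0)
    for m, (a, b) in enumerate(zip(cos_coeffs, sin_coeffs), start=1):
        out = out + a * np.cos(m * theta) + b * np.sin(m * theta)
    return out
```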
  • the characteristic calculation unit 72 calculates the offset c 0 and the gain c 1, which are correction parameters for correcting the characteristic variation between the taps of each pixel 31, based on the luminance waveform (luminance model) corresponding to the shape of the emission waveform and the shape of the exposure waveform. The calculated offset c 0 and gain c 1 are supplied to the signal correction unit 73.
  • when the model determination unit 71 determines the first model as the luminance model, the characteristic calculation unit 72 calculates the offset c 0 and the gain c 1, which are the correction parameters, assuming that the luminance waveform is a triangular wave.
  • when the model determination unit 71 determines the second model as the luminance model, the characteristic calculation unit 72 calculates the offset c 0 and the gain c 1, which are the correction parameters, assuming that the luminance waveform is a sine wave.
  • when the model determination unit 71 determines the third model as the luminance model, the characteristic calculation unit 72 calculates the offset c 0 and the gain c 1, which are the correction parameters, assuming that the luminance waveform contains harmonics.
  • the detailed calculation method of the correction parameter when the luminance waveform is assumed to be a triangular wave, a sine wave, or a harmonic wave will be described later.
  • the signal correction unit 73 uses the offset c 0 and the gain c 1 calculated by the characteristic calculation unit 72 to correct one of the detection signal I A (θ) of the first tap 52A and the detection signal I B (θ) of the second tap 52B.
  • for example, the signal correction unit 73 corrects the detection signal I B (0) of the second tap 52B at the set phase 0° according to the above equation (6) to generate the detection signal of the first tap 52A at the set phase 180°, and corrects the detection signal I B (90) of the second tap 52B at the set phase 90° to generate the detection signal of the first tap 52A at the set phase 270°.
  • the detection signal converted using the correction parameters is denoted by adding a double prime, like the detection signal I″ (θ).
  • I "A (180) c 0 + c 1 ⁇ I B (0)
  • I "A (270) c 0 + c 1 ⁇ I B (90)
  • the distance calculation unit 74 calculates the depth value, which is the distance to the object, by the 2Phase method based on the phase images of two set phases having an orthogonal relationship, specifically the corrected pixel data supplied from the signal correction unit 73. Then, the distance calculation unit 74 generates a depth map in which the depth value is stored as the pixel value of each pixel 31 and outputs the depth map to the outside of the module.
  • the light receiving unit 15 of the distance measuring sensor 13 may receive light while sequentially changing the set phase to 0°, 90°, 180°, and 270°, or, for example, may receive light so that only the set phases 0° and 90° are alternately repeated.
  • in either case, the signal processing unit 16 can generate and output a depth map using two adjacent phase images (two set phases).
  • the characteristic calculation unit 72 is supplied from the light receiving unit 15 (FIG. 1) with four phase images obtained by sequentially setting the set phase θ to 0°, 90°, 180°, and 270°. Since the detection signals I A (θ) and I B (θ) of the reflected light received at the set phase θ by the first tap 52A and the second tap 52B of each pixel 31 of the pixel array 32 correspond to a function representing the luminance waveform, the detection signals I A (θ) and I B (θ) are also referred to as the luminance functions I A (θ) and I B (θ).
  • the characteristic calculation unit 72 calculates the offset c 0 and the gain c 1, which are the correction parameters for correcting the characteristic variation between taps, based on the four luminance functions I A (0), I A (90), I A (180), and I A (270) of the first tap 52A and the luminance functions I B (0), I B (90), I B (180), and I B (270) of the second tap 52B of each pixel 31, detected by the light receiving unit 15 with the set phase sequentially set to 0°, 90°, 180°, and 270°.
  • the luminance function I A (θ) detected by the first tap 52A follows, as shown in FIG. 9, a triangular wave defined by the center center A, the amplitude amp A, and the offset offset A.
  • similarly, the luminance function I B (θ) detected by the second tap 52B follows, as shown in FIG. 9, a triangular wave defined by the center center B, the amplitude amp B, and the offset offset B.
  • the offset c 0 and the gain c 1 can be calculated by the following equations (9) and (10).
  • the amplitude amp A of the triangular wave of the luminance function I A (θ) detected by the first tap 52A and the amplitude amp B of the triangular wave of the luminance function I B (θ) detected by the second tap 52B can be calculated by the following equations (11) and (12), as described, for example, in M. Schmidt, "Spatiotemporal Analysis of Range Imagery," Dissertation, Department of Physics and Astronomy, University of Heidelberg, 2008.
  • the center of the triangular wave can be obtained by Eq. (18).
  • offset A = center A − amp A ... (21)
  • offset B = center B − amp B ... (22)
  • assuming a triangular wave as the luminance model, the characteristic calculation unit 72 calculates the amplitude amp A of the triangular wave of the luminance function I A (θ) detected by the first tap 52A and the amplitude amp B of the triangular wave of the luminance function I B (θ) detected by the second tap 52B by the above equations (11) and (12), and calculates the offset offset A and the offset offset B by the above equations (21) and (22).
  • the characteristic calculation unit 72 calculates the offset c 0 and the gain c 1 by the above equations (9) and (10).
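  • A minimal sketch of the parameter calculation for the triangular-wave model follows. The bodies of equations (9) to (12) and (18) are not reproduced in this text, so the amplitude and center expressions below are the standard results for a triangular wave sampled at 0°, 90°, 180°, and 270°, and the combination c 1 = amp A / amp B, c 0 = offset A − c 1 · offset B is one plausible reading of equations (9) and (10), not a verbatim transcription.

```python
def tap_params_triangular(i0, i90, i180, i270):
    """Centre, amplitude and offset of one tap's luminance function under the
    triangular-wave model (assumed forms of equations (11)/(12), (18), (21)/(22))."""
    center = (i0 + i90 + i180 + i270) / 4.0
    amp = (abs(i0 - i180) + abs(i90 - i270)) / 2.0
    offset = center - amp
    return center, amp, offset

def correction_params(amp_a, offset_a, amp_b, offset_b):
    """Gain/offset mapping tap B onto tap A, so that I''_A = c0 + c1 * I_B.
    One plausible reading of equations (9) and (10), not a verbatim transcription."""
    c1 = amp_a / amp_b
    c0 = offset_a - c1 * offset_b
    return c0, c1
```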
  • FIG. 10 shows a conceptual diagram of correction processing performed by the signal correction unit 73 when the luminance model is assumed to be a triangular wave of the first model.
  • the signal correction unit 73 corrects the detection signal I B (0) of the second tap 52B at the set phase 0° using the offset c 0 and the gain c 1 to generate the detection signal I″ A (180) of the first tap 52A at the set phase 180°. Further, the signal correction unit 73 corrects the detection signal I B (90) of the second tap 52B at the set phase 90° using the offset c 0 and the gain c 1 to generate the detection signal I″ A (270) of the first tap 52A at the set phase 270°.
  • the four-phase detection signals I ( ⁇ ) are aligned, so the depth value can be calculated using equations (1), (2), and (4).
  • when a sine wave is assumed as the luminance model, the arithmetic expressions for obtaining the amplitude amp A and the amplitude amp B differ from the above equations (11) and (12) for the triangular wave.
  • the amplitude amp A and the amplitude amp B when the luminance model is a sine wave are calculated by the following equations (23) and (24).
  • assuming a sine wave as the luminance model, the characteristic calculation unit 72 calculates the amplitude amp A of the sine wave of the luminance function I A (θ) detected by the first tap 52A and the amplitude amp B of the sine wave of the luminance function I B (θ) detected by the second tap 52B by the above equations (23) and (24).
  • then, the characteristic calculation unit 72 calculates the offset c 0 and the gain c 1 by the above equations (9) and (10).
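  • For the sine-wave model, the amplitude is presumably obtained from the quadrature components of the four samples; the expression below is the standard result for a sinusoid sampled at 0°, 90°, 180°, and 270° and is offered as a plausible form of equations (23) and (24). The center, offsets, and c 0 / c 1 can then be computed as in the triangular-wave sketch above.

```python
import numpy as np

def amplitude_sine(i0, i90, i180, i270):
    """Amplitude of a sinusoidal luminance function sampled at the four set phases
    (assumed form of equations (23)/(24))."""
    return 0.5 * np.hypot(i0 - i180, i90 - i270)
```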
  • FIG. 12 shows a conceptual diagram of correction processing performed by the signal correction unit 73 when the luminance model is assumed to be a sine wave of the second model.
  • the actual emission waveform may be distorted from a rectangle depending on the time constant of the light emitting circuit, the modulation frequency of the emission waveform and the exposure waveform, and so on. In that case, assuming the luminance model to be a sine wave rather than a triangular wave may be closer to the actual luminance waveform, and calculating the offset c 0 and the gain c 1 under the sine wave assumption may give a better distance calculation result.
  • when harmonics are assumed as the luminance model, the characteristic calculation unit 72 calculates the correction parameters using machine learning.
  • the ranging module 11 acquires four phase images with the set phases set to 0 °, 90 °, 180 °, and 270 ° in various scenes (measurement targets).
  • the four phase images of the acquired various scenes are accumulated in the characteristic calculation unit 72.
  • the characteristic calculation unit 72 has a neural network learner having the same configuration for each of the first tap 52A and the second tap 52B.
  • FIG. 13 is a diagram illustrating a learning process for learning the luminance model of the harmonics detected by the first tap 52A.
  • in equation (25), the luminance function I′ A (θ), assumed to contain harmonics, is represented by the center c A 0, cosine terms from the 1st to the M-th order, and sine terms from the 1st to the M-th order (the order M is a natural number).
  • the input v A in of the learner 81A is the luminance values I A (0), I A (90), I A (180), and I A (270) of the first tap 52A of a predetermined pixel 31 in the accumulated four phase images of set phases 0°, 90°, 180°, and 270° of a predetermined scene.
  • from this input, the luminance function I′ A (θ) assumed to contain harmonics is restored.
  • if the harmonic luminance function I′ A (θ) represents the true luminance function I A (θ) well, its values at the set phases 0°, 90°, 180°, and 270° should match the luminance values I A (0), I A (90), I A (180), and I A (270) given as the input to the learner 81A.
  • therefore, the characteristic calculation unit 72 performs learning so that the difference between the output v A out of the learner 81A, that is, the values of the luminance function I′ A (θ), and the input v A in is reduced.
  • in equation (27), W′ A represents the weighting coefficients after updating, and η A is a coefficient representing the learning rate.
  • by repeating the update of the weighting coefficients W A a predetermined number of times using the luminance values I A (0), I A (90), I A (180), and I A (270) of the first tap 52A of the same pixel in the four phase images of the accumulated various scenes, the harmonic luminance function I′ A (θ) of equation (25) is obtained.
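  • A minimal sketch of such a learner for one tap of one pixel follows. The bodies of equations (25) to (27) are not reproduced in this text, so the sketch fits a truncated Fourier series (a center term plus cosine and sine terms up to order M) to the four luminance values by gradient descent on a squared-error loss, iterating over the accumulated scenes; the learning rate, iteration count, and parameter names are illustrative.

```python
import numpy as np

THETAS = np.deg2rad([0.0, 90.0, 180.0, 270.0])  # the four set phases

def harmonic_value(w, theta, order_m):
    """I'(theta) = c0 + sum_m a_m cos(m*theta) + b_m sin(m*theta); w = [c0, a_1..a_M, b_1..b_M]."""
    m = np.arange(1, order_m + 1)
    return w[0] + w[1:order_m + 1] @ np.cos(m * theta) + w[order_m + 1:] @ np.sin(m * theta)

def fit_harmonic_tap(samples_per_scene, order_m=3, eta=1e-3, n_iter=1000):
    """Gradient-descent fit of the harmonic luminance model for one tap of one pixel.
    samples_per_scene: list of arrays [I(0), I(90), I(180), I(270)] over many scenes."""
    w = np.zeros(2 * order_m + 1)
    w[0] = np.mean(samples_per_scene)          # start from the mean value as the centre
    for it in range(n_iter):
        v_in = samples_per_scene[it % len(samples_per_scene)]
        for theta, target in zip(THETAS, v_in):
            m = np.arange(1, order_m + 1)
            basis = np.concatenate(([1.0], np.cos(m * theta), np.sin(m * theta)))
            err = basis @ w - target           # first evaluation function: squared error term
            w -= eta * err * basis             # weight update (cf. equation (27))
    return w
```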
  • the characteristic calculation unit 72 performs the same learning for the second tap 52B.
  • in equation (28), the luminance function I′ B (θ), assumed to contain harmonics, is represented by the center c B 0, cosine terms from the 1st to the M-th order, and sine terms from the 1st to the M-th order (the order M is a natural number).
  • the luminance values I B (0), I B (90), I B (180), and I B (270) of the second tap 52B of the predetermined pixel 31 are the inputs. That is, the input is v B in = {I B (0), I B (90), I B (180), I B (270)}.
  • the characteristic calculation unit 72 performs learning so that the difference between the output v B out of the learner 81B, that is, the values of the luminance function I′ B (θ), and the input v B in decreases.
  • in equation (30), W′ B represents the weighting coefficients after updating, and η B is a coefficient representing the learning rate.
  • the luminance functions I′ A (θ) and I′ B (θ) to be restored are based on the same emission waveform and exposure waveform, so although their centers and amplitudes are considered to differ due to the tap characteristics, the shapes of the functions obtained by whitening I′ (θ) are considered to be the same. Therefore, as a constraint condition when updating the weighting coefficients W of each node of the learners 81, a second evaluation function L 2 represented by the following equation (31) may be added to the first evaluation functions L A 1 and L B 1 described above, and the weighting coefficients W may be obtained so that the second evaluation function L 2 is also small. The update of the weighting coefficients W in that case is represented by equation (32).
  • FIG. 14 shows a conceptual diagram of an operation for obtaining the second evaluation function L 2.
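  • A minimal sketch of this constraint, assuming that whitening means normalizing each estimated luminance function to zero mean and unit standard deviation over one period, and that equation (31) penalizes the squared difference of the whitened shapes of the two taps; both points are assumptions about the unreproduced equation.

```python
import numpy as np

def whiten(values):
    """Remove the centre and scale: the whitened shape should coincide for both taps."""
    v = np.asarray(values, dtype=float)
    return (v - v.mean()) / v.std()

def second_evaluation(i_a_values, i_b_values):
    """L2: squared difference of the whitened shapes of I'_A and I'_B sampled on a
    common grid of phases (assumed form of equation (31))."""
    return float(np.sum((whiten(i_a_values) - whiten(i_b_values)) ** 2))

# usage sketch: evaluate both fitted harmonic models from the earlier sketch on a dense grid
# thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
# l2 = second_evaluation([harmonic_value(w_a, t, 3) for t in thetas],
#                        [harmonic_value(w_b, t, 3) for t in thetas])
```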
  • in this way, the luminance function I′ A (θ) of the reflected light detected by the first tap 52A and the luminance function I′ B (θ) of the reflected light detected by the second tap 52B are estimated.
  • the characteristic calculation unit 72 uses the estimated luminance function I′ A (θ) of the first tap 52A and the estimated luminance function I′ B (θ) of the second tap 52B to calculate the gain c 1 by the following equation (33) and the offset c 0 by the following equation (34).
  • the characteristic calculation unit 72 can calculate the correction parameters using machine learning as described above.
  • the method of calculating the harmonics when the luminance model is assumed to contain harmonics is not limited to machine learning, and other methods may be used. For example, a method such as that disclosed in Marvin Lindner and Andreas Kolb, "Lateral and Depth Calibration of PMD-Distance Sensors," International Symposium on Visual Computing, 2006, may be adopted. The technique disclosed in this document captures scenes at multiple known depths, obtains the harmonic component from the difference between the known depths and the actually measured depths, and generates a look-up table for depth correction.
  • the correction parameters can be calculated more accurately than with a simple model.
  • in step S1, the control unit 14 of the distance measuring sensor 13 determines whether to estimate the correction parameters for correcting the characteristic variation between the taps of each pixel 31 of the pixel array 32 of the light receiving unit 15.
  • it can be determined, for example, that the correction parameters are to be estimated without fail in the first measurement after the distance measuring module 11 is activated, or every predetermined number of measurements.
  • if it is determined in step S1 that the correction parameters are to be estimated, the process proceeds to step S2, and the distance measuring module 11 executes the correction parameter estimation process for estimating the offset c 0 and the gain c 1, which are the correction parameters of each pixel 31.
  • on the other hand, if it is determined in step S1 that the correction parameters are not to be estimated, the correction parameter estimation process in step S2 is skipped, and the process proceeds to step S3. When the correction parameters are not estimated, correction parameters that have already been set are used.
  • in step S3, the distance measuring module 11 executes a distance measurement process for measuring the distance to the object by the 2Phase method using the correction parameters for correcting the characteristic variation between the taps of each pixel 31, and ends the process.
  • FIG. 16 is a detailed flowchart of the correction parameter estimation process executed in step S2 of the measurement process of FIG.
  • in step S11, the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 0° and supplies the light emission control signal to the light emitting unit 12 and the light receiving unit 15.
  • in step S12, the ranging module 11 emits the irradiation light and receives the reflected light reflected by the object and returned. That is, the light emitting unit 12 emits light while being modulated at the timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light.
  • the light receiving unit 15 receives the reflected light from the object.
  • the light receiving unit 15 supplies pixel data composed of detection signals according to the amount of received reflected light to the signal processing unit 16 in units of pixels 31 of the pixel array 32.
  • in step S13, the control unit 14 determines whether or not four phase images with the set phases 0°, 90°, 180°, and 270° have been acquired.
  • if it is determined in step S13 that the four phase images have not yet been acquired, the process proceeds to step S14, and the control unit 14 updates the set phase to a value incremented by 90° from the current value. After step S14, the process returns to step S12, and steps S12 and S13 described above are repeated.
  • on the other hand, when it is determined in step S13 that the four phase images have been acquired, the process proceeds to step S15, and the characteristic calculation unit 72 of the signal processing unit 16 calculates the offset c 0 and the gain c 1, the correction parameters for correcting the characteristic variation between the taps of each pixel 31, based on the luminance waveform corresponding to the shape of the emission waveform and the shape of the exposure waveform.
  • the luminance model is determined before the process of step S15, and is supplied from the model determination unit 71 to the characteristic calculation unit 72.
  • in step S15, the offset c 0 and the gain c 1 are calculated by the calculation methods described above for the respective cases where a triangular wave, a sine wave, or a harmonic waveform is assumed as the luminance model.
  • the calculated offset c 0 and gain c 1 are supplied to the signal correction unit 73, the correction parameter estimation process of FIG. 16 is completed, and the process returns to the measurement process of FIG.
  • in the above, the correction parameters (offset c 0 and gain c 1) were calculated using only the four phase images that can generate one depth map; however, in order to reduce the influence of noise, eight or more phase images corresponding to a plurality of depth maps may be used, and the average of the plurality of correction parameters obtained from them may be used as the final correction parameters.
  • FIG. 17 is a detailed flowchart of the distance measurement process by the 2 Phase method executed in step S3 of the measurement process of FIG.
  • in step S31, the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 0° and supplies the light emission control signal to the light emitting unit 12 and the light receiving unit 15.
  • in step S32, the ranging module 11 emits the irradiation light and receives the reflected light reflected by the object and returned. That is, the light emitting unit 12 emits light while being modulated at the timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light.
  • the light receiving unit 15 receives the reflected light from the object.
  • the light receiving unit 15 supplies pixel data composed of detection signals according to the amount of received reflected light to the signal processing unit 16 in units of pixels 31 of the pixel array 32.
  • in step S33, the control unit 14 determines whether or not two phase images with the set phases 0° and 90° have been acquired.
  • if it is determined in step S33 that the two phase images have not yet been acquired, the process proceeds to step S34, and the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 90°. After step S34, the process returns to step S32, and steps S32 and S33 described above are repeated.
  • on the other hand, if it is determined in step S33 that the two phase images have been acquired, the process proceeds to step S35, and the signal correction unit 73 of the signal processing unit 16 corrects the detection signal I B (θ) of the second tap 52B according to equation (6), using the offset c 0 and the gain c 1 calculated by the characteristic calculation unit 72.
  • that is, the signal correction unit 73 uses the detection signals I B (0) and I B (90) of the second tap 52B of each pixel 31 acquired at the set phases 0° and 90° to calculate the corrected detection signals I″ A (180) and I″ A (270) as follows.
  • I "A (180) c 0 + c 1 ⁇ I B (0)
  • I "A (270) c 0 + c 1 ⁇ I B (90)
  • in step S36, the distance calculation unit 74 uses the corrected two phase images of the set phases 0° and 90° supplied from the signal correction unit 73 to calculate, by the 2Phase method, the depth value, which is the distance to the object, for each pixel 31 of the pixel array 32. Then, the distance calculation unit 74 generates a depth map in which the depth value is stored as the pixel value of each pixel 31 and outputs the depth map to the outside of the module.
  • next, step S15 of FIG. 16, which is the process of calculating the offset c 0 and the gain c 1 using machine learning when harmonics are assumed as the luminance model, will be described in more detail with reference to the flowchart of FIG.
  • in step S51, the characteristic calculation unit 72 sets the weighting coefficients W A of the learner 81A corresponding to the first tap 52A to predetermined initial values.
  • in step S52, the characteristic calculation unit 72 extracts, from the four phase images of the set phases 0°, 90°, 180°, and 270° of one predetermined scene among the accumulated phase images of the plurality of scenes, the luminance values I A (0), I A (90), I A (180), and I A (270) of the first tap 52A of a predetermined pixel 31, and sets them as the input v A in = {I A (0), I A (90), I A (180), I A (270)} of the learner 81A.
  • then, the learner 81A calculates the values of the luminance function I′ A (θ) assumed to contain harmonics, and the updated weighting coefficients W′ A are calculated.
  • in step S56, the characteristic calculation unit 72 determines whether the update of the weighting coefficients W A has been performed a predetermined number of times.
  • if it is determined in step S56 that the weighting coefficients W A have not yet been updated the predetermined number of times, the process returns to step S52, and the characteristic calculation unit 72 acquires the four phase images of the set phases 0°, 90°, 180°, and 270° of a predetermined scene not yet selected from among the accumulated phase images of the plurality of scenes, and the above-described processes of steps S52 to S56 are repeated.
  • on the other hand, when it is determined in step S56 that the weighting coefficients W A have been updated the predetermined number of times, the process proceeds to step S57.
  • the processes of steps S51 to S56 are also executed for the learner 81B that learns the luminance model of the harmonics detected by the second tap 52B, in the same manner as the process of the learner 81A.
  • in step S57, the characteristic calculation unit 72 uses the calculated luminance function I′ A (θ) of the first tap 52A and the calculated luminance function I′ B (θ) of the second tap 52B to calculate the offset c 0 by the above equation (34) and the gain c 1 by the above equation (33).
  • as described above, by calculating the correction parameters for correcting the characteristic variation between the taps of each pixel 31 based on the determined luminance model (luminance waveform), it is possible to more appropriately correct the characteristic variation between taps.
  • FIG. 19 is a block diagram showing a second configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG.
  • in FIG. 19, the parts corresponding to the first configuration example shown in FIG. 7 are designated by the same reference numerals, and the description of those parts will be omitted as appropriate.
  • the signal processing unit 16 of FIG. 19 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, a distance calculation unit 74, and an optimum model selection unit 91.
  • the optimum model selection unit 91 is further added to the first configuration example of the signal processing unit 16 shown in FIG. 7.
  • in the first configuration example, the model determination unit 71 determined one predetermined luminance model from a plurality of selectable luminance models, and the characteristic calculation unit 72 calculated the correction parameters (offset c 0 and gain c 1) based on the one luminance model (luminance waveform) supplied from the model determination unit 71.
  • in the second configuration example, on the other hand, the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72, and the characteristic calculation unit 72 calculates the correction parameters (offset c 0 and gain c 1) for all the luminance models (luminance waveforms).
  • assume that N luminance models are stored in the model determination unit 71, and let the offset c 0 and the gain c 1 calculated for each of the N luminance models be (c 0 1, c 1 1), (c 0 2, c 1 2), ..., (c 0 N, c 1 N).
  • the characteristic calculation unit 72 supplies the offset c 0 and the gain c 1 calculated for each of the N luminance models to the optimum model selection unit 91.
  • the optimum model selection unit 91 selects the optimum pair from the offsets and gains (c 0 1, c 1 1), (c 0 2, c 1 2), ..., (c 0 N, c 1 N) of the N luminance models as follows. First, the phase shift amount φ ref of each pixel 31 is calculated by the 4Phase method, that is, by the above equations (3) and (2).
  • next, using each offset and gain pair (c 0 1, c 1 1), (c 0 2, c 1 2), ..., (c 0 N, c 1 N) in equation (6), the optimum model selection unit 91 generates the detection signal I″ A (180) of the first tap 52A at the set phase 180° and the detection signal I″ A (270) of the first tap 52A at the set phase 270°.
  • I "A (180) c 0 + c 1 ⁇ I B (0)
  • I "A (270) c 0 + c 1 ⁇ I B (90)
  • then, the optimum model selection unit 91 calculates the phase shift amounts φ(c 0 1, c 1 1), φ(c 0 2, c 1 2), ..., φ(c 0 N, c 1 N) of each pixel 31 by the 2Phase method, that is, according to the above equations (4) and (2).
  • the optimum model selection unit 91 compares the phase shift amount φ ref with the phase shift amounts φ(c 0 1, c 1 1), φ(c 0 2, c 1 2), ..., φ(c 0 N, c 1 N) and selects the optimum offset and gain pair (c 0 opt, c 1 opt). That is, the optimum model selection unit 91 calculates the following equation (35).
  • MSE [] in Eq. (35) is a function that calculates the mean square error in [].
  • argmin is a function that finds the (c 0 n, c 1 n) that minimizes MSE[]. Therefore, equation (35) indicates that (c 0 opt, c 1 opt) is determined as the (c 0 n, c 1 n) that minimizes the mean square error of φ ref − φ(c 0 n, c 1 n).
  • the function for evaluating the error is not limited to the mean square error, and other functions may be used.
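  • A minimal sketch of this selection step follows, assuming equation (35) compares, for each candidate pair, the 2Phase phase shift against the 4Phase reference φ ref by mean square error over the pixels and keeps the pair with the smallest error; helper names are illustrative.

```python
import numpy as np

def phase_2phase(i_a_0, i_b_0, i_a_90, i_b_90, c0, c1):
    """Phase shift of each pixel by the 2Phase method using a candidate (c0, c1)."""
    i = i_a_0 - (c0 + c1 * i_b_0)     # I" A (180) substituted for the missing set phase
    q = i_a_90 - (c0 + c1 * i_b_90)   # I" A (270)
    return np.mod(np.arctan2(q, i), 2.0 * np.pi)

def select_optimum_model(phi_ref, pixel_data, candidates):
    """Pick the (c0, c1) pair whose 2Phase phase shift is closest, in mean square error,
    to the 4Phase reference phi_ref (cf. equation (35)).
    pixel_data: arrays (i_a_0, i_b_0, i_a_90, i_b_90); candidates: list of (c0, c1)."""
    i_a_0, i_b_0, i_a_90, i_b_90 = pixel_data
    best, best_mse = None, np.inf
    for c0, c1 in candidates:
        phi = phase_2phase(i_a_0, i_b_0, i_a_90, i_b_90, c0, c1)
        mse = np.mean((phi_ref - phi) ** 2)   # MSE[] of equation (35)
        if mse < best_mse:
            best, best_mse = (c0, c1), mse
    return best
```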
  • the optimum model selection unit 91 supplies the selected offset c 0 opt and gain c 1 opt to the signal correction unit 73.
  • the processing of the signal correction unit 73 and the distance calculation unit 74 is the same as that of the first configuration example.
  • FIG. 20 is a flowchart of the measurement process of the distance measuring module 11 of FIG. 1 when the second configuration example of the signal processing unit 16 is adopted. This process is started, for example, when the control unit of the host device in which the distance measuring module 11 is incorporated instructs the start of measurement.
  • in step S71, the control unit 14 of the distance measuring sensor 13 determines whether to estimate the correction parameters for correcting the characteristic variation between the taps of each pixel 31 of the pixel array 32 of the light receiving unit 15.
  • the process of step S71 is the same as step S1 of the measurement process in the first configuration example described in the flowchart of FIG.
  • if it is determined in step S71 that the correction parameters are to be estimated, the process proceeds to step S72, and the ranging module 11 executes the correction parameter estimation process for estimating the correction parameters for all the luminance models.
  • in step S2 of the first configuration example, the correction parameters of the one predetermined luminance model determined by the model determination unit 71 are estimated, whereas the process of step S72 of the second configuration example differs in that the correction parameters of all the luminance models stored in the model determination unit 71 are estimated.
  • the characteristic calculation unit 72 supplies the offset c 0 and the gain c 1 calculated for each of the N luminance models to the optimum model selection unit 91.
  • after step S72, the process proceeds to step S73, and the optimum model selection unit 91 selects the optimum luminance model from the N luminance models. That is, by calculating equation (35), the optimum offset c 0 opt and gain c 1 opt are selected from the correction parameters (offset c 0 n and gain c 1 n) of the N luminance models.
  • on the other hand, if it is determined in step S71 that the correction parameters are not to be estimated, the processes of steps S72 and S73 are skipped, and the process proceeds to step S74.
  • in step S74, the distance measuring module 11 executes a distance measurement process for measuring the distance to the object by the 2Phase method using the correction parameters for correcting the characteristic variation between the taps of each pixel 31, and ends the process.
  • as described above, by selecting the optimum luminance model from various luminance models (luminance waveforms) and calculating the correction parameters for correcting the characteristic variation between the taps of each pixel 31, the characteristic variation between taps can be corrected more appropriately.
  • FIG. 21 is a block diagram showing a third configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG.
  • in FIG. 21, the parts corresponding to the second configuration example shown in FIG. 19 are designated by the same reference numerals, and the description of those parts will be omitted as appropriate.
  • the signal processing unit 16 of FIG. 21 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, a distance calculation unit 74, an optimum model selection unit 91, and an evaluation unit 101.
  • the evaluation unit 101 is further added to the second configuration example of the signal processing unit 16 shown in FIG.
  • the depth map calculated by the distance calculation unit 74 is supplied to the evaluation unit 101.
  • the evaluation unit 101 evaluates whether or not the correction parameter currently used is appropriate by using two depth maps that are continuous in the time direction. If the correction parameter is not a value obtained by appropriately correcting the characteristic variation between taps of the pixel 31, the characteristic variation appears in the depth map as fixed pattern noise. The evaluation unit 101 determines whether or not fixed pattern noise appears in the depth map.
  • when the evaluation unit 101 determines that fixed pattern noise appears in the depth map, it instructs the model determination unit 71 to recalculate the correction parameters.
  • the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72.
  • the fixed pattern noise evaluation process performed by the evaluation unit 101 will be described with reference to FIG. 22.
  • the depth map D t-1 at time t-1 and the depth map D t at time t are input to the evaluation unit 101.
  • first, the evaluation unit 101 applies a spatial filter f for smoothing the image to each of the two depth maps D t-1 and D t that are consecutive in the time direction, and generates the filtered depth maps D′ t-1 and D′ t.
  • the depth maps D′ t-1 and D′ t after filtering are referred to as the processed depth maps D′ t-1 and D′ t.
  • D′ t-1 = f(D t-1)
  • D′ t = f(D t)
  • as the spatial filter f, for example, a Gaussian filter or an edge-preserving filter such as a bilateral filter can be adopted.
  • next, the evaluation unit 101 sequentially slides the region positions of small regions g t-1 and g t, whose region size is set in advance, within the processed depth maps D′ t-1 and D′ t, and selects, as the representative small region gs′ t used for the evaluation of the fixed pattern noise, the small region for which both the variance of the depth values d of the pixels in the region g and the difference in the depth values d between the two processed depth maps are smallest.
  • This process can be expressed as follows.
  • V () represents the operator for the variance.
  • then, the evaluation unit 101 extracts the small regions gs at the same region position as the representative small region gs′ t from each of the two depth maps D t-1 and D t, and sets them as the small regions gs t-1 and gs t.
  • the evaluation unit 101 calculates the following equation (37) using the small regions gs t-1 and gs t extracted from the two depth maps D t-1 and D t, and determines whether or not the sum of the difference between the small regions gs t-1 and gs t and the variance of the small region gs t is larger than a predetermined threshold value Th.
  • when the sum is larger than the threshold value Th, the evaluation unit 101 determines that the correction parameters are not values that appropriately correct the characteristic variation between taps.
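  • A minimal sketch of the fixed pattern noise check follows, using a Gaussian filter from SciPy as the spatial filter f (a bilateral filter could be used instead, as noted above). It selects the representative small region as the window minimizing the sum of the temporal difference and the variance, and applies the threshold test of equation (37); the window size, threshold, the mean-squared form of the difference term, and the equal weighting of the two terms are all assumptions, since the bodies of equations (36) and (37) are not reproduced in this text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def evaluate_fixed_pattern_noise(d_prev, d_curr, win=8, sigma=1.5, threshold=0.01):
    """Fixed pattern noise check over two consecutive depth maps (cf. eqs. (36)/(37))."""
    # spatial filter f applied to both depth maps
    dp_prev, dp_curr = gaussian_filter(d_prev, sigma), gaussian_filter(d_curr, sigma)

    h, w = d_curr.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            a = dp_prev[y:y + win, x:x + win]
            b = dp_curr[y:y + win, x:x + win]
            # flat and temporally stable region: small variance and small frame difference
            score = np.var(b) + np.mean((a - b) ** 2)
            if score < best_score:
                best_score, best_pos = score, (y, x)

    y, x = best_pos                      # same region position in the unfiltered maps
    gs_prev = d_prev[y:y + win, x:x + win]
    gs_curr = d_curr[y:y + win, x:x + win]
    value = np.mean((gs_prev - gs_curr) ** 2) + np.var(gs_curr)   # left side of eq. (37)
    return value > threshold   # True -> fixed pattern noise suspected, re-estimate parameters
```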
  • FIG. 23 is a flowchart of the measurement process of the distance measuring module 11 of FIG. 1 when the third configuration example of the signal processing unit 16 is adopted. This process is started, for example, when the control unit of the host device in which the distance measuring module 11 is incorporated instructs the start of measurement.
  • Since steps S101 to S104 of FIG. 23 are the same as the processes of steps S71 to S74 described with reference to FIG. 20, their description is omitted.
  • The depth map calculated in step S104 is also supplied to the evaluation unit 101, and the evaluation unit 101 stores the two depth maps D_{t-1} and D_t that are consecutive in the time direction.
  • In step S105, the evaluation unit 101 executes a fixed pattern noise evaluation process that evaluates whether or not the correction parameter currently in use is appropriate, using the two depth maps that are consecutive in the time direction. Details of the fixed pattern noise evaluation process in step S105 will be described later with reference to the flowchart of FIG. 24.
  • In step S106, the evaluation unit 101 uses the result of the fixed pattern noise evaluation process to determine whether fixed pattern noise is generated in the depth maps D_{t-1} and D_t. Specifically, the evaluation unit 101 determines whether or not the conditional expression of equation (37) is satisfied.
  • If the conditional expression of equation (37) is satisfied in step S106 and it is determined that fixed pattern noise is generated, the process proceeds to step S107, and the evaluation unit 101 instructs the model determination unit 71 to recalculate the correction parameters.
  • the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72.
  • the characteristic calculation unit 72 executes a correction parameter estimation process for estimating correction parameters for all luminance models.
  • The process of step S107 performed after the recalculation of the correction parameters is instructed is the same as that of step S102 and of step S72 of FIG. 20.
  • In step S108, the optimum model selection unit 91 selects the optimum luminance model from the N luminance models.
  • The process of step S108 is the same as that of step S103 and of step S73 of FIG. 20.
  • On the other hand, if it is determined in step S106 that fixed pattern noise has not occurred, the processes of steps S107 and S108 are skipped, and the measurement process ends.
  • FIG. 24 is a detailed flowchart of the fixed pattern noise evaluation process executed in step S105 of FIG. 23.
  • In step S121, the evaluation unit 101 applies a spatial filter f for smoothing an image to each of the two depth maps D_{t-1} and D_t that are consecutive in the time direction, and generates the processed depth maps D'_{t-1} and D'_t.
  • In step S122, the evaluation unit 101 slides the small regions g_{t-1} and g_t within the processed depth maps D'_{t-1} and D'_t, respectively, and sets the representative small region gs'_t.
  • The representative small region gs'_t is, as expressed by equation (36), the small region g_t in which both the variance of the depth values d within the small region g and the difference in depth value d between the two processed depth maps are smallest.
  • In step S123, the evaluation unit 101 extracts, from each of the two depth maps D_{t-1} and D_t, the small region gs at the same region position as the representative small region gs'_t, and sets the extracted regions as the small regions gs_{t-1} and gs_t.
  • In step S124, the evaluation unit 101 uses the small regions gs_{t-1} and gs_t extracted from the two depth maps D_{t-1} and D_t, respectively, to calculate the left side of equation (37), that is, the sum of the difference between the small regions gs_{t-1} and gs_t and the variance of the small region gs_t.
  • After step S124, the process proceeds to step S106 of FIG. 23, where it is determined whether or not the conditional expression of equation (37) is satisfied.
  • The signal processing unit 16 may be configured as any one of the first to third configuration examples described above, or may be configured so that the first to third configuration examples can be executed selectively.
  • With any of these configurations, stable estimation of the offset c_0 and the gain c_1 can be expected even when the signal-to-noise ratio of the signal is not sufficiently high at the time of correcting the characteristic variation between taps.
  • FIG. 25 is a perspective view showing a chip configuration example of the distance measuring sensor 13.
  • As shown in A of FIG. 25, the distance measuring sensor 13 can be configured as a single chip in which a plurality of dies (substrates), namely a sensor die 151 and a logic die 152, are stacked.
  • A sensor unit 161 is formed (as a circuit) on the sensor die 151, and a logic unit 162 is formed on the logic die 152.
  • A light receiving unit 15 is formed in the sensor unit 161.
  • The logic unit 162 includes, for example, the control unit 14, the signal processing unit 16, input/output terminals, and the like.
  • The distance measuring sensor 13 may also be composed of three layers in which another logic die is stacked in addition to the sensor die 151 and the logic die 152, or, of course, of a stack of four or more dies (substrates).
  • Alternatively, as shown in B of FIG. 25, the distance measuring sensor 13 may be composed of a first chip 171 and a second chip 172, and a relay substrate (interposer) 173 on which they are mounted.
  • a light receiving portion 15 is formed on the first chip 171.
  • a control unit 14, a signal processing unit 16, and the like are formed on the second chip 172.
  • The circuit arrangement of the sensor die 151 and the logic die 152 in A of FIG. 25 and the circuit arrangement of the first chip 171 and the second chip 172 in B of FIG. 25 are merely examples, and the arrangement is not limited to these.
  • For example, the signal processing unit 16 that performs the depth map generation processing and the like may be provided outside the distance measuring sensor 13 (as a separate chip), as a signal processing device.
  • The distance measuring module 11 described above can be mounted on electronic devices such as smartphones, tablet terminals, mobile phones, personal computers, game machines, television receivers, wearable terminals, digital still cameras, and digital video cameras.
  • FIG. 26 is a block diagram showing a configuration example of a smartphone as an electronic device equipped with a ranging module.
  • The smartphone 201 is configured by connecting a distance measuring module 202, an image pickup device 203, a display 204, a speaker 205, a microphone 206, a communication module 207, a sensor unit 208, a touch panel 209, and a control unit 210 via a bus 211. Further, the control unit 210 has functions as an application processing unit 221 and an operating system processing unit 222 through the CPU executing a program.
  • The distance measuring module 11 of FIG. 1 is applied to the distance measuring module 202.
  • The distance measuring module 202 is arranged on the front of the smartphone 201 and, by performing distance measurement for the user of the smartphone 201, can output the depth values of the surface shape of the user's face, hands, fingers, and the like as the distance measurement result. The distance measurement result from the distance measuring module 202 can also be used to recognize the user's gestures.
  • The image pickup device 203 is arranged on the front of the smartphone 201 and acquires an image of the user of the smartphone 201 by imaging the user as a subject. Although not shown, the image pickup device 203 may also be arranged on the back of the smartphone 201.
  • The display 204 displays an operation screen for processing performed by the application processing unit 221 and the operating system processing unit 222, an image captured by the image pickup device 203, and the like.
  • The speaker 205 and the microphone 206, for example, output the voice of the other party and pick up the user's voice when a call is made with the smartphone 201.
  • the communication module 207 communicates via the communication network.
  • the sensor unit 208 senses speed, acceleration, proximity, etc., and the touch panel 209 acquires a touch operation by the user on the operation screen displayed on the display 204.
  • the application processing unit 221 performs processing for providing various services by the smartphone 201.
  • the application processing unit 221 can create a face by computer graphics that virtually reproduces the user's facial expression based on the depth supplied from the distance measuring module 202, and can perform a process of displaying the face on the display 204.
  • the application processing unit 221 can perform a process of creating, for example, three-dimensional shape data of an arbitrary three-dimensional object based on the depth supplied from the distance measuring module 202.
  • the operation system processing unit 222 performs processing for realizing the basic functions and operations of the smartphone 201.
  • the operation system processing unit 222 can perform a process of authenticating the user's face and unlocking the smartphone 201 based on the depth value supplied from the distance measuring module 202.
  • the operation system processing unit 222 performs, for example, a process of recognizing a user's gesture based on the depth value supplied from the distance measuring module 202, and performs a process of inputting various operations according to the gesture. Can be done.
  • In the smartphone 201 configured in this way, by applying the distance measuring module 11 described above, a depth map can be generated with high accuracy and at high speed, for example. As a result, the smartphone 201 can detect distance measurement information more accurately.
  • the technology according to the present disclosure can be applied to various products.
  • For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 27 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
  • Radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
  • the body system control unit 12020 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the image pickup unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
  • The vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like, based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
  • the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects the in-vehicle information.
  • a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
  • The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing.
  • The microcomputer 12051 can calculate the control target value of the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and output a control command to the drive system control unit 12010.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including vehicle collision avoidance or impact mitigation, follow-up driving based on the inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane departure warning, and the like.
  • The microcomputer 12051 can also perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
  • For example, the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030, and perform cooperative control for the purpose of anti-glare, such as switching from high beam to low beam.
  • the audio image output unit 12052 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger of the vehicle or the outside of the vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
  • the display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
  • FIG. 28 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has image pickup units 12101, 12102, 12103, 12104, 12105 as the image pickup unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100, for example.
  • the image pickup unit 12101 provided on the front nose and the image pickup section 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the images in front acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 28 shows an example of the photographing range of the imaging units 12101 to 12104.
  • The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.
  • For example, the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104. Further, the microcomputer 12051 can set in advance the inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for the purpose of automated driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
  • For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract the data, and use it for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104.
  • Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not the object is a pedestrian.
  • When the microcomputer 12051 determines that a pedestrian is present in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a rectangular contour line for emphasizing the recognized pedestrian. Further, the audio image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
  • the above is an example of a vehicle control system to which the technology according to the present disclosure can be applied.
  • The technology according to the present disclosure can be applied to the vehicle exterior information detection unit 12030 and the in-vehicle information detection unit 12040 among the configurations described above. Specifically, by using distance measurement by the distance measuring module 11 in the vehicle exterior information detection unit 12030 and the in-vehicle information detection unit 12040, it is possible to perform processing for recognizing the driver's gestures, to execute various operations (for example, on an audio system, a navigation system, or an air conditioning system) according to those gestures, and to detect the driver's state more accurately. Further, distance measurement by the distance measuring module 11 can be used to recognize the unevenness of the road surface and reflect it in the control of the suspension.
  • Regarding the structure of the photodiode 51 of the light receiving unit 15, the present technology can be applied to a distance measuring sensor having a structure that distributes charges to two charge storage units, such as a distance measuring sensor having a CAPD (Current Assisted Photonic Demodulator) structure or a gate-type distance measuring sensor that alternately applies the charge of the photodiode to two gates.
  • the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
  • the configurations described above as a plurality of devices (or processing units) may be collectively configured as one device (or processing unit).
  • a configuration other than the above may be added to the configuration of each device (or each processing unit).
  • A part of the configuration of one device (or processing unit) may be included in the configuration of another device (or another processing unit).
  • In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), regardless of whether or not all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • The present technology can also have the following configurations.
  • (1) A signal processing device including a signal processing unit that performs processing for calculating the distance to an object based on pixel data obtained by a pixel that receives reflected light, the reflected light being irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned, wherein the signal processing unit has a characteristic calculation unit that calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • (2) The signal processing device according to (1), wherein the correction parameter is calculated based on a predetermined luminance model selected from a plurality of luminance models.
  • (3) The signal processing device according to (2), wherein the predetermined luminance model is a triangular wave.
  • (4) The signal processing device according to (2), wherein the predetermined luminance model is a sine wave.
  • (5) The signal processing device according to (2), wherein the predetermined luminance model is a harmonic.
  • (6) The signal processing device according to (5), wherein the characteristic calculation unit estimates, by machine learning, a luminance function representing the luminance waveform assumed to be a harmonic for each of the first charge detection unit and the second charge detection unit, and calculates the correction parameter.
  • (7) The signal processing device according to (6), wherein a learner of the machine learning learns the luminance function corresponding to the first charge detection unit or the second charge detection unit, and the characteristic calculation unit trains the learner so that the difference between the input of the learner and the value of the luminance function obtained by learning becomes small.
  • (8) The signal processing device according to (7), wherein the characteristic calculation unit performs learning based on a condition that a first luminance function corresponding to the first charge detection unit and a second luminance function corresponding to the second charge detection unit are the same.
  • (9) The signal processing device according to any one of (1) to (8), wherein the characteristic calculation unit calculates the correction parameters for a plurality of luminance models, the signal processing device further including a selection unit that selects, from among the correction parameters of the plurality of luminance models, the correction parameter having a small error with respect to the phase shift amount calculated from the input image.
  • (10) The signal processing device according to any one of (1) to (9), further including an evaluation unit that evaluates the correction parameter calculated by the characteristic calculation unit.
  • (11) The signal processing device according to any one of (1) to (10), wherein the signal processing unit further includes a distance calculation unit that calculates the distance to the object based on the pixel data corrected by the correction parameter calculated by the characteristic calculation unit.
  • (12) A signal processing method in which a signal processing device that calculates the distance to an object based on pixel data obtained by a pixel that receives reflected light, the reflected light being irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned, calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  • (13) A distance measuring module including a predetermined light emitting source, and a distance measuring sensor having a pixel that receives reflected light, which is irradiation light emitted from the light emitting source, reflected by an object, and returned, wherein the distance measuring sensor has a characteristic calculation unit that calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.

Abstract

The present technology relates to a signal processing device, a signal processing method, and a range-finding module which make it possible to more appropriately correct variations in the characteristics of taps. This signal processing device comprises a signal processing unit that performs a process for computing the distance to an object on the basis of pixel data obtained from a pixel receiving reflected light, which is illumination light that has been emitted from a prescribed light source and has reflected off of the object and returned. The signal processing unit has a characteristics calculation unit that computes a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel on the basis of a luminance model corresponding to the shape of the light emission waveform of the illumination light and the shape of the exposure waveform of the pixel. The present technology can be applied to, for example, a range-finding module that measures the distance to a subject.

Description

Signal processing device, signal processing method, and ranging module
The present technology relates to a signal processing device, a signal processing method, and a ranging module, and more particularly to a signal processing device, a signal processing method, and a ranging module capable of more appropriately correcting characteristic variations between taps.

In recent years, advances in semiconductor technology have led to the miniaturization of ranging modules that measure the distance to an object. As a result, it has become possible, for example, to mount a ranging module on a mobile terminal such as a so-called smartphone.

Smartphones are equipped with, for example, a ranging module that measures distance by the Indirect ToF (Time of Flight) method. The Indirect ToF method is a method in which irradiation light is emitted toward an object, the reflected light that is reflected by the surface of the object and returned is detected, and the distance to the object is calculated based on the flight time from when the irradiation light is emitted until the reflected light is received.

In the Indirect ToF method, with reference to the irradiation timing of the irradiation light, the light receiving sensor receives the reflected light at light receiving timings whose phases are shifted by 0°, 90°, 180°, and 270°. A method of calculating the distance to an object using four phase images detected at four different phases with respect to the irradiation timing of the irradiation light is called the 4Phase method or the like.

On the other hand, there is also a method called the 2Phase method, in which one pixel has two charge storage units, a first tap and a second tap, and the received charge is alternately distributed to the first tap and the second tap, so that two mutually inverted phases can be detected in one phase image and the distance to the object is calculated using two phase images.

The 2Phase method has the advantage that the distance can be calculated with half the number of phase images required by the 4Phase method. On the other hand, since the first tap and the second tap, which are the two charge storage units of each pixel, have a characteristic variation between taps (a sensitivity difference between taps), it is necessary to correct the characteristic variation between taps.

For example, Patent Document 1 proposes a method of calculating the offset of each tap by prior measurement to reduce the error in distance calculation.
Patent Document 1: Japanese Unexamined Patent Publication No. 2019-191118
However, in the method disclosed in Patent Document 1, the correction formula for correcting the characteristic variation between taps becomes a formula that divides by a value close to 0 when the amplitude of the received light waveform is small, and therefore it cannot be said to be sufficient in terms of calculation stability and noise tolerance.

The present technology has been made in view of such circumstances, and makes it possible to more appropriately correct the characteristic variation between taps in a light receiving sensor having two charge storage units per pixel.

A signal processing device according to a first aspect of the present technology includes a signal processing unit that performs processing for calculating the distance to an object based on pixel data obtained by a pixel that receives reflected light, which is irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned, and the signal processing unit has a characteristic calculation unit that calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.

In a signal processing method according to a second aspect of the present technology, a signal processing device that performs processing for calculating the distance to an object based on pixel data obtained by a pixel that receives reflected light, which is irradiation light emitted from a predetermined light emitting source, reflected by the object, and returned, calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.

A ranging module according to a third aspect of the present technology includes a predetermined light emitting source and a ranging sensor having a pixel that receives reflected light, which is irradiation light emitted from the light emitting source, reflected by an object, and returned, and the ranging sensor has a characteristic calculation unit that calculates a correction parameter for correcting the characteristics of a first charge detection unit and a second charge detection unit of the pixel based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.

In the first to third aspects of the present technology, a correction parameter for correcting the characteristics of the first charge detection unit and the second charge detection unit of the pixel is calculated based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.

The signal processing device and the ranging module may be independent devices, or may be modules incorporated in other devices.
FIG. 1 is a block diagram showing a schematic configuration example of a ranging module to which the present technology is applied.
FIG. 2 is a block diagram showing a detailed configuration example of a light receiving unit.
FIG. 3 is a diagram explaining the operation of a pixel.
FIGS. 4 to 6 are diagrams explaining the 2Phase method and the 4Phase method.
FIG. 7 is a block diagram showing a first configuration example of a signal processing unit.
FIG. 8 is a diagram showing examples of the luminance model determined by a model determination unit.
FIG. 9 is a conceptual diagram of the luminance function when the luminance model is assumed to be a triangular wave.
FIG. 10 is a conceptual diagram of the correction processing performed by a signal correction unit when the luminance model is assumed to be a triangular wave.
FIG. 11 is a conceptual diagram of the luminance function when the luminance model is assumed to be a sine wave.
FIG. 12 is a conceptual diagram of the correction processing performed by the signal correction unit when the luminance model is assumed to be a sine wave.
FIG. 13 is a diagram explaining a learning process for learning a harmonic luminance model.
FIG. 14 is a conceptual diagram of the operation for obtaining a second evaluation function L_2.
FIG. 15 is a flowchart explaining the measurement processing by the ranging module.
FIG. 16 is a detailed flowchart of the correction parameter estimation processing executed in step S2 of FIG. 15.
FIG. 17 is a detailed flowchart of the distance measurement processing by the 2Phase method executed in step S3 of FIG. 15.
FIG. 18 is a flowchart of processing for calculating the offset and the gain using machine learning.
FIG. 19 is a block diagram showing a second configuration example of the signal processing unit.
FIG. 20 is a flowchart of the measurement processing of the ranging module when the second configuration example of the signal processing unit is adopted.
FIG. 21 is a block diagram showing a third configuration example of the signal processing unit.
FIG. 22 is a diagram explaining the fixed pattern noise evaluation processing by an evaluation unit.
FIG. 23 is a flowchart of the measurement processing of the ranging module when the third configuration example of the signal processing unit is adopted.
FIG. 24 is a detailed flowchart of the fixed pattern noise evaluation processing executed in step S105 of FIG. 23.
FIG. 25 is a perspective view showing a chip configuration example of a ranging sensor.
FIG. 26 is a block diagram showing a configuration example of a smartphone as an electronic device equipped with a ranging module.
FIG. 27 is a block diagram showing an example of a schematic configuration of a vehicle control system.
FIG. 28 is an explanatory diagram showing an example of installation positions of a vehicle exterior information detection unit and an imaging unit.
Hereinafter, embodiments for carrying out the present technology (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. Schematic configuration example of the ranging module
2. Distance calculation principle of the Indirect ToF method
3. First configuration example of the signal processing unit
4. Correction parameter calculation processing when the luminance waveform is a triangular wave
5. Correction parameter calculation processing when the luminance waveform is a sine wave
6. Correction parameter calculation processing when the luminance waveform is a harmonic
7. Flowchart of the measurement processing by the ranging module
8. Second configuration example of the signal processing unit
9. Flowchart of the measurement processing using the second configuration example
10. Third configuration example of the signal processing unit
11. Flowchart of the measurement processing using the third configuration example
12. Chip configuration example of the ranging sensor
13. Configuration example of an electronic device
14. Application examples to moving bodies
<1. Schematic configuration example of the ranging module>
FIG. 1 is a block diagram showing a schematic configuration example of a ranging module to which the present technology is applied.
The ranging module 11 shown in FIG. 1 is a ranging module that performs distance measurement by the Indirect ToF method, and has a light emitting unit 12 and a ranging sensor 13. The ranging module 11 irradiates an object with light (irradiation light) and receives the light (reflected light) reflected by the object, thereby generating and outputting a depth map as distance information to the object. The ranging sensor 13 includes a control unit 14, a light receiving unit 15, and a signal processing unit 16.

The light emitting unit 12 includes, as a light emitting source, for example, a VCSEL array in which a plurality of VCSELs (Vertical Cavity Surface Emitting Lasers) are arranged in a plane, emits light while being modulated at a timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light.

The control unit 14 controls the operation of the entire ranging module 11. For example, the control unit 14 controls the light emitting unit 12 by supplying a light emission control signal of a predetermined frequency (for example, 200 MHz) to the light emitting unit 12. The control unit 14 also supplies the light emission control signal to the light receiving unit 15 in order to drive the light receiving unit 15 in accordance with the timing of light emission in the light emitting unit 12.

The light receiving unit 15 receives the reflected light from the object with a pixel array 32 in which a plurality of pixels 31 are two-dimensionally arranged, as will be described in detail later with reference to FIG. 2. The light receiving unit 15 then supplies pixel data composed of detection signals corresponding to the amount of received reflected light to the signal processing unit 16 in units of the pixels 31 of the pixel array 32.

The signal processing unit 16 calculates a depth value, which is the distance from the ranging module 11 to the object, based on the pixel data supplied from the light receiving unit 15 for each pixel 31 of the pixel array 32, generates a depth map in which the depth value is stored as the pixel value of each pixel 31, and outputs the depth map to the outside of the module.
<2. Distance calculation principle of the Indirect ToF method>
Next, the distance calculation principle of the Indirect ToF method will be described while explaining a detailed configuration example of the light receiving unit 15.
FIG. 2 is a block diagram showing a detailed configuration example of the light receiving unit 15.

The light receiving unit 15 has a pixel array 32 in which pixels 31, each of which generates a charge corresponding to the amount of received light and outputs a detection signal corresponding to that charge, are two-dimensionally arranged in a matrix in the row direction and the column direction, and a drive control circuit 33 arranged in the peripheral region of the pixel array 32.

The drive control circuit 33 outputs control signals for controlling the drive of the pixels 31 (for example, a distribution signal DIMIX, a selection signal ADDRESS DECODE, a reset signal RST, and the like, which will be described later) based on, for example, the light emission control signal supplied from the control unit 14.

The pixel 31 has a photodiode 51 as a photoelectric conversion unit that generates a charge corresponding to the amount of received light, a first tap 52A (first charge detection unit) that detects the charge generated by the photodiode 51, and a second tap 52B (second charge detection unit). In the pixel 31, the charge generated by the single photodiode 51 is distributed to the first tap 52A or the second tap 52B. Of the charge generated by the photodiode 51, the charge distributed to the first tap 52A is output as a detection signal A from a signal line 53A, and the charge distributed to the second tap 52B is output as a detection signal B from a signal line 53B.

The first tap 52A is composed of a transfer transistor 41A, an FD (Floating Diffusion) unit 42A, a selection transistor 43A, and a reset transistor 44A. Similarly, the second tap 52B is composed of a transfer transistor 41B, an FD unit 42B, a selection transistor 43B, and a reset transistor 44B.
As shown in FIG. 3, for example, irradiation light modulated so as to repeat irradiation on/off with an irradiation time T (one cycle = 2T) is output from the light emitting unit 12, and the reflected light is received by the photodiode 51 with a delay of a delay time ΔT corresponding to the distance to the object. It is assumed that the waveform of the reflected light is the same as the emission waveform of the irradiation light except for the phase delay (delay time ΔT) corresponding to the distance to the object.

The distribution signal DIMIX_A controls the on/off of the transfer transistor 41A, and the distribution signal DIMIX_B controls the on/off of the transfer transistor 41B. The distribution signal DIMIX_A in FIG. 3 is a signal having the same phase as the irradiation light, and the distribution signal DIMIX_B has a phase obtained by inverting the distribution signal DIMIX_A.

Therefore, in FIG. 2, the charge generated by the photodiode 51 receiving the reflected light is transferred to the FD unit 42A while the transfer transistor 41A is on according to the distribution signal DIMIX_A, and is transferred to the FD unit 42B while the transfer transistor 41B is on according to the distribution signal DIMIX_B. As a result, during a predetermined period in which the irradiation light of the irradiation time T is periodically emitted, the charge transferred via the transfer transistor 41A is sequentially accumulated in the FD unit 42A, and the charge transferred via the transfer transistor 41B is sequentially accumulated in the FD unit 42B.

Then, after the end of the charge accumulation period, when the selection transistor 43A is turned on according to the selection signal ADDRESS DECODE_A, the charge accumulated in the FD unit 42A is read out via the signal line 53A, and the detection signal A corresponding to that charge amount is output from the light receiving unit 15. Similarly, when the selection transistor 43B is turned on according to the selection signal ADDRESS DECODE_B, the charge accumulated in the FD unit 42B is read out via the signal line 53B, and the detection signal B corresponding to that charge amount is output from the light receiving unit 15. The charge accumulated in the FD unit 42A is discharged when the reset transistor 44A is turned on according to the reset signal RST_A, and the charge accumulated in the FD unit 42B is discharged when the reset transistor 44B is turned on according to the reset signal RST_B.

In this way, the pixel 31 distributes the charge generated by the reflected light received by the photodiode 51 to the first tap 52A or the second tap 52B according to the delay time ΔT, and outputs the detection signal A and the detection signal B as pixel data.
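For reference, the distribution of charge to the two taps can be illustrated with the following minimal simulation sketch in Python. The ideal rectangular light and gating waveforms, the absence of noise and ambient light, and the chosen sample counts are assumptions made only for illustration; they are not part of the configuration described above.

```python
import numpy as np

def simulate_taps(delay_ratio, n_samples=1000, n_cycles=100):
    """Integrate an ideal rectangular reflected light into tap A and tap B.

    delay_ratio: delay time ΔT expressed as a fraction of the irradiation time T
                 (0.0 <= delay_ratio < 1.0), an illustrative assumption.
    Returns (detection_signal_A, detection_signal_B) in arbitrary units.
    """
    t = np.linspace(0.0, 2.0, n_samples, endpoint=False)   # one period 2T, with T = 1
    reflected = ((t - delay_ratio) % 2.0) < 1.0             # light on for time T, delayed by ΔT
    dimix_a = t < 1.0                                        # DIMIX_A: same phase as irradiation
    dimix_b = ~dimix_a                                       # DIMIX_B: inverted phase
    a = n_cycles * reflected[dimix_a].sum() / n_samples
    b = n_cycles * reflected[dimix_b].sum() / n_samples
    return a, b

# Example: the larger the delay ΔT, the more charge shifts from tap A to tap B.
print(simulate_taps(0.0))   # all charge accumulated in tap A
print(simulate_taps(0.5))   # charge split between tap A and tap B
```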
The signal processing unit 16 calculates the depth value based on the detection signal A and the detection signal B supplied as pixel data from each pixel 31. As methods for calculating the depth value, there are a 2Phase method using detection signals of two phases and a 4Phase method using detection signals of four phases.
The 2Phase method and the 4Phase method will now be described.

In the 4Phase method, as shown in FIG. 4, the light receiving unit 15 receives (exposes) the reflected light at light receiving timings whose phases are shifted by 0°, 90°, 180°, and 270° with reference to the irradiation timing of the irradiation light. More specifically, the light receiving unit 15 receives light with the phase set to 0° with respect to the irradiation timing of the irradiation light in a certain frame period, receives light with the phase set to 90° in the next frame period, receives light with the phase set to 180° in the frame period after that, and receives light with the phase set to 270° in the following frame period, thus receiving the reflected light while changing the phase in a time-division manner.

The phase of the light receiving timing set in advance with reference to the irradiation timing of the irradiation light in this way is referred to as the set phase. Unless otherwise specified, the set phase of 0°, 90°, 180°, or 270° represents the set phase of the first tap 52A of the pixel 31. Since the second tap 52B has a phase inverted from that of the first tap 52A, when the first tap 52A has a set phase of 0°, 90°, 180°, or 270°, the second tap 52B has a set phase of 180°, 270°, 0°, or 90°, respectively.

FIG. 5 is a diagram showing the exposure periods of the first tap 52A of the pixel 31 at the set phases of 0°, 90°, 180°, and 270° side by side so that the difference between the set phases can be easily understood.

As shown in FIG. 5, the reflected light is received by the light receiving unit 15 in a state in which its phase is delayed from the irradiation timing of the irradiation light by the time ΔT corresponding to the distance to the object. The phase difference generated according to the distance to the object is also referred to as the distance phase to distinguish it from the set phase.
 第1タップ52Aにおいて、照射光と同一の設定位相(設定位相0°)で受光して得られる検出信号Iを検出信号I(0)、照射光と90度ずらした設定位相(設定位相90°)で受光して得られる検出信号Iを検出信号I(90)、照射光と180度ずらした設定位相(設定位相180°)で受光して得られる検出信号Iを検出信号I(180)、照射光と270度ずらした設定位相(設定位相270°)で受光して得られる検出信号Iを検出信号I(270)、と呼ぶことにする。 In the first tap 52A, the detection signal obtained by receiving the irradiation light and the same phase setting (set phase 0 °) I A detection signal I A (0), the irradiation light 90 degrees shifted by the set phase (phase setting 90 detection signal I a (90 a detection signal I a obtained by receiving in °)), the detection signal of the detection signals I a obtained by receiving the irradiation light 180 degrees shifted by the set phase (phase setting 180 °) I a (180), the detection signal I a (270) a detection signal I a obtained by receiving the irradiation light and the 270-degree shifted by the set phase (phase setting 270 °), is referred to as.
 また、図示は省略するが、第2タップ52Bにおいて、照射光と同一の設定位相(設定位相0°)で受光して得られる検出信号Iを検出信号I(0)、照射光と90度ずらした設定位相(設定位相90°)で受光して得られる検出信号Iを検出信号I(90)、照射光と180度ずらした設定位相(設定位相180°)で受光して得られる検出信号Iを検出信号I(180)、照射光と270度ずらした設定位相(設定位相270°)で受光して得られる検出信号Iを検出信号I(270)、と呼ぶことにする。 Although not shown, the second tap 52B, detection signal I B (0) to detection signals I B obtained by receiving the same phase setting and illumination light (set phase 0 °), the irradiation light 90 detection signal a detection signal I B obtained by receiving in degrees staggered phase setting (set phase 90 °) I B (90) , is received by the irradiation light 180 degrees shifted by the set phase (phase setting 180 °) to give detection signal I B a detection signal I B to be (180), the detection signal I B a detection signal I B obtained by receiving the irradiation light and the 270-degree shifted by the set phase (phase setting 270 °) (270), referred to as I will decide.
 FIG. 5 is a diagram for explaining the method of calculating the depth value d by the 2Phase method and the 4Phase method.
 In the Indirect ToF method, the depth value d can be obtained by the following equation (1).
  d = (c・ΔT)/2 = (c・φ)/(4πf) ・・・・・・・・・・(1)
 In equation (1), c is the speed of light, ΔT is the delay time, and f is the modulation frequency of the light. Further, φ in equation (1) is the phase shift amount [rad] of the reflected light, that is, the distance phase, and is expressed by the following equation (2).
  φ = tan⁻¹(Q/I) ・・・・・・・・・・(2)
 In the 4Phase method, I and Q of equation (2) are calculated by the following equation (3) using the detection signals I_A(0) to I_A(270) and the detection signals I_B(0) to I_B(270) obtained at the set phases of 0°, 90°, 180°, and 270°. I and Q are signals obtained by assuming that the luminance change of the irradiation light is a sine wave and converting the phase of that sine wave from polar coordinates to a Cartesian coordinate system (the IQ plane).
  I = c_0 - c_180
    = (I_A(0) - I_B(0)) - (I_A(180) - I_B(180))
  Q = c_90 - c_270
    = (I_A(90) - I_B(90)) - (I_A(270) - I_B(270))  ・・・・・・・・・・(3)
 In the 4Phase method, by taking the difference between detection signals of opposite phases in the same pixel, such as "I_A(0) - I_A(180)" and "I_A(90) - I_A(270)" in equation (3), the characteristic variation between the taps present in each pixel, that is, the fixed pattern noise, can be removed.
 On the other hand, in the 2Phase method, the depth value d to the object can be obtained using only two set phases in an orthogonal relationship among the detection signals I_A(0) to I_A(270) and I_B(0) to I_B(270) obtained at the set phases of 0°, 90°, 180°, and 270°. For example, when the detection signals I_A(0) and I_B(0) of the set phase 0° and the detection signals I_A(90) and I_B(90) of the set phase 90° are used, I and Q of equation (2) become the following equation (4).
  I = c_0 - c_180 = (I_A(0) - I_B(0))
  Q = c_90 - c_270 = (I_A(90) - I_B(90))  ・・・・・・・・・・(4)
 Similarly, when the detection signals I_A(180) and I_B(180) of the set phase 180° and the detection signals I_A(270) and I_B(270) of the set phase 270° are used, I and Q of equation (2) become the following equation (5).
  I = c_0 - c_180 = -(I_A(180) - I_B(180))
  Q = c_90 - c_270 = -(I_A(270) - I_B(270))  ・・・・・・・(5)
 In the 2Phase method, the characteristic variation between the taps present in each pixel cannot be removed, but since the depth value d to the object can be obtained from the detection signals of only two set phases, distance measurement can be performed at twice the frame rate of the 4Phase method.
 In other words, if the characteristic variation between the taps present in each pixel can be corrected appropriately, adopting the 2Phase method makes it possible to measure the distance to an object accurately and at high speed.
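 To make the relationship between equations (1) to (5) concrete, the following is a minimal sketch, in Python, of how a depth value could be computed from the detection signals. It assumes that equation (2) is evaluated with a four-quadrant arctangent and that the modulation frequency f is supplied by the caller; the function names and the dictionary-style arguments are illustrative and are not part of the embodiment.

import numpy as np

C_LIGHT = 299_792_458.0  # speed of light c [m/s]

def depth_from_iq(i, q, f_mod):
    # Equation (2): recover the distance phase phi from the IQ components,
    # then equation (1): convert phi to a depth value d.
    phi = np.arctan2(q, i) % (2.0 * np.pi)
    return C_LIGHT * phi / (4.0 * np.pi * f_mod)

def iq_4phase(i_a, i_b):
    # i_a, i_b: detection signals I_A(theta), I_B(theta), indexed by theta in {0, 90, 180, 270}.
    i = (i_a[0] - i_b[0]) - (i_a[180] - i_b[180])    # equation (3)
    q = (i_a[90] - i_b[90]) - (i_a[270] - i_b[270])
    return i, q

def iq_2phase(i_a, i_b):
    # Uses only the two orthogonal set phases 0 deg and 90 deg (equation (4)).
    return i_a[0] - i_b[0], i_a[90] - i_b[90]

 For example, depth_from_iq(*iq_2phase(i_a, i_b), f_mod=100e6) would give the 2Phase depth value for an assumed modulation frequency of 100 MHz.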
 As a method for correcting the characteristic variation between the taps, there is, for example, a method that assumes an offset c_0 and a gain c_1 as correction parameters for correcting the characteristic variation between the taps and calculates the offset c_0 and the gain c_1 by the least squares method.
 Specifically, since the light receiving periods of the first tap 32A and the second tap 32B of each pixel 21 are 180° out of phase, under ideal conditions the following relationships hold between the offset c_0 and the gain c_1 and the detection signals I_A(0) to I_A(270) and I_B(0) to I_B(270).
  I_A(0) = c_0 + c_1・I_B(180)
  I_A(90) = c_0 + c_1・I_B(270)
  I_A(180) = c_0 + c_1・I_B(0)
  I_A(270) = c_0 + c_1・I_B(90)     ・・・・・(6)
 If we define y = (I_A(0), I_A(90), I_A(180), I_A(270))ᵀ, x = (c_0, c_1)ᵀ, and A as the 4×2 matrix whose rows are (1, I_B(180)), (1, I_B(270)), (1, I_B(0)), and (1, I_B(90)), then equation (6) can be expressed as y = Ax, so the matrix x, that is, the offset c_0 and the gain c_1, can be calculated by the least squares method according to the following equation (7).
  x = (AᵀA)⁻¹Aᵀy ・・・・・・・・・・(7)
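 As a rough illustration of this least squares formulation, the following Python sketch builds y and A directly from the four relations of equation (6) and solves for x = (c_0, c_1). Since the matrix definition itself is given only as an image in the original, the row ordering shown here is inferred from equation (6); np.linalg.lstsq is used in place of the explicit normal-equation form of equation (7).

import numpy as np

def offset_gain_least_squares(i_a, i_b):
    # i_a, i_b: detection signals I_A(theta), I_B(theta), indexed by theta in {0, 90, 180, 270}.
    y = np.array([i_a[0], i_a[90], i_a[180], i_a[270]], dtype=float)
    A = np.array([[1.0, i_b[180]],
                  [1.0, i_b[270]],
                  [1.0, i_b[0]],
                  [1.0, i_b[90]]])
    # Least squares solution of y = A x, i.e. x = (A^T A)^(-1) A^T y as in equation (7).
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    c0, c1 = x
    return c0, c1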
 However, the following expression (8) appears in the denominator of equation (7), and the value of expression (8) approaches zero when the amplitude of the emission waveform of the irradiation light is small. For example, when the amplitude of the emission waveform becomes 1/100, the value of expression (8) becomes roughly 1/8000. Therefore, the calculation of the offset c_0 and the gain c_1 becomes unstable for objects with low reflectance, for distant objects, or when the amount of emitted light is reduced to lower power consumption.
[Math. 5]
 Therefore, the signal processing unit 16 of the distance measuring sensor 13 in FIG. 1 corrects the characteristic variation between the taps more appropriately and calculates the distance to the object using the 2Phase method.
<3. First configuration example of the signal processing unit>
 FIG. 7 is a block diagram showing a first configuration example of the signal processing unit 16 of the distance measuring sensor 13 in FIG. 1.
 The signal processing unit 16 in FIG. 7 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, and a distance calculation unit 74.
 The luminance waveform detected by each pixel 31 of the light receiving unit 15 is the convolution of the emission waveform of the irradiation light output from the light emitting unit 12 and the exposure waveform used when each pixel 31 of the light receiving unit 15 performs exposure (light reception).
 Therefore, the model determination unit 71 assumes (predicts) the shape of the emission waveform of the irradiation light output from the light emitting unit 12 and the shape of the exposure waveform used when each pixel 31 of the light receiving unit 15 performs exposure (light reception), and determines a model of the luminance waveform observed by the light receiving unit 15 (a luminance model).
 FIG. 8 shows examples of the luminance models determined by the model determination unit 71.
 For example, as shown in FIG. 3, as the first luminance model (model 1), the model determination unit 71 assumes a rectangular wave as the shape of the emission waveform of the irradiation light and a rectangular wave as the shape of the exposure waveform, and assumes that the luminance waveform observed by the light receiving unit 15 is a triangular wave.
 Also, for example, as the second luminance model (model 2), the model determination unit 71 assumes a sine wave as the shape of the emission waveform of the irradiation light and a rectangular wave as the shape of the exposure waveform, and assumes that the luminance waveform observed by the light receiving unit 15 is a sine wave.
 Alternatively, when the shape of the emission waveform of the irradiation light and the shape of the exposure waveform cannot be assumed, the model determination unit 71 assumes, as the third luminance model (model 3), that the luminance waveform observed by the light receiving unit 15 is a harmonic waveform.
 The model determination unit 71 determines, for example, from among a plurality of luminance models such as those in FIG. 3, a luminance model that models the luminance waveform observed by the light receiving unit 15, and supplies it to the characteristic calculation unit 72. If the emission waveform is known, for example from an initial setting made by the user, the luminance model may be determined on the basis of that setting value.
 The characteristic calculation unit 72 calculates the offset c_0 and the gain c_1, which are correction parameters for correcting the characteristic variation between the taps of each pixel 31, on the basis of the luminance waveform (luminance model) corresponding to the shape of the emission waveform and the shape of the exposure waveform. The calculated offset c_0 and gain c_1 are supplied to the signal correction unit 73.
 For example, when the model determination unit 71 has determined the first model as the luminance model, the characteristic calculation unit 72 calculates the correction parameters, the offset c_0 and the gain c_1, on the assumption that the luminance waveform is a triangular wave.
 When the model determination unit 71 has determined the second model as the luminance model, the characteristic calculation unit 72 calculates the offset c_0 and the gain c_1 on the assumption that the luminance waveform is a sine wave.
 When the model determination unit 71 has determined the third model as the luminance model, the characteristic calculation unit 72 calculates the offset c_0 and the gain c_1 on the assumption that the luminance waveform is a harmonic waveform.
 The detailed methods of calculating the correction parameters when the luminance waveform is assumed to be a triangular wave, a sine wave, or a harmonic waveform will be described later.
 The signal correction unit 73 corrects either the detection signal I_A(θ) of the first tap 52A or the detection signal I_B(θ) of the second tap 52B using the offset c_0 and the gain c_1 calculated by the characteristic calculation unit 72.
 For example, in accordance with equation (6) above, the signal correction unit 73 corrects the detection signal I_B(0) of the second tap 52B at the set phase 0° to generate the detection signal I"_A(180) of the first tap 52A at the set phase 180°, and corrects the detection signal I_B(90) of the second tap 52B at the set phase 90° to generate the detection signal I"_A(270) of the first tap 52A at the set phase 270°. In the following, a detection signal I(θ) converted using the correction parameters is written with a double prime, as in I"(θ).
  I"_A(180) = c_0 + c_1・I_B(0)
  I"_A(270) = c_0 + c_1・I_B(90)
 The distance calculation unit 74 calculates the depth value, which is the distance to the object, by the 2Phase method on the basis of the phase images of two set phases in an orthogonal relationship, specifically on the basis of the corrected pixel data supplied from the signal correction unit 73. The distance calculation unit 74 then generates a depth map in which the depth value is stored as the pixel value of each pixel 31, and outputs it to the outside of the module.
 The light receiving unit 15 of the distance measuring sensor 13 may receive light while sequentially changing the set phase to 0°, 90°, 180°, and 270°, or may, for example, receive light while alternately repeating only the set phases of 0° and 90°. The signal processing unit 16 can generate and output a depth map using two adjacent phase images (two set phases).
<4. Correction parameter calculation process when the luminance waveform is a triangular wave>
 Next, the calculation of the correction parameters by the characteristic calculation unit 72 when the model determination unit 71 has determined the first model and a triangular wave is assumed as the luminance model will be described.
 The characteristic calculation unit 72 is supplied from the light receiving unit 15 (FIG. 1) with four phase images obtained by sequentially setting the set phase θ to 0°, 90°, 180°, and 270°. Since the detection signals I_A(θ) and I_B(θ) of the reflected light received at the set phase θ by the first tap 52A and the second tap 52B of each pixel 31 of the pixel array 32 correspond to functions representing the luminance waveform, the detection signals I_A(θ) and I_B(θ) are also referred to as the luminance functions I_A(θ) and I_B(θ).
 FIG. 9 shows a conceptual diagram of the luminance functions I_A(0) to I_A(270) and I_B(0) to I_B(270) when the luminance model determined by the model determination unit 71 is the triangular wave of the first model.
 The characteristic calculation unit 72 calculates the offset c_0 and the gain c_1, which are correction parameters for correcting the characteristic variation between the taps, on the basis of the luminance functions I_A(0), I_A(90), I_A(180), and I_A(270) of the first tap 52A and the luminance functions I_B(0), I_B(90), I_B(180), and I_B(270) of the second tap 52B of each pixel 31, detected by the light receiving unit 15 while the set phase is sequentially set to the four phases of 0°, 90°, 180°, and 270°.
 When the luminance model is the triangular wave of the first model, the luminance function I_A(θ) detected by the first tap 52A follows a triangular wave defined by a center center_A, an amplitude amp_A, and an offset offset_A, as shown in FIG. 9.
 Similarly, the luminance function I_B(θ) detected by the second tap 52B follows a triangular wave defined by a center center_B, an amplitude amp_B, and an offset offset_B, as shown in FIG. 9.
 In this case, the offset c_0 and the gain c_1 can be calculated by the following equations (9) and (10).
[Math. 6]
 Here, the amplitude amp_A of the triangular wave of the luminance function I_A(θ) detected by the first tap 52A and the amplitude amp_B of the triangular wave of the luminance function I_B(θ) detected by the second tap 52B can be calculated by the following equations (11) and (12), as described, for example, in M. Schmidt, "Spatiotemporal Analysis of Range Imagery", Dissertation, Department of Physics and Astronomy, University of Heidelberg, 2008.
[Math. 7]
 Further, since the luminance function I(θ) of a triangular wave can be expressed by equation (13), the center of the triangular wave can be obtained by equation (18) from equations (14) to (17), which are obtained by setting θ = 0°, 90°, 180°, or 270°.
[Math. 8]
 Accordingly, the center center_A of the triangular wave of the luminance function I_A(θ) of the first tap 52A and the center center_B of the triangular wave of the luminance function I_B(θ) of the second tap 52B can be calculated by equations (19) and (20), respectively.
[Math. 9]
 The offset offset_A of the triangular wave of the luminance function I_A(θ) of the first tap 52A and the offset offset_B of the triangular wave of the luminance function I_B(θ) of the second tap 52B correspond to the minimum value of the triangular wave, and can therefore be obtained by subtracting the amplitude amp from the center center of the triangular wave, as follows.
  offset_A = center_A - amp_A  ・・・・・・・・・・(21)
  offset_B = center_B - amp_B  ・・・・・・・・・・(22)
 When a triangular wave is assumed as the luminance model, the characteristic calculation unit 72 calculates the amplitude amp_A of the triangular wave of the luminance function I_A(θ) detected by the first tap 52A and the amplitude amp_B of the triangular wave of the luminance function I_B(θ) detected by the second tap 52B by equations (11) and (12) above, and calculates the offset offset_A and the offset offset_B by equations (21) and (22) above. The characteristic calculation unit 72 then calculates the offset c_0 and the gain c_1 by equations (9) and (10) above.
 FIG. 10 shows a conceptual diagram of the correction processing performed by the signal correction unit 73 when the luminance model is assumed to be the triangular wave of the first model.
 The signal correction unit 73 corrects the detection signal I_B(0) of the second tap 52B at the set phase 0° using the offset c_0 and the gain c_1 to generate the detection signal I"_A(180) of the first tap 52A at the set phase 180°. The signal correction unit 73 also corrects the detection signal I_B(90) of the second tap 52B at the set phase 90° using the offset c_0 and the gain c_1 to generate the detection signal I"_A(270) of the first tap 52A at the set phase 270°.
 As a result, the detection signals I(θ) of all four phases are available, so the depth value can be calculated using equations (1), (2), and (4).
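 The following Python sketch summarizes the triangular-wave case. Because equations (9) to (12) and (18) to (20) appear only as images in the original, the closed forms used below (the amplitude as half the sum of the absolute phase-opposed differences, the center as the mean of the four samples, the gain as the ratio of amplitudes, and the offset relation c_0 = offset_A - c_1・offset_B) are assumptions consistent with the surrounding text, not the verbatim equations of the original.

def triangle_params(i):
    # i: luminance function samples I(theta) for theta in {0, 90, 180, 270}.
    center = (i[0] + i[90] + i[180] + i[270]) / 4.0            # assumed form of the center (eq. (18)-(20))
    amp = (abs(i[0] - i[180]) + abs(i[90] - i[270])) / 2.0     # assumed form of eq. (11)/(12)
    offset = center - amp                                      # eq. (21)/(22)
    return amp, center, offset

def correction_params_triangle(i_a, i_b):
    amp_a, _, off_a = triangle_params(i_a)
    amp_b, _, off_b = triangle_params(i_b)
    c1 = amp_a / amp_b          # assumed form of eq. (10): tap-A amplitude over tap-B amplitude
    c0 = off_a - c1 * off_b     # assumed form of eq. (9)
    return c0, c1

 With c0 and c1 in hand, the missing tap-A signals are generated as I"_A(180) = c0 + c1・I_B(0) and I"_A(270) = c0 + c1・I_B(90), as described above.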
<5. Correction parameter calculation process when the luminance waveform is a sine wave>
 Next, the calculation of the correction parameters by the characteristic calculation unit 72 when the model determination unit 71 has determined the second model and a sine wave is assumed as the luminance model will be described.
 When a sine wave is assumed as the luminance model, the arithmetic expressions for obtaining the amplitude amp_A and the amplitude amp_B differ from equations (11) and (12) used for the triangular wave described above. Specifically, when the luminance model is a sine wave, the amplitude amp_A and the amplitude amp_B are calculated by the following equations (23) and (24).
[Math. 10]
 When a sine wave is assumed as the luminance model, the characteristic calculation unit 72 calculates the amplitude amp_A of the sine wave of the luminance function I_A(θ) detected by the first tap 52A and the amplitude amp_B of the sine wave of the luminance function I_B(θ) detected by the second tap 52B by equations (23) and (24) above.
 The characteristic calculation unit 72 also calculates the center center_A of the sine wave of the luminance function I_A(θ) and the center center_B of the sine wave of the luminance function I_B(θ) by equations (19) and (20) above, and calculates the offset offset_A of the sine wave of the luminance function I_A(θ) and the offset offset_B of the sine wave of the luminance function I_B(θ) by equations (21) and (22) above. The characteristic calculation unit 72 then calculates the offset c_0 and the gain c_1 by equations (9) and (10) above.
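 For the sine-wave case, only the amplitude computation changes with respect to the triangular-wave sketch above. A common closed form for the amplitude of a sinusoid sampled at the four set phases is shown below; since equations (23) and (24) are given only as images in the original, this is an assumption rather than the verbatim formula of the original.

import math

def sine_amplitude(i):
    # i: luminance function samples I(theta) for theta in {0, 90, 180, 270}.
    # Assumed form of eq. (23)/(24): half the magnitude of the (I, Q) vector of the four samples.
    return math.hypot(i[0] - i[180], i[90] - i[270]) / 2.0

 The centers, offsets, and the correction parameters c_0 and c_1 can then be computed exactly as in the triangular-wave sketch.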
 FIG. 11 shows a conceptual diagram of the luminance functions I_A(0) to I_A(270) and I_B(0) to I_B(270) when the luminance model is assumed to be the sine wave of the second model.
 FIG. 12 shows a conceptual diagram of the correction processing performed by the signal correction unit 73 when the luminance model is assumed to be the sine wave of the second model.
 Even if the emission waveform is designed to be rectangular, the actual emission waveform may be distorted from a rectangle depending on, for example, the time constant of the light emitting circuit and the modulation frequencies of the emission waveform and the exposure waveform. In such a case, assuming the luminance model to be a sine wave rather than a triangular wave may be closer to the actual luminance waveform, and calculating the offset c_0 and the gain c_1 on the assumption that the luminance model is a sine wave may give a better distance calculation result.
<6. Correction parameter calculation process when the luminance waveform is a harmonic waveform>
 Next, the calculation of the correction parameters by the characteristic calculation unit 72 when the model determination unit 71 has determined the third model and a harmonic waveform is assumed as the luminance model will be described.
 In this case, the characteristic calculation unit 72 calculates the correction parameters using machine learning.
 First, the distance measuring module 11 acquires four phase images with the set phases of 0°, 90°, 180°, and 270° for various scenes (measurement targets). The acquired four phase images of the various scenes are accumulated in the characteristic calculation unit 72.
 The characteristic calculation unit 72 has neural network learners of the same configuration, one for the first tap 52A and one for the second tap 52B.
 FIG. 13 is a diagram for explaining the learning process for learning the luminance model of the harmonic waveform detected by the first tap 52A.
 The characteristic calculation unit 72 has a learner 81A that learns the luminance model of the harmonic waveform detected by the first tap 52A. The weight coefficients of the nodes of the learner 81A are W_A = {w_1, w_2, w_3, ...}.
 The luminance function I'_A(θ) when the luminance model is assumed to be a harmonic waveform is expressed by the following Fourier series expansion, equation (25).
  I'_A(θ) = c^A_0 + Σ_{m=1}^{M} { a^A_m・cos(mθ) + b^A_m・sin(mθ) }  ・・・・・・・・・・(25)
 Equation (25) expresses the luminance function I'_A(θ), assumed to be a harmonic waveform, in terms of the harmonic center c^A_0, cosine functions from the 1st to the M-th order, and sine functions from the 1st to the M-th order. Here, the order M (M is a natural number) is determined in advance.
 The input v^A_in of the learner 81A is the set of luminance values I_A(0), I_A(90), I_A(180), and I_A(270) of the first tap 52A of a given pixel 31 in the four accumulated phase images of a given scene at the set phases of 0°, 90°, 180°, and 270°. That is, v^A_in = {I_A(0), I_A(90), I_A(180), I_A(270)}.
 The output v^A_out of the learner 81A is the set of coefficients {a^A_1, ..., a^A_M} of the M cosine terms and the coefficients {b^A_1, ..., b^A_M} of the M sine terms of equation (25), the Fourier series expansion of the luminance function I'_A(θ). That is, v^A_out = {a^A_1, ..., a^A_M, b^A_1, ..., b^A_M}.
 When v^A_in = {I_A(0), I_A(90), I_A(180), I_A(270)} is fed to the input of the learner 81A, whose weight coefficients W_A have been set to some initial values in advance, the output v^A_out = {a^A_1, ..., a^A_M, b^A_1, ..., b^A_M} is obtained.
 The characteristic calculation unit 72 substitutes the output v^A_out = {a^A_1, ..., a^A_M, b^A_1, ..., b^A_M} obtained from the learner 81A into equation (25), and substitutes the average value of the accumulated inputs v^A_in of the various scenes for the harmonic center c^A_0, thereby restoring the luminance function I'_A(θ) assumed to be a harmonic waveform.
 If the restored harmonic luminance function I'_A(θ) represents the luminance function I_A(θ) well, then the values of I'_A(θ) at the set phases of 0°, 90°, 180°, and 270°, that is, the luminance values I'_A(0), I'_A(90), I'_A(180), and I'_A(270), should be the same as the luminance values I_A(0), I_A(90), I_A(180), and I_A(270), the values of the luminance function I_A(θ) used as the input of the learner 81A.
 Therefore, the characteristic calculation unit 72 performs learning so that the difference between the input v^A_in of the learner 81A and the values of the luminance function I'_A(θ) obtained from the output v^A_out becomes small. Specifically, the characteristic calculation unit 72 updates the weight coefficients W_A = {w^A_1, w^A_2, w^A_3, ...} of the nodes of the learner 81A so that the first evaluation function L^A_1 expressed by the following equation (26) becomes small. In equation (27), W_A' denotes the updated weight coefficients, and η_A is a coefficient representing the learning rate.
[Math. 12]
 By repeating the above update of the weight coefficients W_A a predetermined number of times using the luminance values I_A(0), I_A(90), I_A(180), and I_A(270) of the first tap 52A of the same pixel in the four phase images of the various accumulated scenes, the harmonic luminance function I'_A(θ) expressed by equation (25) is obtained.
 Although not illustrated, the characteristic calculation unit 72 performs the same learning for the second tap 52B.
 That is, the luminance function I'_B(θ) when the luminance model is assumed to be a harmonic waveform is expressed by the following Fourier series expansion, equation (28).
  I'_B(θ) = c^B_0 + Σ_{m=1}^{M} { a^B_m・cos(mθ) + b^B_m・sin(mθ) }  ・・・・・・・・・・(28)
 Equation (28) expresses the luminance function I'_B(θ), assumed to be a harmonic waveform, in terms of the harmonic center c^B_0, cosine functions from the 1st to the M-th order, and sine functions from the 1st to the M-th order. Here, the order M (M is a natural number) is determined in advance.
 Letting the weight coefficients of the nodes of a learner 81B (not illustrated), which learns the luminance model of the harmonic waveform detected by the second tap 52B, be W_B = {w^B_1, w^B_2, w^B_3, ...}, the characteristic calculation unit 72 feeds to the input v^B_in of the learner 81B the luminance values I_B(0), I_B(90), I_B(180), and I_B(270) of the second tap 52B of a given pixel 31 in the four accumulated phase images of a given scene at the set phases of 0°, 90°, 180°, and 270°. That is, v^B_in = {I_B(0), I_B(90), I_B(180), I_B(270)}.
 The output v^B_out of the learner 81B is the set of coefficients {a^B_1, ..., a^B_M} of the M cosine terms and the coefficients {b^B_1, ..., b^B_M} of the M sine terms of equation (28), the Fourier series expansion of the luminance function I'_B(θ). That is, v^B_out = {a^B_1, ..., a^B_M, b^B_1, ..., b^B_M}.
 When v^B_in = {I_B(0), I_B(90), I_B(180), I_B(270)} is fed to the input of the learner 81B, whose weight coefficients W_B have been set to some initial values in advance, the output v^B_out = {a^B_1, ..., a^B_M, b^B_1, ..., b^B_M} is obtained.
 The characteristic calculation unit 72 substitutes the output v^B_out = {a^B_1, ..., a^B_M, b^B_1, ..., b^B_M} obtained from the learner 81B into equation (28), and substitutes the average value of the accumulated inputs v^B_in of the various scenes for the harmonic center c^B_0, thereby restoring the luminance function I'_B(θ) assumed to be a harmonic waveform.
 The characteristic calculation unit 72 then performs learning so that the difference between the input v^B_in of the learner 81B and the values of the luminance function I'_B(θ) obtained from the output v^B_out becomes small. Specifically, the characteristic calculation unit 72 updates the weight coefficients W_B = {w^B_1, w^B_2, w^B_3, ...} of the nodes of the learner 81B so that the first evaluation function L^B_1 expressed by the following equation (29) becomes small. In equation (30), W_B' denotes the updated weight coefficients, and η_B is a coefficient representing the learning rate.
[Math. 14]
 By repeating the above update of the weight coefficients W_B a predetermined number of times using the luminance values I_B(0), I_B(90), I_B(180), and I_B(270) of the second tap 52B of the same pixel in the four phase images of the various accumulated scenes, the harmonic luminance function I'_B(θ) expressed by equation (28) is obtained.
 Since the restored luminance functions I'_A(θ) and I'_B(θ) differ in center and amplitude but are considered to share the characteristics of the same emission waveform and exposure waveform, the shapes of the functions obtained by whitening the luminance functions I'(θ) are considered to be identical. Therefore, as a constraint when updating the weight coefficients W of the nodes of the learners 81, a second evaluation function L_2 expressed by the following equation (31) may be added to the first evaluation functions L^A_1 and L^B_1 described above, and the weight coefficients W may be obtained so that the second evaluation function L_2 also becomes small. The update of the weight coefficients W in this case is expressed by equation (32).
[Math. 15]
 FIG. 14 shows a conceptual diagram of the computation for obtaining the second evaluation function L_2.
 By the above, the luminance function I'_A(θ) of the reflected light detected by the first tap 52A and the luminance function I'_B(θ) of the reflected light detected by the second tap 52B have been estimated.
 Next, using the estimated luminance function I'_A(θ) of the first tap 52A and the estimated luminance function I'_B(θ) of the second tap 52B, the characteristic calculation unit 72 calculates the gain c_1 by equation (33) and the offset c_0 by equation (34).
[Math. 16]
 When a harmonic waveform is assumed as the luminance model by the model determination unit 71, the characteristic calculation unit 72 can calculate the correction parameters using machine learning as described above.
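 The following Python sketch shows the two pieces of the harmonic model that the text specifies explicitly: the reconstruction of I'(θ) from the Fourier coefficients output by a learner (equation (25)/(28)), and a self-consistency error between the reconstructed and observed luminance values, corresponding in spirit to the first evaluation function of equation (26)/(29). The learner itself (its network structure and the exact forms of the evaluation function and the update rule) is not given in the text, so a squared error is assumed here and the gradient-based weight update is omitted.

import numpy as np

SET_PHASES = np.deg2rad([0.0, 90.0, 180.0, 270.0])

def reconstruct_harmonic(center, a, b, theta):
    # Equation (25)/(28): I'(theta) = center + sum_m { a_m*cos(m*theta) + b_m*sin(m*theta) }, m = 1..M.
    # center corresponds to the harmonic center c^A_0 or c^B_0.
    m = np.arange(1, len(a) + 1)
    arg = np.outer(theta, m)                       # shape (len(theta), M)
    return center + (a * np.cos(arg) + b * np.sin(arg)).sum(axis=1)

def self_consistency_error(center, a, b, observed):
    # observed: luminance values I(0), I(90), I(180), I(270) fed to the learner.
    # Assumed squared-error form of the first evaluation function (eq. (26)/(29)).
    predicted = reconstruct_harmonic(center, a, b, SET_PHASES)
    return float(np.sum((predicted - np.asarray(observed, dtype=float)) ** 2))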
 Note that the method of obtaining the harmonic waveform when the luminance model is assumed to be a harmonic waveform is not limited to machine learning, and other methods may be used. For example, a method such as that disclosed in Marvin Lindner and Andreas Kolb, "Lateral and Depth Calibration of PMD-Distance Sensors", International Symposium on Visual Computing, 2006, may be adopted. The method disclosed in that document captures a plurality of known depths and obtains the harmonic components from the differences between the known depths and the actually measured depths, thereby generating a lookup table for correcting the depth.
 By assuming the luminance model to be a harmonic waveform, the correction parameters can be calculated more accurately than with a simple model even when the luminance model is unknown.
<7. Flowchart of the measurement process by the distance measuring module>
 Next, the measurement process by the distance measuring module 11 of FIG. 1 will be described with reference to the flowchart of FIG. 15. This process is started, for example, when the control unit of a host device in which the distance measuring module 11 is incorporated instructs the start of measurement.
 First, in step S1, the control unit 14 of the distance measuring sensor 13 determines whether to estimate the correction parameters for correcting the characteristic variation between the taps of each pixel 31 of the pixel array 32 of the light receiving unit 15.
 For example, in calibration before shipment, it is determined that the correction parameters are to be estimated. Also, for example, the control can be such that the correction parameters are always estimated in the first measurement after the distance measuring module 11 is started, or that they are estimated every predetermined number of measurements.
 If it is determined in step S1 that the correction parameters are to be estimated, the process proceeds to step S2, and the distance measuring module 11 executes a correction parameter estimation process for estimating the offset c_0 and the gain c_1, which are the correction parameters of each pixel 31.
 On the other hand, if it is determined in step S1 that the correction parameters are not to be estimated, the correction parameter estimation process of step S2 is skipped, and the process proceeds to step S3. When the correction parameters are not estimated, correction parameters have already been set.
 In step S3, the distance measuring module 11 executes a distance measurement process for measuring the distance to the object by the 2Phase method using the correction parameters for correcting the characteristic variation between the taps of each pixel 31, and then ends the process.
 FIG. 16 is a detailed flowchart of the correction parameter estimation process executed in step S2 of the measurement process of FIG. 15.
 In this process, first, in step S11, the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 0° and supplies a light emission control signal to the light emitting unit 12 and the light receiving unit 15.
 In step S12, the distance measuring module 11 emits the irradiation light and receives the reflected light returned from the object. That is, the light emitting unit 12 emits light while modulating it at the timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light. The light receiving unit 15 receives the reflected light from the object and supplies pixel data composed of detection signals corresponding to the amount of received reflected light to the signal processing unit 16 in units of the pixels 31 of the pixel array 32.
 In step S13, the control unit 14 determines whether four phase images with the set phases of 0°, 90°, 180°, and 270° have been acquired.
 If it is determined in step S13 that the four phase images have not yet been acquired, the process proceeds to step S14, and the control unit 14 updates the set phase to a value incremented by 90° from the current value. After step S14, the process returns to step S12, and steps S12 and S13 described above are repeated.
 On the other hand, if it is determined in step S13 that the four phase images have been acquired, the process proceeds to step S15, and the characteristic calculation unit 72 of the signal processing unit 16 calculates the offset c_0 and the gain c_1, which are the correction parameters for correcting the characteristic variation between the taps of each pixel 31, on the basis of the luminance waveform corresponding to the shape of the emission waveform and the shape of the exposure waveform. The luminance model is determined before the processing of step S15 and has been supplied from the model determination unit 71 to the characteristic calculation unit 72.
 In step S15, the offset c_0 and the gain c_1 are calculated by the calculation methods described above for each of the cases where a triangular wave, a sine wave, or a harmonic waveform is assumed as the luminance model. The calculated offset c_0 and gain c_1 are supplied to the signal correction unit 73, the correction parameter estimation process of FIG. 16 ends, and the process returns to the measurement process of FIG. 15.
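 Step S15 branches on the luminance model decided by the model determination unit 71. A hypothetical dispatch, reusing the sketches from sections 4 to 6, could look as follows; the helper names correction_params_triangle, correction_params_sine, and correction_params_harmonic are illustrative and do not appear in the embodiment.

def estimate_correction_params(model, i_a, i_b):
    # model: "triangle", "sine", or "harmonic", as decided by the model determination unit 71.
    if model == "triangle":
        return correction_params_triangle(i_a, i_b)   # section 4 sketch
    if model == "sine":
        return correction_params_sine(i_a, i_b)       # as in section 5: sine amplitude, otherwise identical
    return correction_params_harmonic(i_a, i_b)       # section 6: machine-learning based estimate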
 In the correction parameter estimation process of FIG. 16, the correction parameters (the offset c_0 and the gain c_1) are calculated using only the four phase images from which one depth map can be generated. However, in order to reduce the influence of noise, the final correction parameters may be calculated using eight or more phase images corresponding to a plurality of depth maps, for example by averaging the plurality of correction parameters obtained from them.
 FIG. 17 is a detailed flowchart of the distance measurement process by the 2Phase method executed in step S3 of the measurement process of FIG. 15.
 In this process, first, in step S31, the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 0° and supplies a light emission control signal to the light emitting unit 12 and the light receiving unit 15.
 In step S32, the distance measuring module 11 emits the irradiation light and receives the reflected light returned from the object. That is, the light emitting unit 12 emits light while modulating it at the timing corresponding to the light emission control signal supplied from the control unit 14, and irradiates the object with the irradiation light. The light receiving unit 15 receives the reflected light from the object and supplies pixel data composed of detection signals corresponding to the amount of received reflected light to the signal processing unit 16 in units of the pixels 31 of the pixel array 32.
 In step S33, the control unit 14 determines whether two phase images with the set phases of 0° and 90° have been acquired.
 If it is determined in step S33 that the two phase images have not yet been acquired, the process proceeds to step S34, and the control unit 14 sets the phase difference (set phase) between the emission waveform and the exposure waveform to 90°. After step S34, the process returns to step S32, and steps S32 and S33 described above are repeated.
 On the other hand, if it is determined in step S33 that the two phase images have been acquired, the process proceeds to step S35, and the signal correction unit 73 of the signal processing unit 16 corrects the detection signals I_B(θ) of the second tap 52B in accordance with equation (6), using the offset c_0 and the gain c_1 calculated by the characteristic calculation unit 72.
 Specifically, the signal correction unit 73 calculates the following corrected signals I"_A(180) and I"_A(270) using the detection signals I_B(0) and I_B(90) of the second tap 52B of each pixel 31 acquired at the set phases of 0° and 90°.
  I"_A(180) = c_0 + c_1・I_B(0)
  I"_A(270) = c_0 + c_1・I_B(90)
 In step S36, the distance calculation unit 74 calculates, for each pixel 31 of the pixel array 32, the depth value, which is the distance to the object, by the 2Phase method using the two corrected phase images of the set phases 0° and 90° supplied from the signal correction unit 73. The distance calculation unit 74 then generates a depth map in which the depth value is stored as the pixel value of each pixel 31, and outputs it to the outside of the module.
 This concludes the distance measurement process of FIG. 17, and the measurement process of FIG. 15 as a whole also ends.
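 Steps S35 and S36 can be summarized in the following Python sketch. It assumes that the corrected signals I"_A(180) and I"_A(270) take the place of the tap-B terms when the IQ components are formed, and that a four-quadrant arctangent and a caller-supplied modulation frequency are used for the depth conversion; none of these names appear in the embodiment.

import numpy as np

def depth_2phase_corrected(i_a0, i_a90, i_b0, i_b90, c0, c1, f_mod):
    # Step S35: generate the missing tap-A signals from the tap-B signals (equation (6)).
    i_a180 = c0 + c1 * i_b0     # I"_A(180)
    i_a270 = c0 + c1 * i_b90    # I"_A(270)
    # Step S36: 2Phase IQ components using only measured and corrected tap-A signals,
    # then the depth value via equations (2) and (1).
    i = i_a0 - i_a180
    q = i_a90 - i_a270
    phi = np.arctan2(q, i) % (2.0 * np.pi)
    return 299_792_458.0 * phi / (4.0 * np.pi * f_mod)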
 The processing of calculating the offset c_0 and the gain c_1 in step S15 of FIG. 16 for the case where a harmonic waveform is assumed as the luminance model, that is, the processing of calculating the offset c_0 and the gain c_1 using machine learning, will now be described in more detail with reference to the flowchart of FIG. 18.
 In the following steps S51 to S56, the processing of the learner 81A, which learns the luminance model of the harmonic waveform detected by the first tap 52A, is described, but the same processing is also performed for the learner 81B, which learns the luminance model of the harmonic waveform detected by the second tap 52B.
 First, in step S51, the characteristic calculation unit 72 sets predetermined initial values for the weight coefficients W_A of the learner 81A corresponding to the first tap 52A.
 In step S52, the characteristic calculation unit 72 extracts, from the accumulated phase images of the plurality of scenes, the luminance values I_A(0), I_A(90), I_A(180), and I_A(270) of the first tap 52A of a given pixel 31 in the four phase images of one given scene at the set phases of 0°, 90°, 180°, and 270°, and sets them as the input v^A_in = {I_A(0), I_A(90), I_A(180), I_A(270)} of the learner 81A.
 In step S53, the characteristic calculation unit 72 obtains the output v^A_out = {a^A_1, ..., a^A_M, b^A_1, ..., b^A_M} of the learner 81A corresponding to the input v^A_in.
 In step S54, the characteristic calculation unit 72 substitutes the output v^A_out = {a^A_1, ..., a^A_M, b^A_1, ..., b^A_M} obtained from the learner 81A into equation (25), and substitutes the average value of the accumulated inputs v^A_in of the various scenes for the harmonic center c^A_0, thereby restoring the luminance function I'_A(θ) assumed to be a harmonic waveform.
 In step S55, the characteristic calculation unit 72 compares the luminance values I'_A(0), I'_A(90), I'_A(180), and I'_A(270), obtained by substituting the set phases of 0°, 90°, 180°, and 270° into the restored luminance function I'_A(θ), with the input v^A_in = {I_A(0), I_A(90), I_A(180), I_A(270)}, and updates the weight coefficients W_A = {w^A_1, w^A_2, w^A_3, ...} of the nodes of the learner 81A so that the first evaluation function L^A_1 expressed by equation (26) becomes small. The updated weight coefficients W_A' are thereby calculated.
 In step S56, the characteristic calculation unit 72 determines whether the weight coefficients W_A have been updated a prescribed number of times.
 Note that the determination of whether to end the updating of the weight coefficients W_A in step S56 need not be based on whether the update has been performed a prescribed number of times; the criterion may instead be, for example, whether the error between the luminance values I'_A(0), I'_A(90), I'_A(180), and I'_A(270) of the restored luminance function I'_A(θ) and the input v^A_in = {I_A(0), I_A(90), I_A(180), I_A(270)} has fallen within a predetermined range, or whether the difference between the weight coefficients W_A before and after the update is within a predetermined range.
 If it is determined in step S56 that the weight coefficients W_A have not yet been updated the prescribed number of times, the process returns to step S52, and the characteristic calculation unit 72 acquires, from the accumulated phase images of the plurality of scenes, the four phase images at the set phases of 0°, 90°, 180°, and 270° of one given scene that has not yet been selected, and repeats steps S52 to S56 described above.
 On the other hand, if it is determined in step S56 that the weight coefficients W_A have been updated the prescribed number of times, the process proceeds to step S57. As described above, the processing of steps S51 to S56 is also executed for the learner 81B, which learns the luminance model of the harmonic waveform detected by the second tap 52B, in the same way as the processing of the learner 81A.
 In step S57, the characteristic calculation unit 72 calculates the offset c_0 by equation (34) above and the gain c_1 by equation (33) above, using the calculated luminance function I'_A(θ) of the first tap 52A and the calculated luminance function I'_B(θ) of the second tap 52B.
 This completes the processing of calculating the offset c_0 and the gain c_1 using machine learning.
 According to the measurement process performed by the distance measuring module 11 using the first configuration example of the signal processing unit 16, the characteristic variation between the taps of each pixel 31 can be corrected more appropriately by calculating the correction parameters on the basis of the determined luminance model (luminance waveform).
<8. Second configuration example of signal processing unit>
 FIG. 19 is a block diagram showing a second configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG. 1.
 In FIG. 19, the parts corresponding to the first configuration example shown in FIG. 7 are designated by the same reference numerals, and description of those parts is omitted as appropriate.
 The signal processing unit 16 of FIG. 19 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, a distance calculation unit 74, and an optimum model selection unit 91.
 That is, in the second configuration example of the signal processing unit 16, an optimum model selection unit 91 is added to the first configuration example of the signal processing unit 16 shown in FIG. 7.
 In the first configuration example shown in FIG. 7, the model determination unit 71 determines one predetermined luminance model from among a plurality of selectable luminance models, and the characteristic calculation unit 72 calculates the correction parameters (offset c0 and gain c1) based on the one luminance model (luminance waveform) supplied from the model determination unit 71.
 In contrast, in the second configuration example of FIG. 19, the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72, and the characteristic calculation unit 72 calculates the correction parameters (offset c0 and gain c1) for every luminance model (luminance waveform). Here, N luminance models are stored in the model determination unit 71, and the offsets and gains calculated for the N luminance models are denoted (c0^1, c1^1), (c0^2, c1^2), ..., (c0^N, c1^N). The characteristic calculation unit 72 supplies the offset c0 and the gain c1 calculated for each of the N luminance models to the optimum model selection unit 91.
 The optimum model selection unit 91 calculates the phase shift amount φref of each pixel 31 by the 4Phase method, using the same four phase images with set phases 0°, 90°, 180°, and 270° as were used to calculate the offset and gain pairs (c0^1, c1^1), (c0^2, c1^2), ..., (c0^N, c1^N) of the N luminance models. That is, the phase shift amount φref of each pixel 31 is calculated by the above-described equations (3) and (2).
 Further, for each of the offset and gain pairs (c0^1, c1^1), (c0^2, c1^2), ..., (c0^N, c1^N), the optimum model selection unit 91 uses equation (6) to generate the detection signal I″A(180) of the first tap 52A for the set phase 180° and the detection signal I″A(270) of the first tap 52A for the set phase 270°.
  I″A(180) = c0 + c1·IB(0)
  I″A(270) = c0 + c1·IB(90)
 Furthermore, the optimum model selection unit 91 calculates the phase shift amounts φ(c0^1, c1^1), φ(c0^2, c1^2), ..., φ(c0^N, c1^N) of each pixel 31 by the 2Phase method, that is, by the above-described equations (4) and (2).
 Then, the optimum model selection unit 91 selects the optimum offset and gain pair (c0^opt, c1^opt) by comparing the errors between the phase shift amount φref and the phase shift amounts φ(c0^1, c1^1), φ(c0^2, c1^2), ..., φ(c0^N, c1^N). That is, the optimum model selection unit 91 calculates the following equation (35):
  (c0^opt, c1^opt) = argmin_(c0^n, c1^n) MSE[φref − φ(c0^n, c1^n)]   ... (35)
 In equation (35), MSE[] is a function that calculates the mean square error of the quantity in the brackets, and argmin is the operator that finds the (c0^n, c1^n) that minimizes MSE[]. Equation (35) therefore expresses that the pair (c0^n, c1^n) minimizing the mean square error of {φref − φ(c0^n, c1^n)} is determined to be (c0^opt, c1^opt).
 The function used to evaluate the error is not limited to the mean square error, and other functions may be used.
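 A compact sketch of how the selection of equation (35) could be carried out is shown below. It assumes, beyond what is stated here, that the 4Phase reference phase follows the standard arctangent relation over the four tap-A phase images and that the 2Phase phase is formed from the tap-A signals at set phases 0° and 90° together with the corrected signals I''A(180) and I''A(270); the function names are hypothetical.

```python
import numpy as np

def phase_4phase(I0, I90, I180, I270):
    # Assumed 4Phase relation standing in for equations (3) and (2).
    return np.arctan2(I90 - I270, I0 - I180)

def phase_2phase(IA0, IA90, IB0, IB90, c0, c1):
    IA180 = c0 + c1 * IB0      # corrected detection signal I''A(180)
    IA270 = c0 + c1 * IB90     # corrected detection signal I''A(270)
    # Assumed 2Phase relation standing in for equations (4) and (2).
    return np.arctan2(IA90 - IA270, IA0 - IA180)

def select_optimum(candidates, IA0, IA90, IA180, IA270, IB0, IB90):
    # candidates: list of (c0, c1) pairs, one per luminance model.
    phi_ref = phase_4phase(IA0, IA90, IA180, IA270)
    errors = [np.mean((phi_ref - phase_2phase(IA0, IA90, IB0, IB90, c0, c1)) ** 2)
              for (c0, c1) in candidates]            # MSE of equation (35)
    return candidates[int(np.argmin(errors))]        # (c0_opt, c1_opt)
```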
 The optimum model selection unit 91 supplies the selected offset c0^opt and gain c1^opt to the signal correction unit 73.
 The processing of the signal correction unit 73 and the distance calculation unit 74 is the same as in the first configuration example.
<9. Flowchart of measurement processing using the second configuration example>
 FIG. 20 is a flowchart of the measurement process of the distance measuring module 11 of FIG. 1 when the second configuration example of the signal processing unit 16 is adopted. This process is started, for example, when the control unit of the host device in which the distance measuring module 11 is incorporated instructs the start of measurement.
 First, in step S71, the control unit 14 of the distance measuring sensor 13 determines whether to estimate the correction parameters for correcting the characteristic variation between the taps of each pixel 31 of the pixel array 32 of the light receiving unit 15. The process of step S71 is the same as step S1 of the measurement process in the first configuration example described with reference to the flowchart of FIG. 15.
 If it is determined in step S71 that the correction parameters are to be estimated, the process proceeds to step S72, and the distance measuring module 11 executes the correction parameter estimation process of estimating the correction parameters for all the luminance models.
 That is, whereas the process of step S2 in the first configuration example estimates the correction parameters of the one predetermined luminance model determined by the model determination unit 71, the process of step S72 of the second configuration example differs in that the correction parameters of all the luminance models stored in the model determination unit 71 are estimated. The characteristic calculation unit 72 supplies the offsets c0 and gains c1 calculated for the N luminance models to the optimum model selection unit 91.
 Then, after step S72, the process proceeds to step S73, and the optimum model selection unit 91 selects the optimum luminance model from among the N luminance models. That is, by calculating equation (35), the optimum offset c0^opt and gain c1^opt are selected from among the correction parameters (offsets c0^n and gains c1^n) of the N luminance models.
 On the other hand, if it is determined in step S71 that the correction parameters are not to be estimated, the processes of steps S72 and S73 are skipped, and the process proceeds to step S74.
 In step S74, the distance measuring module 11 executes the distance measurement process of measuring the distance to the object by the 2Phase method using the correction parameters that correct the characteristic variation between the taps of each pixel 31, and ends the process.
 According to the measurement process by the distance measuring module 11 using the second configuration example of the signal processing unit 16, the optimum luminance model is selected from among various luminance models (luminance waveforms) and the correction parameters that correct the characteristic variation between the taps of each pixel 31 are calculated accordingly, so that the inter-tap characteristic variation can be corrected more appropriately.
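 Purely as an illustration of the control flow of FIG. 20 (steps S71 to S74), the sketch below strings the stages together; the callables passed in (estimate_parameters, select_optimal_model, measure_distance_2phase) are hypothetical stand-ins for the processing of the characteristic calculation unit 72, the optimum model selection unit 91, and the signal correction and distance calculation units.

```python
# Hypothetical orchestration of the measurement flow of FIG. 20 (steps S71-S74).
def measurement_process(need_estimation, luminance_models, phase_images,
                        estimate_parameters, select_optimal_model,
                        measure_distance_2phase, current_params):
    if need_estimation:                                   # step S71
        # step S72: correction parameters (c0, c1) for all N luminance models
        candidates = [estimate_parameters(model, phase_images)
                      for model in luminance_models]
        # step S73: pick the pair closest to the 4Phase reference (equation (35))
        current_params = select_optimal_model(candidates, phase_images)
    # step S74: 2Phase distance measurement with the selected parameters
    return measure_distance_2phase(phase_images, current_params), current_params
```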
<10. Third configuration example of signal processing unit>
 FIG. 21 is a block diagram showing a third configuration example of the signal processing unit 16 of the distance measuring sensor 13 of FIG. 1.
 In FIG. 21, the parts corresponding to the second configuration example shown in FIG. 19 are designated by the same reference numerals, and description of those parts is omitted as appropriate.
 The signal processing unit 16 of FIG. 21 includes a model determination unit 71, a characteristic calculation unit 72, a signal correction unit 73, a distance calculation unit 74, an optimum model selection unit 91, and an evaluation unit 101.
 That is, in the third configuration example of the signal processing unit 16, an evaluation unit 101 is further added to the second configuration example of the signal processing unit 16 shown in FIG. 19.
 The depth map calculated by the distance calculation unit 74 is supplied to the evaluation unit 101.
 The evaluation unit 101 evaluates whether the correction parameters currently in use are appropriate, using two depth maps that are consecutive in the time direction. If a correction parameter is not a value that appropriately corrects the characteristic variation between the taps of the pixel 31, the characteristic variation appears in the depth map as fixed pattern noise. The evaluation unit 101 therefore determines whether fixed pattern noise appears in the depth map.
 When the evaluation unit 101 determines that fixed pattern noise appears in the depth map, it instructs the model determination unit 71 to recalculate the correction parameters. When instructed by the evaluation unit 101 to recalculate the correction parameters, the model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72.
 The fixed pattern noise evaluation process performed by the evaluation unit 101 will be described with reference to FIG. 22.
 The depth map Dt-1 at time t-1 and the depth map Dt at time t are input to the evaluation unit 101.
 The evaluation unit 101 first applies a spatial filter f that smooths the image to each of the two temporally consecutive depth maps Dt-1 and Dt, and generates the filtered depth maps D't-1 and D't. Hereinafter, the filtered depth maps D't-1 and D't are referred to as the processed depth maps D't-1 and D't.
  D't-1 = f(Dt-1)
  D't = f(Dt)
 For the spatial filter f, an edge-preserving filter such as a Gaussian filter or a bilateral filter can be adopted, for example. Applying the spatial filter f reduces the influence of noise in the depth maps.
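 A minimal sketch of this smoothing step is given below, assuming a Gaussian filter is used as the spatial filter f; the actual filter choice and its parameters are not fixed by this description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_depth_maps(D_prev, D_curr, sigma=1.0):
    # Apply the spatial filter f to the two temporally consecutive depth maps.
    D_prev_s = gaussian_filter(np.asarray(D_prev, dtype=float), sigma=sigma)  # D't-1 = f(Dt-1)
    D_curr_s = gaussian_filter(np.asarray(D_curr, dtype=float), sigma=sigma)  # D't   = f(Dt)
    return D_prev_s, D_curr_s
```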
 Next, the evaluation unit 101 sequentially slides the positions of small regions gt-1 and gt, whose region size is set in advance, within the processed depth maps D't-1 and D't, respectively, and takes as the representative small region gs't, to be used in the fixed pattern noise evaluation, the small region gt for which both the variance of the depth values d detected at the pixels within the small region g and the difference in depth value d between the two processed depth maps are smallest. Expressed as a formula, this process can be written as equation (36).
  [Equation (36)]
 In equation (36), V() denotes the operator that calculates the variance.
 Next, the evaluation unit 101 extracts the small regions gs at the same region position as the representative small region gs't from each of the two depth maps Dt-1 and Dt, and takes them as the small regions gst-1 and gst.
 Using the small regions gst-1 and gst extracted from the two depth maps Dt-1 and Dt, the evaluation unit 101 calculates the following equation (37) and determines whether the sum of the difference between the small regions gst-1 and gst and the variance of the small region gst is larger than a predetermined threshold value Th. When the conditional expression of equation (37) is satisfied, the evaluation unit 101 determines that fixed pattern noise appears in the depth map Dt, that is, that the correction parameters are not values that appropriately correct the characteristic variation between the taps.
  [Equation (37)]
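 The following sketch puts equations (36) and (37) in code form under explicit assumptions: since the equations themselves appear only as images in the original, the "difference" between corresponding small regions is taken here as a mean absolute per-pixel difference, the two criteria of equation (36) are simply summed, and the window slides with a fixed step; the region size, step, and threshold are hypothetical parameters.

```python
import numpy as np

def find_representative_region(Dp_prev, Dp_curr, win=16, step=8):
    # Slide a small region over the processed maps D't-1 and D't and pick the
    # position where variance and inter-frame difference are both smallest (eq. (36)).
    best, best_pos = np.inf, (0, 0)
    h, w = Dp_curr.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            g_prev = Dp_prev[y:y + win, x:x + win]
            g_curr = Dp_curr[y:y + win, x:x + win]
            score = np.var(g_curr) + np.mean(np.abs(g_curr - g_prev))
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos

def has_fixed_pattern_noise(D_prev, D_curr, Dp_prev, Dp_curr, th, win=16):
    y, x = find_representative_region(Dp_prev, Dp_curr, win)
    gs_prev = D_prev[y:y + win, x:x + win]   # small region gst-1 from unfiltered Dt-1
    gs_curr = D_curr[y:y + win, x:x + win]   # small region gst from unfiltered Dt
    # Equation (37): difference between gst-1 and gst plus variance of gst vs. Th
    return np.mean(np.abs(gs_curr - gs_prev)) + np.var(gs_curr) > th
```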
<11. Flowchart of measurement processing using the third configuration example>
 FIG. 23 is a flowchart of the measurement process of the distance measuring module 11 of FIG. 1 when the third configuration example of the signal processing unit 16 is adopted. This process is started, for example, when the control unit of the host device in which the distance measuring module 11 is incorporated instructs the start of measurement.
 Since the processes of steps S101 to S104 in FIG. 23 are the same as the processes of steps S71 to S74 described with reference to FIG. 20, their description is omitted. The depth map calculated in step S104 is also supplied to the evaluation unit 101, and the evaluation unit 101 stores the two depth maps Dt-1 and Dt that are consecutive in the time direction.
 In step S105, the evaluation unit 101 executes the fixed pattern noise evaluation process of evaluating whether the correction parameters currently in use are appropriate, using the two temporally consecutive depth maps. Details of the fixed pattern noise evaluation process in step S105 will be described later with reference to the flowchart of FIG. 24.
 In step S106, the evaluation unit 101 uses the result of the fixed pattern noise evaluation process to determine whether fixed pattern noise has occurred in the depth maps Dt-1 and Dt. Specifically, the evaluation unit 101 determines whether the conditional expression of equation (37) is satisfied.
 If the conditional expression of equation (37) is satisfied in step S106 and it is determined that fixed pattern noise has occurred, the process proceeds to step S107, and the evaluation unit 101 instructs the model determination unit 71 to recalculate the correction parameters. The model determination unit 71 supplies all the stored luminance models to the characteristic calculation unit 72, and the characteristic calculation unit 72 executes the correction parameter estimation process of estimating the correction parameters for all the luminance models. The process of step S107 after the recalculation of the correction parameters is instructed is the same as step S102 and step S72 of FIG. 20.
 Then, in step S108, the optimum model selection unit 91 selects the optimum luminance model from among the N luminance models. The process of step S108 is the same as step S103 and step S73 of FIG. 20.
 On the other hand, if it is determined in step S106 that fixed pattern noise has not occurred, the processes of steps S107 and S108 are skipped, and the measurement process ends.
 FIG. 24 is a detailed flowchart of the fixed pattern noise evaluation process executed in step S105 of FIG. 23.
 First, in step S121, the evaluation unit 101 applies the spatial filter f that smooths the image to each of the two temporally consecutive depth maps Dt-1 and Dt, and generates the processed depth maps D't-1 and D't.
 In step S122, the evaluation unit 101 sets and slides the small regions gt-1 and gt within the processed depth maps D't-1 and D't, respectively, and sets the representative small region gs't. The representative small region gs't is the small region gt, expressed by equation (36), for which both the variance of the depth values d within the small region g and the difference in depth value d between the two processed depth maps are smallest.
 In step S123, the evaluation unit 101 extracts the small regions gs at the same region position as the representative small region gs't from each of the two depth maps Dt-1 and Dt, and takes them as the small regions gst-1 and gst.
 In step S124, the evaluation unit 101 uses the small regions gst-1 and gst extracted from the two depth maps Dt-1 and Dt to calculate the left-hand side of equation (37), that is, the sum of the difference between the small regions gst-1 and gst and the variance of the small region gst.
 After step S124, the process proceeds to step S106 of FIG. 23, and whether the conditional expression of equation (37) is satisfied is determined.
 This completes the measurement process of the distance measuring module 11 when the third configuration example of the signal processing unit 16 is adopted.
 According to the distance measuring module 11 including any of the first to third configuration examples of the signal processing unit 16 described above, the characteristic variation between taps can be corrected more appropriately. The signal processing unit 16 may be configured as any one of the first to third configuration examples described above, or may be configured so that the first to third configuration examples can be selectively executed. With such a signal processing unit 16, stable estimation of the offset c0 and the gain c1 can be expected even when the signal-to-noise ratio of the signal is not sufficiently high at the timing of correcting the characteristic variation between taps.
<12. Chip configuration example of distance measuring sensor>
 FIG. 25 is a perspective view showing a chip configuration example of the distance measuring sensor 13.
 For example, as shown in A of FIG. 25, the distance measuring sensor 13 can be configured as a single chip in which a sensor die 151 and a logic die 152 are stacked as a plurality of dies (substrates).
 A sensor unit 161 (as a circuit) is formed on the sensor die 151, and a logic unit 162 is formed on the logic die 152.
 The light receiving unit 15, for example, is formed in the sensor unit 161. The control unit 14, the signal processing unit 16, input/output terminals, and the like, for example, are formed in the logic unit 162.
 The distance measuring sensor 13 may also be configured with three layers in which another logic die is stacked in addition to the sensor die 151 and the logic die 152, or, of course, with a stack of four or more dies (substrates).
 Alternatively, as shown in B of FIG. 25, the distance measuring sensor 13 may be configured with a first chip 171 and a second chip 172, and a relay board (interposer board) 173 on which they are mounted.
 The light receiving unit 15, for example, is formed on the first chip 171. The control unit 14, the signal processing unit 16, and the like are formed on the second chip 172.
 Note that the circuit arrangement of the sensor die 151 and the logic die 152 in A of FIG. 25 and the circuit arrangement of the first chip 171 and the second chip 172 in B of FIG. 25 are merely examples and are not limited thereto. For example, the signal processing unit 16 that performs the depth map generation processing and the like may be provided outside the distance measuring sensor 13 (on a separate chip) as a signal processing device.
<13. Configuration example of electronic device>
 The distance measuring module 11 described above can be mounted on electronic devices such as smartphones, tablet terminals, mobile phones, personal computers, game machines, television receivers, wearable terminals, digital still cameras, and digital video cameras.
 FIG. 26 is a block diagram showing a configuration example of a smartphone as an electronic device equipped with a distance measuring module.
 As shown in FIG. 26, the smartphone 201 is configured by connecting a distance measuring module 202, an imaging device 203, a display 204, a speaker 205, a microphone 206, a communication module 207, a sensor unit 208, a touch panel 209, and a control unit 210 via a bus 211. The control unit 210 functions as an application processing unit 221 and an operation system processing unit 222 by the CPU executing a program.
 The distance measuring module 11 of FIG. 1 is applied to the distance measuring module 202. For example, the distance measuring module 202 is arranged on the front surface of the smartphone 201, and by performing distance measurement targeting the user of the smartphone 201, it can output the depth values of the surface shape of the user's face, hands, fingers, and the like as the distance measurement result. The distance measurement result obtained by the distance measuring module 202 can also be used to recognize the user's gestures.
 The imaging device 203 is arranged on the front surface of the smartphone 201 and acquires an image of the user of the smartphone 201 by imaging the user as a subject. Although not shown, the imaging device 203 may also be arranged on the back surface of the smartphone 201.
 The display 204 displays an operation screen for processing by the application processing unit 221 and the operation system processing unit 222, an image captured by the imaging device 203, and the like. The speaker 205 and the microphone 206 output the voice of the other party and pick up the voice of the user, for example, when a call is made with the smartphone 201.
 The communication module 207 performs communication via a communication network. The sensor unit 208 senses speed, acceleration, proximity, and the like, and the touch panel 209 acquires a touch operation by the user on the operation screen displayed on the display 204.
 The application processing unit 221 performs processing for providing various services by the smartphone 201. For example, the application processing unit 221 can create a computer-graphics face that virtually reproduces the user's facial expression based on the depth supplied from the distance measuring module 202, and can perform processing of displaying the face on the display 204. The application processing unit 221 can also perform processing of creating, for example, three-dimensional shape data of an arbitrary three-dimensional object based on the depth supplied from the distance measuring module 202.
 The operation system processing unit 222 performs processing for realizing the basic functions and operations of the smartphone 201. For example, the operation system processing unit 222 can perform processing of authenticating the user's face and unlocking the smartphone 201 based on the depth values supplied from the distance measuring module 202. The operation system processing unit 222 can also perform, for example, processing of recognizing the user's gestures based on the depth values supplied from the distance measuring module 202 and inputting various operations according to the gestures.
 In the smartphone 201 configured in this way, applying the distance measuring module 11 described above makes it possible, for example, to generate a depth map with high accuracy and at high speed. As a result, the smartphone 201 can detect distance measurement information more accurately.
<14. Application examples to mobile bodies>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 FIG. 27 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 27, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are shown.
 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, blinkers, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals of various switches, can be input to the body system control unit 12020. The body system control unit 12020 receives the input of these radio waves or signals, and controls the door lock device, power window device, lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like.
 The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light. The imaging unit 12031 can output the electric signal as an image, or can output it as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
 The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing.
 The microcomputer 12051 can calculate control target values of the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, following traveling based on the inter-vehicle distance, vehicle speed maintenance traveling, vehicle collision warning, vehicle lane departure warning, and the like.
 The microcomputer 12051 can also perform cooperative control aimed at automatic driving or the like in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
 The microcomputer 12051 can also output control commands to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at anti-glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching the high beam to the low beam.
 The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the passengers of the vehicle or the outside of the vehicle of information. In the example of FIG. 27, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as the output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 FIG. 28 is a diagram showing an example of the installation positions of the imaging unit 12031.
 In FIG. 28, the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 12100. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire images ahead of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires images behind the vehicle 12100. The forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
 FIG. 28 also shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 viewed from above can be obtained.
 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as the preceding vehicle, in particular the nearest three-dimensional object that is located on the traveling path of the vehicle 12100 and travels at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Furthermore, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and can perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. In this way, it is possible to perform cooperative control aimed at automatic driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify and extract three-dimensional object data relating to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, and use the data for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so as to superimpose and display a rectangular contour line for emphasis on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
 An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to the vehicle exterior information detection unit 12030 and the vehicle interior information detection unit 12040 among the configurations described above. Specifically, by using distance measurement by the distance measuring module 11 as the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, it is possible to perform processing of recognizing the driver's gestures, execute various operations (for example, of an audio system, a navigation system, or an air conditioning system) according to the gestures, and detect the driver's state more accurately. It is also possible to recognize unevenness of the road surface using the distance measurement by the distance measuring module 11 and reflect it in the control of the suspension.
 Note that the present technology can be applied to the method called the Continuous-Wave method among the Indirect ToF methods, in which the light projected onto the object is amplitude-modulated. As the structure of the photodiode 51 of the light receiving unit 15, the present technology can be applied to a distance measuring sensor having a structure that distributes charges to two charge accumulation units, such as a distance measuring sensor with a CAPD (Current Assisted Photonic Demodulator) structure or a gate-type distance measuring sensor that distributes the charges of the photodiode by alternately applying pulses to two gates.
 The embodiments of the present technology are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present technology.
 The plurality of present technologies described in this specification can each be implemented independently and alone as long as no contradiction arises. Of course, any plurality of the present technologies can also be implemented in combination. For example, part or all of the present technology described in any of the embodiments can be implemented in combination with part or all of the present technology described in other embodiments. Part or all of any of the present technologies described above can also be implemented in combination with other technologies not described above.
 Further, for example, the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units). Conversely, the configurations described above as a plurality of devices (or processing units) may be combined and configured as one device (or processing unit). A configuration other than those described above may of course be added to the configuration of each device (or each processing unit). Furthermore, as long as the configuration and operation of the system as a whole are substantially the same, part of the configuration of one device (or processing unit) may be included in the configuration of another device (or another processing unit).
 In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and one device in which a plurality of modules are housed in one housing, are both systems.
 Note that the effects described in this specification are merely examples and are not limited, and there may be effects other than those described in this specification.
The present technology can have the following configurations.
(1)
A signal processing device including a signal processing unit that performs processing of calculating a distance to an object on the basis of pixel data obtained by a pixel that receives reflected light, which is irradiation light emitted from a predetermined light emitting source and reflected by the object,
wherein the signal processing unit includes a characteristic calculation unit that calculates a correction parameter for correcting characteristics of a first charge detection unit and a second charge detection unit of the pixel on the basis of a luminance model corresponding to a shape of an emission waveform of the irradiation light and a shape of an exposure waveform of the pixel.
(2)
The signal processing device according to (1), wherein the characteristic calculation unit calculates the correction parameter based on a predetermined luminance model selected from a plurality of luminance models.
(3)
The signal processing device according to (2) above, wherein the predetermined luminance model is a triangular wave.
(4)
The signal processing device according to (2) above, wherein the predetermined luminance model is a sine wave.
(5)
The signal processing device according to (2) above, wherein the predetermined luminance model is a harmonic.
(6)
The signal processing device according to (5), wherein the characteristic calculation unit estimates, by machine learning, a luminance function representing a luminance waveform assumed to be a harmonic for each of the first charge detection unit and the second charge detection unit, and calculates the correction parameter.
(7)
The signal processing device according to (6), wherein the machine learning learner learns the luminance function corresponding to the first charge detection unit or the second charge detection unit, and the characteristic calculation unit trains the learner so that the difference between the input of the learner and the value of the luminance function obtained by learning becomes small.
(8)
The signal processing device according to (7), wherein the characteristic calculation unit trains the learner also on the basis of a condition that the first luminance function corresponding to the first charge detection unit and the second luminance function corresponding to the second charge detection unit are identical.
(9)
The signal processing device according to any one of (1) to (8), wherein the characteristic calculation unit calculates the correction parameter for each of a plurality of luminance models, and the signal processing device further includes a selection unit that selects, from among the correction parameters of the plurality of luminance models, the correction parameter having a small error with respect to a phase shift amount calculated from an input image.
(10)
The signal processing apparatus according to any one of (1) to (9), further comprising an evaluation unit for evaluating the correction parameter calculated by the characteristic calculation unit.
(11)
The signal processing device according to any one of (1) to (10), wherein the signal processing unit further includes a distance calculation unit that calculates the distance to the object on the basis of pixel data corrected with the correction parameter calculated by the characteristic calculation unit.
(12)
A signal processing method in which a signal processing device that performs processing of calculating a distance to an object on the basis of pixel data obtained by a pixel that receives reflected light, which is irradiation light emitted from a predetermined light emitting source and reflected by the object,
calculates a correction parameter for correcting characteristics of a first charge detection unit and a second charge detection unit of the pixel on the basis of a luminance model corresponding to a shape of an emission waveform of the irradiation light and a shape of an exposure waveform of the pixel.
(13)
A range-finding module including:
a predetermined light emitting source; and
a range-finding sensor having a pixel that receives reflected light returned by an object reflecting irradiation light emitted from the light emitting source,
wherein the range-finding sensor has a characteristic calculation unit that calculates a correction parameter for correcting characteristics of a first charge detection unit and a second charge detection unit of the pixel, based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
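As a practical aside on items (1) to (4), (11) and (12) above, the following is a minimal sketch, not taken from this disclosure, of how a correction parameter between the two charge detection units (taps) of an indirect time-of-flight pixel might be derived under a sine-wave luminance model: a per-pixel gain and offset mismatch between the first and second taps is estimated from four-phase measurements, tap B is mapped onto tap A's characteristic, and the corrected data are converted to a distance. The array layout, the pairing of exposure phases, and all names (estimate_tap_mismatch, depth_from_corrected_taps, mod_freq_hz) are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def estimate_tap_mismatch(tap_a, tap_b):
    """Estimate a per-pixel gain/offset relating tap B to tap A.

    tap_a, tap_b: arrays of shape (4, H, W) holding the 0/90/180/270-degree
    measurements of the first and second charge detection units.
    Under a sine-wave luminance model, tap B at phase theta observes the
    same quantity as tap A at phase theta + 180 degrees, so the two taps
    can be regressed against each other to obtain correction parameters.
    """
    x = tap_a
    y = np.roll(tap_b, -2, axis=0)            # pair B(180) with A(0), B(270) with A(90), ...
    mx, my = x.mean(axis=0), y.mean(axis=0)
    var = ((x - mx) ** 2).mean(axis=0)
    cov = ((x - mx) * (y - my)).mean(axis=0)
    gain = cov / np.maximum(var, 1e-12)       # per-pixel gain mismatch
    offset = my - gain * mx                   # per-pixel offset mismatch
    return gain, offset

def depth_from_corrected_taps(tap_a, tap_b, gain, offset, mod_freq_hz):
    """Map tap B onto tap A's characteristic, then compute phase and distance."""
    b_corr = (tap_b - offset) / np.maximum(gain, 1e-12)
    i = (tap_a[0] - b_corr[0]) - (tap_a[2] - b_corr[2])
    q = (tap_a[1] - b_corr[1]) - (tap_a[3] - b_corr[3])
    phase = np.arctan2(q, i) % (2.0 * np.pi)
    return SPEED_OF_LIGHT * phase / (4.0 * np.pi * mod_freq_hz)
```

A simple gain/offset pair is only one possible parameterization; the luminance-model-based characteristic calculation described above is more general.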
11 range-finding module, 12 light emitting unit, 13 range-finding sensor, 14 control unit, 15 light receiving unit, 16 signal processing unit, 31 pixel, 32 pixel array, 52A first tap, 52B second tap, 71 model determination unit, 72 characteristic calculation unit, 73 signal correction unit, 74 distance calculation unit, 81A, 81B learner, 91 optimum model selection unit, 101 evaluation unit, 201 smartphone, 202 range-finding module
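Items (5) to (8) above describe estimating, for each tap, a luminance function assumed to be a harmonic wave, using a learner trained so that its input is reproduced by the learned function and so that the two taps' functions coincide. A hedged sketch of that idea is given below, with a truncated Fourier series fitted by plain gradient descent standing in for the unspecified machine-learning learner; the coupling weight, the learning rate, and the assumption that samples are normalised to roughly unit scale are all illustrative choices.

```python
import numpy as np

def fit_harmonic_luminance(samples_a, samples_b, phases_rad,
                           order=3, couple_weight=1.0, lr=1e-2, steps=2000):
    """Fit truncated Fourier series f_A(theta), f_B(theta) to per-tap samples.

    samples_a, samples_b: shape (N,) observed values of the first and second
    charge detection units; phases_rad: shape (N,) exposure phases in radians.
    The loss combines per-tap reconstruction error with a soft constraint
    that the two luminance functions be identical.
    Returns the coefficient vectors [a0, a1, b1, ..., a_K, b_K] for each tap.
    """
    def design(theta):
        cols = [np.ones_like(theta)]
        for k in range(1, order + 1):
            cols += [np.cos(k * theta), np.sin(k * theta)]
        return np.stack(cols, axis=1)                    # shape (N, 2*order + 1)

    X = design(np.asarray(phases_rad, dtype=float))
    wa = np.zeros(X.shape[1])
    wb = np.zeros(X.shape[1])
    n = X.shape[0]
    for _ in range(steps):
        ra = X @ wa - samples_a                          # reconstruction error, tap A
        rb = X @ wb - samples_b                          # reconstruction error, tap B
        diff = wa - wb                                   # "same luminance function" term
        wa -= lr * (X.T @ ra / n + couple_weight * diff)
        wb -= lr * (X.T @ rb / n - couple_weight * diff)
    return wa, wb
```

A correction parameter could then be read off from the fitted coefficients, for example as the gain and phase difference between the fundamental components of f_A and f_B.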

Claims (13)

  1.  A signal processing device comprising:
     a signal processing unit that performs processing of calculating a distance to an object based on pixel data obtained by a pixel that receives reflected light returned by the object reflecting irradiation light emitted from a predetermined light emitting source,
     wherein the signal processing unit has a characteristic calculation unit that calculates a correction parameter for correcting characteristics of a first charge detection unit and a second charge detection unit of the pixel, based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  2.  The signal processing device according to claim 1, wherein the characteristic calculation unit calculates the correction parameter based on a predetermined luminance model selected from among a plurality of luminance models.
  3.  The signal processing device according to claim 2, wherein the predetermined luminance model is a triangular wave.
  4.  The signal processing device according to claim 2, wherein the predetermined luminance model is a sine wave.
  5.  The signal processing device according to claim 2, wherein the predetermined luminance model is a harmonic wave.
  6.  The signal processing device according to claim 5, wherein the characteristic calculation unit estimates, by machine learning, a luminance function representing a luminance waveform assumed to be a harmonic wave for each of the first charge detection unit and the second charge detection unit, and calculates the correction parameter.
  7.  The signal processing device according to claim 6, wherein a learner of the machine learning learns the luminance function corresponding to the first charge detection unit or the second charge detection unit, and
     the characteristic calculation unit trains the learner so that the difference between the input of the learner and the value of the luminance function obtained by the learning becomes small.
  8.  The signal processing device according to claim 7, wherein the characteristic calculation unit trains the learner further based on a condition that a first luminance function corresponding to the first charge detection unit and a second luminance function corresponding to the second charge detection unit are identical.
  9.  The signal processing device according to claim 1, wherein the characteristic calculation unit calculates the correction parameters for a plurality of luminance models, and
     the signal processing device further comprises a selection unit that selects, from among the correction parameters of the plurality of luminance models, the correction parameter having the smallest error with respect to a phase shift amount calculated from an input image.
  10.  The signal processing device according to claim 1, further comprising an evaluation unit that evaluates the correction parameter calculated by the characteristic calculation unit.
  11.  The signal processing device according to claim 1, wherein the signal processing unit further includes a distance calculation unit that calculates the distance to the object based on pixel data corrected with the correction parameter calculated by the characteristic calculation unit.
  12.  A signal processing method in which a signal processing device that performs processing of calculating a distance to an object based on pixel data obtained by a pixel that receives reflected light returned by the object reflecting irradiation light emitted from a predetermined light emitting source
     calculates a correction parameter for correcting characteristics of a first charge detection unit and a second charge detection unit of the pixel, based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
  13.  A range-finding module comprising:
     a predetermined light emitting source; and
     a range-finding sensor having a pixel that receives reflected light returned by an object reflecting irradiation light emitted from the light emitting source,
     wherein the range-finding sensor has a characteristic calculation unit that calculates a correction parameter for correcting characteristics of a first charge detection unit and a second charge detection unit of the pixel, based on a luminance model corresponding to the shape of the emission waveform of the irradiation light and the shape of the exposure waveform of the pixel.
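Finally, the model selection of claim 9 and the evaluation of claim 10 can be pictured as a loop over candidate luminance models that keeps the parameters whose corrected phase deviates least from a phase shift amount computed from an input image. The sketch below is hypothetical; calibrate and phase_error stand in for routines the claims do not specify.

```python
from typing import Any, Callable, Optional, Sequence, Tuple

def select_best_luminance_model(
    models: Sequence[str],
    calibrate: Callable[[str, Any], Any],
    phase_error: Callable[[str, Any, Any, Any], float],
    frames: Any,
    reference_phase: Any,
) -> Tuple[str, Any, float]:
    """Return (model, correction_parameters, error) for the best candidate."""
    best: Optional[Tuple[str, Any, float]] = None
    for model in models:                              # e.g. ("triangle", "sine", "harmonic")
        params = calibrate(model, frames)             # correction parameters for this model
        err = phase_error(model, params, frames, reference_phase)
        if best is None or err < best[2]:
            best = (model, params, err)
    if best is None:
        raise ValueError("at least one candidate luminance model is required")
    return best
```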
PCT/JP2021/006075 2020-03-04 2021-02-18 Signal processing device, signal processing method, and range-finding module WO2021177045A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020036524 2020-03-04
JP2020-036524 2020-03-04

Publications (1)

Publication Number Publication Date
WO2021177045A1 true WO2021177045A1 (en) 2021-09-10

Family

ID=77612616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/006075 WO2021177045A1 (en) 2020-03-04 2021-02-18 Signal processing device, signal processing method, and range-finding module

Country Status (1)

Country Link
WO (1) WO2021177045A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1020036A (en) * 1996-06-28 1998-01-23 Toyota Central Res & Dev Lab Inc Method and device for measuring distance
JP2008520988A * 2004-11-23 2008-06-19 IEE International Electronics & Engineering S.A. Error compensation method for 3D camera
JP2010203877A (en) * 2009-03-03 2010-09-16 Topcon Corp Distance measuring device
JP2020504310A * 2017-01-20 2020-02-06 Carnegie Mellon University A method for epipolar time-of-flight imaging
JP2019191119A * 2018-04-27 2019-10-31 Sony Semiconductor Solutions Corporation Range-finding processing device, range-finding module, range-finding processing method and program
CN110361751A * 2019-06-14 2019-10-22 Shenzhen Orbbec Co., Ltd. Time-of-flight depth camera and distance measurement method with reduced noise using single-frequency modulation and demodulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SCHMIDT MIRKO, ZIMMERMANN KLAUS, JAHNE BERND: "High frame rate for 3D time-of-flight cameras by dynamic sensor calibration", 2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL PHOTOGRAPHY (ICCP), 21 April 2011 (2011-04-21), pages 1-8, XP031943266, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/document/5753121> DOI: 10.1109/ICCPHOT.2011.5753121 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023139916A1 * 2022-01-21 2023-07-27 Koito Manufacturing Co., Ltd. Measurement device

Similar Documents

Publication Publication Date Title
TWI814804B (en) Distance measurement processing apparatus, distance measurement module, distance measurement processing method, and program
WO2021085128A1 (en) Distance measurement device, measurement method, and distance measurement system
WO2020241294A1 (en) Signal processing device, signal processing method, and ranging module
WO2017195459A1 (en) Imaging device and imaging method
US20210174127A1 (en) Image processing device, image processing method, and program
TWI798408B (en) Ranging processing device, ranging module, ranging processing method, and program
JP2018007210A (en) Signal processing device and method and imaging device
WO2021177045A1 (en) Signal processing device, signal processing method, and range-finding module
WO2020209079A1 (en) Distance measurement sensor, signal processing method, and distance measurement module
WO2020246264A1 (en) Distance measurement sensor, signal processing method, and distance measurement module
WO2021010174A1 (en) Light receiving device and method for driving light receiving device
WO2021065500A1 (en) Distance measurement sensor, signal processing method, and distance measurement module
WO2021065494A1 (en) Distance measurement sensor, signal processing method, and distance measurement module
WO2021039458A1 (en) Distance measuring sensor, driving method therefor, and distance measuring module
JP7476170B2 (en) Signal processing device, signal processing method, and ranging module
WO2020203331A1 (en) Signal processing device, signal processing method, and ranging module
CN114424084A (en) Lighting apparatus, lighting apparatus control method, and distance measurement module
WO2022004441A1 (en) Ranging device and ranging method
WO2021065495A1 (en) Ranging sensor, signal processing method, and ranging module
WO2021131684A1 (en) Ranging device, method for controlling ranging device, and electronic apparatus
WO2021124918A1 (en) Signal processing device, signal processing method, and range finding device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21765493

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21765493

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP