WO2021006048A1 - Signal processing device and signal processing method - Google Patents

Signal processing device and signal processing method

Info

Publication number
WO2021006048A1
WO2021006048A1 (PCT/JP2020/024971)
Authority
WO
WIPO (PCT)
Prior art keywords
light
data
drive mode
unit
distance measuring
Prior art date
Application number
PCT/JP2020/024971
Other languages
French (fr)
Japanese (ja)
Inventor
基 三原
俊 海津
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2021006048A1 publication Critical patent/WO2021006048A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/32Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • The present technology relates to a signal processing device and a signal processing method, and more particularly to a signal processing device and a signal processing method that make it easier to obtain impulse response data of reflected light in the time direction.
  • As a distance measuring method used in a distance measuring module, there is, for example, the Indirect ToF (Time of Flight) method.
  • In this method, light is irradiated toward an object, and the reflected light returning from the object's surface is detected. The distance to the object is then calculated based on the measured flight time of the reflected light.
  • Exposure control in the Indirect ToF sensor is realized, for example, by using an image sensor provided with two charge storage units per pixel and distributing the photoelectrically converted charges to the two charge storage units at high speed.
  • Patent Document 1 and Non-Patent Document 1 propose sensors that have four or eight charge storage units per pixel and acquire the impulse response of reflected light in the time direction on the order of several tens of ns.
  • Non-Patent Documents 2 to 4 propose techniques that input the output of the Indirect ToF sensor to a neural network, reduce the depth estimation error caused mainly by multiple reflections, and output accurate depth information.
  • In Patent Document 1, in order to obtain the impulse response of reflected light in the time direction, information is acquired in a time-division manner while changing many parameters, so image capture alone takes a long time and the user cannot perform a simple measurement. Moreover, since the method can be applied only to a completely stationary object, it is not suitable for use in a general environment or for consumer use.
  • The techniques of Non-Patent Documents 2 to 4 have the effect of reducing the depth estimation error due to multiple reflections, but since their output is a depth map, the impulse response of the reflected light in the time direction cannot be obtained.
  • The present technology was made in view of such a situation, and makes it possible to obtain the impulse response data of reflected light in the time direction more easily.
  • A signal processing device of one aspect of the present technology includes a teacher data generation unit that generates, as teacher data, impulse response data of the reflected light from the subject in the time direction from the received-light data output in the second drive mode by a ranging sensor that receives the reflected light reflected by the subject in a first drive mode or a second drive mode, and a learning unit that calculates model parameters of a learning model that learns the relationship between the received-light data in the first drive mode and the impulse response data of the reflected light in the time direction.
  • In a signal processing method of one aspect of the present technology, a signal processing device generates, as teacher data, impulse response data of the reflected light from the subject in the time direction from the received-light data output in the second drive mode by a distance measuring sensor that receives the reflected light reflected by the subject in a first drive mode or a second drive mode, and calculates model parameters of a learning model that learns the relationship between the received-light data in the first drive mode and the impulse response data of the reflected light in the time direction.
  • In another aspect of the present technology, impulse response data of the reflected light from the subject in the time direction is generated as teacher data from the received-light data output in the second drive mode by the distance measuring sensor that receives the reflected light reflected by the subject in the first drive mode or the second drive mode, and the model parameters of the learning model that learns the relationship between the received-light data in the first drive mode and the impulse response data of the reflected light in the time direction are calculated.
  • the signal processing device of one aspect of the present technology can be realized by causing a computer to execute a program.
  • the program to be executed by the computer can be provided by transmitting through a transmission medium or by recording on a recording medium.
  • the signal processing device may be an independent device or an internal block constituting one device.
  • FIG. 1 is a block diagram showing a configuration example of an embodiment of a distance measuring system to which the present technology is applied.
  • the distance measuring system 1 of FIG. 1 includes a distance measuring module 11, a teacher data generation unit 12, a learning unit 13, an estimation unit 14, and a control unit 15.
  • The distance measuring module 11 is an Indirect ToF type distance measuring module that irradiates a predetermined object as a subject with light, receives the reflected light that is reflected by the object and returns, and outputs received-light data corresponding to the distance information to the subject.
  • Hereinafter, the received-light data output by the ranging module 11 is also referred to as raw data.
  • the ranging module 11 has a first drive mode and a second drive mode as drive modes, operates in the drive mode specified by the control unit 15, and outputs Raw data.
  • The first drive mode is a drive mode in which, as in a general Indirect ToF type ranging module, the reflected light is received at light-receiving timings whose phases are shifted by 0°, 90°, 180°, or 270° with respect to the irradiation timing of the irradiation light; it is also referred to below as the normal drive mode.
  • The second drive mode is a drive mode in which the reflected light is received at light-receiving timings whose phase is shifted in 1° steps from 0° to 179° with respect to the irradiation timing of the irradiation light; it is also referred to below as the special drive mode.
  • When the distance measuring module 11 operates in the first drive mode (normal drive mode), it receives the reflected light and supplies the generated raw data to the learning unit 13 or the estimation unit 14. On the other hand, when it operates in the second drive mode (special drive mode), it receives the reflected light and supplies the generated raw data to the teacher data generation unit 12.
  • The teacher data generation unit 12 acquires the raw data of the second drive mode from the distance measuring module 11, generates, as teacher data, impulse response data of the reflected light from the subject in the time direction from the acquired raw data, and supplies it to the learning unit 13.
  • The learning unit 13 acquires the received-light data in the first drive mode supplied from the distance measuring module 11 and the impulse response data of the reflected light in the time direction supplied from the teacher data generation unit 12, and learns the relationship between the received-light data in the first drive mode and the impulse response data of the reflected light in the time direction. More specifically, the learning unit 13 has, for example, a learner using a convolutional neural network (hereinafter, CNN) as a learning model; the impulse response data of the reflected light in the time direction is given to the CNN as teacher data, and the received-light data in the first drive mode is given as student data, that is, as the input.
  • the calculated model parameters are supplied to the estimation unit 14.
  • The estimation unit 14 includes a predictor that executes prediction processing using the model parameters obtained by the learner of the learning unit 13. That is, the estimation unit 14 uses the learning model (CNN) with the model parameters calculated by the learning unit 13 as a prediction model, inputs the received-light data of the first drive mode supplied from the distance measuring module 11 into the prediction model, and outputs the impulse response data of the reflected light in the time direction as the prediction result.
  • Here, a CNN is adopted as the model used for the learner and the predictor of the machine learning, but other neural networks or other algorithms may be adopted; that is, the machine learning model and algorithm are not limited.
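To make the data flow concrete, a minimal sketch of such a learner/predictor is shown below in PyTorch. The layer widths, the 128 time bins, and the overall architecture are illustrative assumptions, since the patent does not specify the network structure.

```python
import torch
import torch.nn as nn

class ImpulseResponseCNN(nn.Module):
    """Maps 4-phase raw ToF data (P0, P90, P180, P270) to a per-pixel
    impulse response in the time direction. Layer widths and the number
    of time bins (128) are illustrative assumptions, not from the patent."""
    def __init__(self, time_bins: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=3, padding=1),   # 4 input phase images
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, time_bins, kernel_size=3, padding=1),  # one channel per time bin
        )

    def forward(self, raw4: torch.Tensor) -> torch.Tensor:
        # raw4: (batch, 4, H, W) -> impulse responses: (batch, time_bins, H, W)
        return self.net(raw4)

# Shape check with dummy data:
model = ImpulseResponseCNN()
r = model(torch.randn(1, 4, 240, 320))
print(r.shape)  # torch.Size([1, 128, 240, 320])
```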
  • The control unit 15 controls the distance measurement module 11, the teacher data generation unit 12, the learning unit 13, and the estimation unit 14 based on, for example, an operation control signal reflecting the user's operation from the operation unit of the device in which the distance measurement system 1 is incorporated, or an operation control signal from a higher-level control unit of that device.
  • When learning the model parameters, the control unit 15 drives the distance measuring module 11 sequentially in the first drive mode and the second drive mode, and the raw data is output to the learning unit 13 or the teacher data generation unit 12 accordingly. Further, in the prediction mode after the model parameters have been learned, the control unit 15 drives the ranging module 11 in the first drive mode and the raw data is output to the estimation unit 14.
  • The teacher data generation unit 12, the learning unit 13, the estimation unit 14, and the control unit 15, which process the raw data output from the distance measuring module 11 in the subsequent stage, can be configured as a single signal processing device 16 such as a DSP (Digital Signal Processor) or an ISP (Image Signal Processor).
  • each of the teacher data generation unit 12, the learning unit 13, the estimation unit 14, and the control unit 15 may be configured by individual DSPs or ISPs.
  • In the first drive mode, the ToF sensor receives the reflected light at light-receiving timings shifted by 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light, and outputs the resulting raw data P0, P90, P180, and P270.
  • the distance to the object can be calculated by performing a predetermined operation described later.
  • The reflected light received by the ToF sensor includes light that is directly irradiated onto and reflected by the object to be measured, as shown by the solid line in FIG. 2, and light that reaches the same pixel indirectly after being reflected by other objects (multipath).
  • The received-light data with the profile shown in FIG. 2 is impulse response data of the reflected light in the time direction. In principle, such data can be obtained by using impulse light emission with a very short emission time and shortening the accumulation time of the ToF sensor; in an actual usage environment, however, such driving is practically difficult.
  • Therefore, the estimation unit 14 of the distance measuring system 1 takes as input the four-phase raw data P0, P90, P180, and P270 output by the ToF sensor, predicts the impulse response data of the reflected light in the time direction, and outputs it as the prediction result.
  • the impulse response data in the time direction of the reflected light is data represented in a total of three dimensions, two dimensions in the spatial direction corresponding to the pixel array of the ToF sensor and one dimension in the time direction.
  • The estimation unit 14 can also output not only the impulse response data of the reflected light in the time direction but also the four-phase raw data P0, P90, P180, and P270, like a general ToF sensor.
  • The model parameters of the CNN that the estimation unit 14 uses as a predictor for predicting the impulse response data of the reflected light in the time direction are learned by the learning unit 13.
  • the teacher data generation unit 12 generates impulse response data in the time direction of the reflected light, which is the teacher data for the learning unit 13 to learn.
  • FIG. 4 is a block diagram showing a detailed configuration example of the ranging module 11.
  • the distance measuring module 11 includes a light emitting source 21, a light emitting control unit 22, a delay generation unit 23, and a distance measuring sensor 24.
  • the light emitting source 21 has, for example, an infrared laser diode as a light source, emits light at a timing corresponding to a light emitting control signal supplied from the light emitting control unit 22, and irradiates an object with irradiation light.
  • The light emission control signal is composed of, for example, a rectangular-wave pulse signal that goes high with a predetermined period, an impulse-like signal that momentarily goes high for a short time with a predetermined period, or the like.
  • the light emission control unit 22 controls the light emission source 21 by supplying a light emission control signal of a predetermined frequency (for example, 20 MHz or the like) to the light emission source 21. Further, the light emission control unit 22 also supplies a light emission control signal to the delay generation unit 23 in order to drive the distance measuring sensor 24 in accordance with the timing of light emission in the light emission source 21.
  • the delay generation unit 23 generates an exposure control signal delayed by a predetermined delay amount with respect to the timing (irradiation timing) indicated by the light emission control signal from the light emission control unit 22, and supplies the exposure control signal to the ranging sensor 24.
  • the delay amount is supplied from the storage unit 61 (FIG. 11) of the teacher data generation unit 12.
  • the distance measuring sensor 24 receives the reflected light from the object based on the exposure control signal supplied from the delay generation unit 23.
  • Since the exposure control signal supplied from the delay generation unit 23 is delayed by a predetermined phase difference (delay amount) from the light emission control signal supplied to the light emission source 21, the distance measuring sensor 24 receives light at a timing shifted by the predetermined phase difference from the irradiation timing of the light emission source 21.
  • the distance measuring sensor 24 is an Indirect ToF type ToF sensor.
  • the distance measuring sensor 24 has a normal drive mode (first drive mode) and a special drive mode (second drive mode) as operation modes, and is in the drive mode designated by the control unit 15 (FIG. 1). It works and outputs Raw data.
  • the normal drive mode and the special drive mode drive the distance measuring sensor 24 in the same manner, but differ in the exposure control signal.
  • The exposure control signal supplied from the delay generation unit 23 in the normal drive mode is a signal whose phase is shifted by 0°, 90°, 180°, or 270° with respect to the light emission control signal output by the light emission control unit 22.
  • The exposure control signal supplied from the delay generation unit 23 in the special drive mode is a signal whose phase is shifted by one of the 1° steps from 0° to 179° with respect to the light emission control signal output by the light emission control unit 22.
  • In the normal drive mode, the ranging sensor 24 receives the reflected light at light-receiving timings whose phases are shifted by 0°, 90°, 180°, or 270° from the irradiation timing of the irradiation light, and supplies the resulting raw data of phase differences 0°, 90°, 180°, and 270° to the learning unit 13 or the estimation unit 14.
  • In the special drive mode, the distance measuring sensor 24 receives the reflected light at light-receiving timings whose phase is changed in 1° steps from 0° to 179° with respect to the irradiation timing of the irradiation light, and supplies the resulting raw data of phase differences 0° to 179° to the storage unit 61 of the teacher data generation unit 12.
  • In other words, in the normal drive mode the distance measuring sensor 24 outputs four-phase (four types of) raw images with phases of 0°, 90°, 180°, and 270°, whereas in the special drive mode the ranging sensor 24 outputs 180-phase (180 types of) raw images with phases of 0° to 179°.
  • The distance measuring sensor 24 has a pixel array unit 32 in which pixels 31 are two-dimensionally arranged in a matrix in the row and column directions, and a drive control circuit 33 arranged in the peripheral region of the pixel array unit 32.
  • the pixel 31 generates an electric charge according to the amount of reflected light received, and outputs a signal corresponding to the electric charge.
  • The drive control circuit 33 outputs control signals for controlling the drive of the pixels 31 (for example, the distribution signal DIMIX, the selection signal ADDRESS DECODE, and the reset signal RST described later) based on the exposure control signal supplied from the delay generation unit 23.
  • the pixel 31 has a photodiode 51 and a first tap 52A and a second tap 52B that detect the charge photoelectrically converted by the photodiode 51.
  • the electric charge generated by one photodiode 51 is distributed to the first tap 52A or the second tap 52B.
  • The charge distributed to the first tap 52A is output as detection signal A from the signal line 53A, and the charge distributed to the second tap 52B is output as detection signal B from the signal line 53B.
  • the first tap 52A is composed of a transfer transistor 41A, an FD (Floating Diffusion) unit 42A, a selection transistor 43A, and a reset transistor 44A.
  • the second tap 52B is composed of a transfer transistor 41B, an FD unit 42B, a selection transistor 43B, and a reset transistor 44B.
  • Here, the pixel 31 adopts a 2-tap pixel structure having two charge distribution sections, the first tap 52A and the second tap 52B, but the charge distribution can also be realized with a 1-tap or 4-tap structure.
  • The reflected light is received by the photodiode 51 with a delay time ΔT.
  • the distribution signal DIMIX_A controls the on / off of the transfer transistor 41A
  • the distribution signal DIMIX_B controls the on / off of the transfer transistor 41B.
  • the distribution signal DIMIX_A is a signal having the same phase as the irradiation light
  • the distribution signal DIMIX_B has a phase in which the distribution signal DIMIX_A is inverted.
  • The charge generated by the photodiode 51 receiving the reflected light is transferred to the FD unit 42A while the transfer transistor 41A is on according to the distribution signal DIMIX_A, and is transferred to the FD unit 42B while the transfer transistor 41B is on according to the distribution signal DIMIX_B.
  • During a predetermined period in which the irradiation light of irradiation time T is periodically emitted, the charge transferred via the transfer transistor 41A is sequentially accumulated in the FD unit 42A, and the charge transferred via the transfer transistor 41B is sequentially accumulated in the FD unit 42B.
  • When the selection transistor 43A is turned on according to the selection signal ADDRESS DECODE_A after the charge accumulation period, the charge accumulated in the FD unit 42A is read out via the signal line 53A, and a detection signal A corresponding to the amount of that charge is output from the ranging sensor 24.
  • Similarly, when the selection transistor 43B is turned on according to the selection signal ADDRESS DECODE_B, the charge accumulated in the FD unit 42B is read out via the signal line 53B, and a detection signal B corresponding to the amount of that charge is output from the distance measuring sensor 24.
  • The charge stored in the FD unit 42A is discharged when the reset transistor 44A is turned on according to the reset signal RST_A, and the charge stored in the FD unit 42B is discharged when the reset transistor 44B is turned on according to the reset signal RST_B.
  • The pixel 31 distributes the electric charge generated by the reflected light received by the photodiode 51 to the first tap 52A or the second tap 52B according to the delay time ΔT, and outputs the detection signal A and the detection signal B.
  • the data obtained by AD-converting the detection signal A and the detection signal B into digital signals corresponds to the raw data described above.
  • The delay time ΔT corresponds to the time in which the light emitted from the light emitting source 21 travels to the object, is reflected by it, and then travels back to the distance measuring sensor 24; that is, it corresponds to the delay according to the distance to the object. Therefore, the distance measuring module 11 can obtain the distance (depth value) to the object corresponding to the delay time ΔT based on the detection signal A and the detection signal B.
  • The distance measuring sensor 24 receives the reflected light at light-receiving timings shifted by 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light, and outputs four phases (four types) of raw data.
  • Here, the detection signal A obtained by receiving light in the same phase as the irradiation light (phase 0°) is referred to as detection signal A0, the detection signal A obtained at the phase shifted by 90° (phase 90°) as detection signal A90, the detection signal A obtained at the phase shifted by 180° (phase 180°) as detection signal A180, and the detection signal A obtained at the phase shifted by 270° (phase 270°) as detection signal A270.
  • Similarly, the detection signal B obtained by receiving light in the same phase as the irradiation light (phase 0°) is referred to as detection signal B0, and the detection signals B obtained at the phases shifted by 90°, 180°, and 270° as detection signals B90, B180, and B270, respectively.
  • FIG. 8 is a diagram illustrating a method of calculating the depth value and the reliability by the 4 Phase method.
  • The depth value d corresponding to the distance to the object can be obtained by the following equation (1):
  • d = c·ΔT / 2 = c·φ / (4πf)   ... (1)
  • Here, c is the speed of light, ΔT is the delay time, and f is the modulation frequency of the light.
  • φ in equation (1) represents the phase shift amount [rad] of the reflected light, and is expressed by the following equation (2): φ = arctan(Q / I)   ... (2)
  • I and Q in equation (2) are calculated by the following equation (3) from the detection signals A0 to A270 and B0 to B270 obtained by setting the phase to 0°, 90°, 180°, and 270°: I = (A0 − B0) − (A180 − B180), Q = (A90 − B90) − (A270 − B270)   ... (3)
  • I and Q are signals obtained by converting the phase of a cos wave from polar coordinates to the Cartesian coordinate system (the IQ plane), assuming that the luminance change of the irradiation light is a cos wave.
  • The digital values of the detection signals A0 and B0 obtained by receiving light at phase 0° constitute the raw data of phase 0°; likewise, the digital values of A90 and B90 constitute the raw data of phase 90°, those of A180 and B180 the raw data of phase 180°, and those of A270 and B270 the raw data of phase 270°.
  • The reliability cnf can be obtained by the following equation (4): cnf = √(I² + Q²)   ... (4)
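Concretely, the 4Phase computation of equations (1) to (4) can be written as the following NumPy sketch. The I/Q differencing and the arctangent convention follow the standard Indirect ToF formulation reconstructed above; treat them as assumptions, since the patent's equation images are not reproduced here.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def depth_and_confidence(A, B, f_mod=20e6):
    """A, B: dicts of detection-signal frames per phase, e.g. A[0], A[90], A[180], A[270].
    Standard 4Phase formulation (assumed, matching equations (1)-(4) as reconstructed):
      I = (A0 - B0) - (A180 - B180),  Q = (A90 - B90) - (A270 - B270)
      phi = arctan(Q / I),  d = c * phi / (4 * pi * f_mod),  cnf = sqrt(I^2 + Q^2)
    """
    I = (A[0] - B[0]) - (A[180] - B[180])
    Q = (A[90] - B[90]) - (A[270] - B[270])
    phi = np.arctan2(Q, I) % (2 * np.pi)    # phase shift in [0, 2*pi), equation (2)
    depth = C * phi / (4 * np.pi * f_mod)   # equation (1)
    cnf = np.sqrt(I**2 + Q**2)              # equation (4)
    return depth, cnf

# Example with random per-phase frames:
rng = np.random.default_rng(0)
A = {p: rng.random((240, 320)) for p in (0, 90, 180, 270)}
B = {p: rng.random((240, 320)) for p in (0, 90, 180, 270)}
depth, cnf = depth_and_confidence(A, B)
```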
  • FIG. 9 is a conceptual diagram of the light emission control signal and the exposure control signal when the drive mode is the normal drive mode, and the raw data.
  • In the normal drive mode, the light emission control signal is, for example, a rectangular-wave pulse signal that goes high with a predetermined period, as shown in FIG. 9.
  • the exposure control signal is a rectangular wave pulse signal whose phase is shifted by 0 °, 90 °, 180 °, or 270 ° with respect to the irradiation timing indicated by the light emission control signal.
  • The distance measuring sensor 24 outputs the detection signals A0 and B0 of phase 0° as raw data P0 of phase 0°, the detection signals A90 and B90 of phase 90° as raw data P90 of phase 90°, the detection signals A180 and B180 of phase 180° as raw data P180 of phase 180°, and the detection signals A270 and B270 of phase 270° as raw data P270 of phase 270°.
  • FIG. 10 is a conceptual diagram of a light emission control signal and an exposure control signal when the drive mode is a special drive mode, and raw data.
  • the light emission control signal is, for example, an impulse function-like signal that momentarily becomes high for a short time in a predetermined cycle.
  • The light emission control signal can be expressed, for example, as a periodic function δ(t) such as the following equation (5), which produces an impulse only at phase 0:
  • δ(t) = 1 (t = n/f), 0 (otherwise)   ... (5)
  • Here, f represents the emission frequency, t the time, and n an integer of 0 or more.
  • Hereinafter, the periodic function δ(t) is referred to as the emission function δ(t).
  • In the special drive mode, the exposure control signal is a rectangular-wave pulse signal whose phase is shifted in 1° steps from 0° to 179° with respect to the irradiation timing indicated by the light emission control signal.
  • the exposure period T is set to be the same as the normal drive mode.
  • The distance measuring sensor 24 outputs the detection signals A0 and B0 of phase 0° as raw data P0 of phase 0°, the detection signals A1 and B1 of phase 1° as raw data P1 of phase 1°, the detection signals A2 and B2 of phase 2° as raw data P2 of phase 2°, and so on, up to the detection signals A179 and B179 of phase 179°, which are output as raw data P179 of phase 179°.
  • FIG. 11 is a block diagram showing a detailed configuration example of the teacher data generation unit 12.
  • the teacher data generation unit 12 is composed of a storage unit 61 and a calculation unit 62, and operates when the drive mode is a special drive mode.
  • the storage unit 61 sets a delay amount with respect to the irradiation timing indicated by the light emission control signal, that is, a phase difference, and supplies the delay amount to the delay generation unit 23 of the distance measuring module 11.
  • Specifically, the storage unit 61 sets the phase difference from 0° to 179° in 1° steps, and stores the raw data P0 to P179 of a total of 180 phases sequentially supplied from the distance measuring sensor 24 while the phase difference is stepped from 0° to 179°. When the raw data P0 to P179 of all 180 phases have been accumulated, the storage unit 61 supplies them to the calculation unit 62.
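Operationally, this phase sweep amounts to the following control loop. The `set_delay_phase` and `capture_raw` interfaces are hypothetical names used only for illustration; the patent does not define a software API.

```python
import numpy as np

def sweep_phases(delay_generator, sensor, n_phases=180):
    """Acquire raw frames P0..P179 with the exposure phase shifted in 1-degree
    steps relative to the emission timing. `delay_generator.set_delay_phase`
    and `sensor.capture_raw` are hypothetical interfaces, not from the patent."""
    frames = []
    for phase_deg in range(n_phases):           # 0, 1, ..., 179 degrees
        delay_generator.set_delay_phase(phase_deg)
        frames.append(sensor.capture_raw())     # one raw frame per phase difference
    return np.stack(frames)                     # (180, H, W): F_xy(phi) sampled per pixel
```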
  • The calculation unit 62 generates, as teacher data, the impulse response data of the reflected light from the subject in the time direction from the raw data P0 to P179 of 180 phases supplied from the storage unit 61.
  • Let δ(t) be the function representing the known light emission control signal, E(t − φ) the function representing the known exposure control signal, and r(t) the unknown impulse response data of the reflected light from the object to be measured in the time direction. Then the luminance value Iφ(x, y) of a given pixel (x, y) acquired as raw data can be expressed by the following equation (6):
  • Iφ(x, y) = ∫[0, T] (δ(t) * r(t)) · E(t − φ) dt   ... (6)
  • Here, T is the exposure time, * is the operator representing the convolution integral, and φ is the phase difference (delay amount) ΔT between the irradiation timing of the light emission control signal and the exposure timing of the exposure control signal.
  • For a given pixel (x, y), the variation of the luminance values of the raw data P0 to P179 of all 180 phases, obtained by receiving light while changing the phase difference from 0° to 179°, defines a function Fxy(φ). This function Fxy(φ) can be expressed by equation (7).
  • In other words, the correlation function Fxy(φ) can be regarded as the unknown impulse response data r(t) of the reflected light in the time direction in a state "blurred" by the exposure function E(t − φ), which has a finite width.
  • Therefore, by performing a deconvolution operation on the correlation function Fxy(φ) with the exposure function E(t − φ) as the blur function, the correlation function can be converted into the impulse response data rxy(t) of the reflected light of the given pixel (x, y) in the time direction.
  • Specifically, the arithmetic unit 62 calculates the impulse response data rxy(t) of the given pixel (x, y) in the time direction by convolving the correlation function Fxy(φ) with the inverse function of the exposure function E(t − φ) and the inverse function of the emission function δ(t), as represented by the following equation (8):
  • rxy(t) = Fxy(φ) * E⁻¹(t − φ) * δ⁻¹(t)   ... (8)
  • Here, * is the operator representing the convolution integral.
  • In practice, a Wiener filter M(t) for noise suppression can be used to calculate the impulse response data rxy(t) of the reflected light of the given pixel (x, y) in the time direction.
  • The impulse response data rxy(t) in the time direction of the reflected light of the given pixel (x, y) when the Wiener filter M(t) is used is represented by the following equation (9), where * is the operator representing the convolution integral and M(t) is the Wiener filter:
  • rxy(t) = M(t) * Fxy(t)   ... (9)
  • The Wiener filter M(t) is represented by equation (10). In equation (10), FT⁻¹ is the operator performing the inverse Fourier transform corresponding to the Fourier transform FT, F̂xy denotes FT[Fxy], and F̂*xy denotes the complex conjugate of F̂xy.
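In FFT terms, deconvolving the correlation function by the finite-width exposure function with Wiener-style regularization can be sketched as follows. This is standard Wiener deconvolution stated under assumptions: the regularization constant `k` and the use of the exposure function's spectrum are illustrative choices, not the patent's equations (9) and (10) verbatim.

```python
import numpy as np

def wiener_deconvolve(F_phi, E, k=1e-3):
    """Recover the time-direction impulse response r(t) of one pixel from its
    correlation function F_xy(phi) by deconvolving the exposure function E(t).
    Standard Wiener deconvolution; the regularization constant k and this exact
    formulation are assumptions, not the patent's equations verbatim."""
    F_hat = np.fft.fft(F_phi)             # spectrum of the measured correlation
    E_hat = np.fft.fft(E, n=len(F_phi))   # spectrum of the known exposure function
    # Wiener filter: conjugate of E over |E|^2 plus a noise-suppression term
    r_hat = F_hat * np.conj(E_hat) / (np.abs(E_hat) ** 2 + k)
    return np.real(np.fft.ifft(r_hat))    # noise-suppressed impulse response r(t)

# Example: rectangular exposure window over a 180-sample correlation function
F_phi = np.random.default_rng(1).random(180)
E = np.zeros(180)
E[:40] = 1.0                              # finite-width exposure window
r = wiener_deconvolve(F_phi, E)
```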
  • Since the impulse response data r(t) of the reflected light in the time direction is calculated for each pixel 31 of the pixel array unit 32, the result is three-dimensional data in total: two dimensions in the spatial direction corresponding to the pixel array of the ToF sensor and one dimension in the time direction.
  • As described above, the calculation unit 62 calculates the impulse response data r(t) of the reflected light from the subject in the time direction from the raw data P0 to P179 of 180 phases supplied from the storage unit 61, and supplies it to the learning unit 13.
  • FIG. 12 is a block diagram showing a detailed configuration example of the learning unit 13.
  • the learning unit 13 is composed of a model calculation unit 81, an error calculation unit 82, and a parameter update unit 83.
  • To the learning unit 13, the raw data P0 of phase 0°, the raw data P90 of phase 90°, the raw data P180 of phase 180°, and the raw data P270 of phase 270°, which are generated by driving the distance measuring module 11 in the normal drive mode, are supplied from the distance measuring module 11.
  • In addition, the impulse response data r(t) of the reflected light in the time direction, which the teacher data generation unit 12 generates based on the 180-phase raw data P0 to P179 in 1° steps from phase 0° to 179° obtained by driving the distance measuring module 11 in the special drive mode, is supplied from the teacher data generation unit 12.
  • The raw data P0 to P179 are acquired in a static scene so that the two sets of data correspond to each other. That is, the raw data used by the learning unit 13 are obtained by the distance measuring module 11 measuring the same stationary subject in each of the normal drive mode and the special drive mode.
  • The four-phase raw data are supplied to the model calculation unit 81 as student data, and the impulse response data r(t) of the reflected light in the time direction is supplied to the error calculation unit 82 of the learning unit 13 as teacher data of the learning model.
  • The model calculation unit 81 substitutes the raw data P0 of phase 0°, P90 of phase 90°, P180 of phase 180°, and P270 of phase 270°, supplied as student data, into the learning model using the model parameters set by the parameter update unit 83, and calculates the impulse response data r(t) of the reflected light in the time direction.
  • the impulse response data r (t) in the time direction of the reflected light calculated by the learning model is supplied to the error calculation unit 82.
  • The error calculation unit 82 calculates the error between the impulse response data r(t) of the reflected light in the time direction supplied from the model calculation unit 81 and the impulse response data r(t) of the reflected light in the time direction supplied as teacher data from the teacher data generation unit 12, and supplies the error to the parameter update unit 83.
  • the parameter update unit 83 updates the model parameters so that the error is small and supplies the model parameters to the model calculation unit 81.
  • the current model parameters are supplied to the estimation unit 14 (FIG. 1).
  • the learning model is sufficiently learned by repeating the update of the model parameters a predetermined number of times.
  • the learned model parameters are supplied from the learning unit 13 to the estimation unit 14 and set as model parameters of the CNN which is a predictor.
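As a concrete illustration of the model calculation / error calculation / parameter update loop described above, a minimal PyTorch training sketch follows. The MSE loss, the Adam optimizer, the learning rate, and the iteration count are all assumptions; the patent does not specify them.

```python
import torch
import torch.nn as nn

def train(model, raw4_batch, r_teacher, steps=1000, lr=1e-4):
    """raw4_batch: (N, 4, H, W) student data from the normal drive mode.
    r_teacher:  (N, T, H, W) teacher impulse responses from the special drive
    mode. MSE loss and Adam are illustrative choices, not from the patent."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # parameter update unit 83
    criterion = nn.MSELoss()                                 # error calculation unit 82
    for _ in range(steps):
        r_pred = model(raw4_batch)           # model calculation unit 81
        loss = criterion(r_pred, r_teacher)  # error versus the teacher data
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                     # update so that the error becomes small
    return model.state_dict()                # learned model parameters
```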
  • As a result, the estimation unit 14 can receive as inputs the four-phase raw data P0, P90, P180, and P270 output by the ranging sensor 24, and predict and output the impulse response data r(t) of the reflected light in the time direction.
  • the learning process in which the ranging system 1 learns the prediction model (learning model) for predicting the impulse response data in the time direction of the reflected light will be described with reference to the flowchart of FIG. This process is started, for example, when an operation unit (not shown) instructs the start of the learning process.
  • step S1 the control unit 15 sets the ranging module 11 to the normal drive mode, which is the first drive mode.
  • the light emission control unit 22 supplies a light emission control signal to the light emission source 21 that repeats irradiation on / off at the irradiation time T.
  • the delay generation unit 23 sequentially generates an exposure control signal whose phase is shifted by 0 °, 90 °, 180 °, and 270 ° with respect to the light emission control signal, and supplies the exposure control signal to the distance measuring sensor 24.
  • step S2 the distance measuring sensor 24 receives the reflected light that is reflected by the object and returned from the irradiation light emitted from the light emitting source 21. More specifically, the ranging sensor 24 sequentially receives the reflected light with a phase difference of 0 °, 90 °, 180 °, and 270 ° with respect to the irradiation timing of the irradiation light according to the exposure control signal. Then, the four-phase Raw data P 0 , P 90 , P 180 , and P 270 obtained as a result are output to the learning unit 13.
  • step S3 the control unit 15 sets the ranging module 11 to the special drive mode, which is the second drive mode.
  • the light emission control unit 22 supplies the light emission source 21 with an impulse function-like light emission control signal that becomes high for a short time in a predetermined cycle.
  • the delay generation unit 23 sequentially generates an exposure control signal whose phase is shifted by 1 ° from 0 ° to 179 ° with respect to the irradiation timing indicated by the light emission control signal, and supplies the exposure control signal to the distance measuring sensor 24.
  • step S4 the distance measuring sensor 24 receives the reflected light that is reflected by the object and returned by the irradiation light emitted from the light emitting source 21. More specifically, the ranging sensor 24 receives the reflected light by changing the phase difference in 1 ° increments from 0 ° to 179 ° with respect to the irradiation timing of the irradiation light according to the exposure control signal.
  • the obtained 180-phase Raw data P 0 to P 179 are output to the teacher data generation unit 12.
  • In step S5, the teacher data generation unit 12 calculates the correlation function Fxy(φ) of each pixel (x, y) of the pixel array unit 32 from the 180-phase raw data P0 to P179 of 0° to 179° sequentially supplied from the distance measuring sensor 24.
  • the storage unit 61 stores raw data P 0 to P 179 of 180 phases from 0 ° to 179 ° sequentially supplied from the distance measuring sensor 24.
  • The calculation unit 62 then calculates the correlation function Fxy(φ) of each pixel (x, y) of the pixel array unit 32 using the above-mentioned equation (7).
  • In step S6, the calculation unit 62 calculates the impulse response data rxy(t) of the reflected light in the time direction using the calculated correlation function Fxy(φ). More specifically, the arithmetic unit 62 calculates the impulse response data rxy(t) in the time direction of the reflected light of each pixel (x, y) of the pixel array unit 32 by convolving the correlation function Fxy(φ) with the inverse function of the exposure function E(t − φ) and the inverse function of the emission function δ(t), as represented by the above equation (8).
  • the calculated impulse response data r xy (t) of the reflected light of each pixel (x, y) in the time direction is supplied to the learning unit 13.
  • In step S7, the learning unit 13 trains the prediction model using the four-phase raw data P0, P90, P180, and P270 generated in the normal drive mode as student data and the impulse response data rxy(t) of the reflected light in the time direction generated by the teacher data generation unit 12 as teacher data.
  • the parameters (model parameters) of the prediction model obtained by the learning in step S7 are supplied to the estimation unit 14, and the learning process is completed.
  • the prediction process of the distance measuring system 1 that predicts and outputs the impulse response data in the time direction of the reflected light in the normal drive mode will be described. This process is started, for example, when an operation unit (not shown) instructs the start of the prediction process.
  • step S11 the control unit 15 sets the ranging module 11 to the normal drive mode, which is the first drive mode.
  • the light emission control unit 22 supplies a light emission control signal to the light emission source 21 that repeats irradiation on / off at the irradiation time T.
  • the delay generation unit 23 sequentially generates an exposure control signal whose phase is shifted by 0 °, 90 °, 180 °, and 270 ° with respect to the light emission control signal, and supplies the exposure control signal to the distance measuring sensor 24.
  • step S12 the distance measuring sensor 24 receives the reflected light that is reflected by the object and returned by the irradiation light emitted from the light emitting source 21. More specifically, the ranging sensor 24 sequentially receives the reflected light with a phase difference of 0 °, 90 °, 180 °, and 270 ° with respect to the irradiation timing of the irradiation light according to the exposure control signal. Then, the four-phase Raw data P 0 , P 90 , P 180 , and P 270 obtained as a result are output to the estimation unit 14.
  • In step S13, the estimation unit 14 inputs the four-phase raw data P0, P90, P180, and P270 supplied from the distance measuring sensor 24 into the predictor (CNN) in which the model parameters learned by the learning unit 13 are set, and generates (predicts) the impulse response data of the reflected light in the time direction. Then, the estimation unit 14 outputs the impulse response data of the reflected light in the time direction obtained as the prediction result to the outside, and ends the prediction process.
  • The estimation unit 14 may output not only the generated impulse response data of the reflected light in the time direction but also, at the same time, the four-phase raw data P0, P90, P180, and P270 supplied from the ranging sensor 24.
  • As described above, according to the learning process of the distance measuring system 1, when the distance measuring sensor 24, which is an Indirect ToF type ToF sensor, outputs received-light data corresponding to the distance information to the subject in the normal drive mode, the model parameters of a predictor can be generated based on that received-light data, the predictor producing the impulse response data of the reflected light in the time direction, that is, received-light data with improved resolution in the time direction.
  • According to the prediction process, using the model parameters learned in the learning process, the impulse response data of the reflected light in the time direction, that is, received-light data with improved resolution in the time direction, can be generated from the four-phase raw data P0, P90, P180, and P270 output by the distance measuring sensor 24 in the normal drive mode.
  • Therefore, with the ranging system 1, the impulse response data of the reflected light in the time direction can be obtained more easily.
  • In equation (11), c is the speed of light, ∠ is the operator that extracts the phase in complex space, and R(f) represents the complex Fourier transform, at the emission frequency f, of the impulse response data r(t) of the reflected light in the time direction.
  • By giving the learner of the learning unit 13, as student data, the four-phase raw data P0, P90, P180, and P270 obtained at a plurality of emission frequencies f (emission frequencies f1 and f2), learning can be performed while excluding error factors due to multipath and the like, and model parameters with high prediction accuracy can be learned.
  • In this case, the special drive mode may use the 180-phase raw data P0 to P179 corresponding to either one of the emission frequencies f1 and f2. In other words, even when phase data are acquired at a plurality of emission frequencies in the normal drive mode, the raw data acquired in the special drive mode are the same as in the case of a single emission frequency.
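As a sketch of how such multi-frequency student data could be assembled, the per-frequency phase images can simply be stacked into one input tensor for the learner. The 8-channel layout and channel ordering are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def stack_multifrequency(raw_f1, raw_f2):
    """raw_f1, raw_f2: dicts of phase images {0, 90, 180, 270} acquired at the
    emission frequencies f1 and f2. Returns an 8-channel student-data tensor;
    the channel ordering is an illustrative assumption."""
    phases = (0, 90, 180, 270)
    channels = [raw_f1[p] for p in phases] + [raw_f2[p] for p in phases]
    return np.stack(channels)  # (8, H, W) input for the learner

rng = np.random.default_rng(2)
raw_f1 = {p: rng.random((240, 320)) for p in (0, 90, 180, 270)}
raw_f2 = {p: rng.random((240, 320)) for p in (0, 90, 180, 270)}
student = stack_multifrequency(raw_f1, raw_f2)  # teacher data stays single-frequency
```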
  • In FIGS. 16 to 24, parts corresponding to those in FIG. 1 and the like described above are denoted in the same way, and their description will be omitted as appropriate.
  • FIG. 16 is a block diagram showing a first application example using the ranging system 1.
  • the distance measuring system 1 of FIG. 16 is composed of a distance measuring module 11, a signal processing device 101, and a peak position calculation unit 102.
  • the signal processing device 101 of FIG. 16 corresponds to the signal processing device 16 shown by the broken line in FIG. 1, and the learned model parameters are set in the estimation unit 14.
  • the signal processing device 101 may be composed only of the estimation unit 14 in which the trained model parameters are set.
  • The ranging module 11 is set to the normal drive mode, which is the first drive mode, receives the reflected light of the irradiation light that is emitted from the light emitting source 21 and reflected by the object, and outputs the four-phase raw data P0, P90, P180, and P270.
  • The signal processing device 101 (estimation unit 14) inputs the four-phase raw data P0, P90, P180, and P270 supplied from the ranging sensor 24 of the ranging module 11 into the predictor, thereby generating the impulse response data of the reflected light in the time direction, and outputs it to the peak position calculation unit 102.
  • The peak position calculation unit 102 detects the first peak from the impulse response data in the time direction output from the signal processing device 101 (estimation unit 14), and generates and outputs a corrected depth image using the distance corresponding to the first peak as the depth value of the object.
  • When multipath is present, the impulse response data of the reflected light in the time direction becomes received-light data with a profile having two or more peaks along the time axis. The light directly reflected from the object produces the first peak, and the light that is indirectly reflected and enters the same pixel produces the second peak, the third peak, and so on. The first peak, the second peak, the third peak, and so forth are counted in order from the peak at the earlier position on the time axis.
  • Since the peak position calculation unit 102 adopts the distance corresponding to the first peak as the true depth value, a more accurate depth image can be generated.
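A minimal sketch of this first-peak correction is shown below. The per-bin duration `bin_seconds` and the use of SciPy's peak finder are illustrative assumptions; the patent does not prescribe a specific peak-detection method.

```python
import numpy as np
from scipy.signal import find_peaks

C = 299_792_458.0  # speed of light [m/s]

def first_peak_depth(r_t, bin_seconds):
    """Return the depth corresponding to the first (direct-reflection) peak of a
    pixel's time-direction impulse response. `bin_seconds`, the duration of one
    time bin, is an assumed calibration parameter."""
    peaks, _ = find_peaks(r_t)             # all peaks along the time axis
    if len(peaks) == 0:
        return np.nan                      # no reflection detected
    t_first = peaks[0] * bin_seconds       # earliest peak = direct path
    return C * t_first / 2.0               # round trip -> one-way distance

# Two-peak example: direct reflection at bin 10, multipath at bin 25
r_t = np.zeros(128)
r_t[10] = 1.0
r_t[25] = 0.4
print(first_peak_depth(r_t, bin_seconds=1e-9))  # ~1.5 m
```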
  • FIG. 18 is a block diagram showing a second application example using the ranging system 1.
  • the distance measuring system 1 of FIG. 18 is composed of a distance measuring module 11, a signal processing device 101, and a material estimation unit 103.
  • The fact that the impulse response data of reflected light in the time direction differs depending on the material is disclosed in, for example, S. Su, F. Heide, R. Swanson, J. Klein, C. Callenberg, M. Hullin, and W. Heidrich, "Material Classification Using Raw Time-of-Flight Measurements," CVPR 2016.
  • the material estimation unit 103 estimates and outputs the material of the object as the subject from the impulse response data in the time direction of the reflected light output from the signal processing device 101 (estimation unit 14).
  • the material estimation unit 103 classifies the material of the object into, for example, cloth, wood, metal, and the like, and outputs the material.
  • The material estimation unit 103 can be configured with, for example, a feature extraction unit 111 based on a convolutional neural network having three-dimensional (x, y, t) convolutional layers and a classification unit 112 using fully connected layers. Of course, other configurations may be adopted as the estimator for estimating the material.
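A minimal PyTorch sketch of such a classifier follows, under the assumption of a small 3D-convolutional feature extractor and a single fully connected layer; all layer sizes and the class set are illustrative.

```python
import torch
import torch.nn as nn

class MaterialClassifier(nn.Module):
    """Feature extraction with 3D (x, y, t) convolutions followed by a fully
    connected classification head, as outlined for the material estimation
    unit 103. All sizes and the class set are illustrative assumptions."""
    def __init__(self, num_classes=3):  # e.g. cloth, wood, metal
        super().__init__()
        self.features = nn.Sequential(      # corresponds to feature extraction unit 111
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((4, 4, 4)),
        )
        self.classifier = nn.Sequential(    # corresponds to classification unit 112
            nn.Flatten(),
            nn.Linear(16 * 4 * 4 * 4, num_classes),
        )

    def forward(self, r):
        # r: (batch, 1, T, H, W) impulse response volume -> class logits
        return self.classifier(self.features(r))

logits = MaterialClassifier()(torch.randn(1, 1, 128, 32, 32))  # shape (1, 3)
```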
  • FIG. 20 is a block diagram showing a third application example using the ranging system 1.
  • the distance measuring system 1 of FIG. 20 is composed of a distance measuring module 11, a signal processing device 101, a material estimation unit 103, an AR display control unit 104, and an AR glass 105.
  • the distance measuring system 1 of FIG. 20 has a configuration in which an AR display control unit 104 and an AR glass 105 are added to the distance measuring system 1 of FIG.
  • the ranging system 1 of FIG. 20 is a system that uses the material information estimated by the material estimation unit 103 for the superimposed display of the AR glass 105 that superimposes characters and images on the landscape in the real world.
  • FIG. 21 shows a display example of the AR glass 105 in the third application example.
  • the materials of the floor 121 and the floor 122 in the real world reflected in the user's view through the AR glass 105 are estimated by the material estimation unit 103 to be polyethylene and polypropylene, respectively.
  • the AR display control unit 104 superimposes and displays the estimated material name on the areas of the floor 121 and the floor 122 of the AR glass 105.
  • FIG. 22 shows another display example of the AR glass 105 in the third application example.
  • the AR display control unit 104 superimposes and displays the robots (images) 131 and 132 on the landscape in the real world.
  • At this time, the AR display control unit 104 displays the robots 131 and 132 on the AR glass 105 so that they move only on the area of the floor 121 that is estimated to be polyethylene.
  • the AR display control unit 104 of FIG. 20 generates an image to be displayed (superimposed display) on the AR glass 105 based on the material supplied from the material estimation unit 103, and supplies the image to the AR glass 105.
  • the image displayed on the AR glass 105 by the AR display control unit 104 is a character representing a material name such as polyethylene or polypropylene.
  • the images displayed on the AR glass 105 by the AR display control unit 104 are the robots 131 and 132.
  • the AR glass 105 displays an image supplied from the AR display control unit 104 on a predetermined display unit.
  • FIG. 23 is a flowchart of the AR display process for performing the AR display described with reference to FIG. 22 in the third application example.
  • In step S31, the distance measuring module 11 measures the distance to the subject in the normal drive mode, which is the first drive mode.
  • the four-phase raw data P 0 , P 90 , P 180 , and P 270 obtained by the measurement are supplied to the signal processing device 101.
  • In step S32, the signal processing device 101 inputs the four-phase raw data P0, P90, P180, and P270 supplied from the ranging sensor 24 into the predictor, generates the impulse response data of the reflected light in the time direction as the prediction result, and outputs it to the material estimation unit 103.
  • In step S33, the material estimation unit 103 estimates the material in the real world seen in the user's field of view from the impulse response data of the reflected light in the time direction supplied from the signal processing device 101, and outputs the result to the AR display control unit 104.
  • The correspondence between the user's field of view through the AR glass 105 and the measurement area of the distance measuring module 11 is established by calibration or the like.
  • step S34 the AR display control unit 104 determines the moving directions of the robots 131 and 132, and in step S35, determines whether the determined moving directions are the regions in which the robots 131 and 132 can move. For example, when moving only on the region of the floor 121 presumed to be polyethylene as in the above-mentioned example, it is determined whether the determined moving direction is on the region of the floor 121.
  • step S35 If it is determined in step S35 that the determined moving direction is not the area in which the robots 131 and 132 can move, the process returns to step S34, and the moving directions of the robots 131 and 132 are determined again.
  • If it is determined in step S35 that the determined moving direction is within the area in which the robots 131 and 132 can move, the process proceeds to step S36, and the AR display control unit 104 moves the robots 131 and 132 in the determined moving direction.
  • After step S36, the process returns to step S31, and the processes of steps S31 to S36 described above are repeated.
  • When an instruction to end is given, the AR display process of FIG. 23 ends.
  • the material estimation result by the material estimation unit 103 was used for the superimposed display of the AR glass 105, but it can also be used for the movement control of the autonomously moving robot.
  • FIG. 24 shows an application example in which the material estimation result by the material estimation unit 103 is used for the movement control of the autonomously moving robot.
  • The robot 141 of FIG. 24 has an image sensor and the distance measuring module 11 as visual sensors, and moves autonomously based on information such as captured images and the distance, measured by the distance measuring module 11, to objects existing in the traveling direction.
  • the robot 141 includes the distance measuring module 11, the signal processing device 101, the material estimation unit 103, and the movement control unit that controls the movement, as described above.
  • the movement control unit determines the movement direction based on the materials of the floor 142 and the floor 143 supplied from the material estimation unit 103. For example, it is estimated that the floor 142 is a soft material such as a carpet that becomes unstable when the robot 141 touches the ground, and the floor 143 is a hard material such as a tile that is stable when the robot 141 touches the ground. In this case, the movement control unit controls the movement so as to move the area of the floor 143 without moving the area of the floor 142.
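As an illustration, the movement decision described above reduces to checking the estimated material of candidate destination cells. The grid representation, material labels, and selection rule below are assumptions for the sketch, not details from the patent.

```python
def choose_direction(candidates, material_map, stable=("tile",)):
    """Pick a movement direction whose destination cell lies on a material the
    robot can stand on. The grid representation, material labels, and this
    selection rule are illustrative assumptions, not from the patent."""
    for direction, cell in candidates.items():
        if material_map.get(cell) in stable:  # e.g. tile: stable, carpet: unstable
            return direction
    return None                               # no traversable direction found

# Floor 142 (carpet) should be avoided; floor 143 (tile) is traversable.
material_map = {(0, 1): "carpet", (1, 0): "tile"}
print(choose_direction({"forward": (0, 1), "left": (1, 0)}, material_map))  # -> left
```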
  • If, in step S33 of the AR display process described with reference to FIG. 23, the material estimation unit 103 estimates the material in the robot's field of view instead of the user's field of view, and if, in steps S34 to S36, the movement control unit, instead of the AR display control unit 104, determines and controls the moving direction, then the control described with reference to FIG. 24 becomes possible.
  • the robot 141 can select and move a stable ground on which the posture can be easily maintained.
  • the series of processes described above can be executed by hardware or by software.
  • the programs constituting the software are installed on the computer.
  • the computer includes a microcomputer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 25 is a block diagram showing a configuration example of the hardware of a computer that executes the above-described series of processes of the signal processing device 16, in the subsequent stage of the ranging module 11, by means of a program.
  • In the computer, a CPU (Central Processing Unit) 201, a ROM (Read Only Memory) 202, and a RAM (Random Access Memory) 203 are interconnected by a bus 204.
An input/output interface 205 is further connected to the bus 204. An input unit 206, an output unit 207, a storage unit 208, a communication unit 209, and a drive 210 are connected to the input/output interface 205.

The input unit 206 includes a keyboard, a mouse, a microphone, a touch panel, an input terminal, and the like. The output unit 207 includes a display, a speaker, an output terminal, and the like. The storage unit 208 includes a hard disk, a RAM disk, a non-volatile memory, and the like. The communication unit 209 includes a network interface and the like. The drive 210 drives a removable recording medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 201 loads a program stored in the storage unit 208 into the RAM 203 via the input/output interface 205 and the bus 204 and executes it, whereby the series of processes described above is performed. The RAM 203 also appropriately stores data and the like necessary for the CPU 201 to execute the various processes.
The program executed by the computer can be recorded and provided on the removable recording medium 211 as a package medium or the like, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.

In the computer, the program can be installed in the storage unit 208 via the input/output interface 205 by mounting the removable recording medium 211 in the drive 210. The program can also be received by the communication unit 209 via a wired or wireless transmission medium and installed in the storage unit 208. In addition, the program can be pre-installed in the ROM 202 or the storage unit 208.
In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.

Further, the present technology can have a cloud computing configuration in which one function is shared by a plurality of devices via a network and processed jointly.

Each step described in the above flowcharts can be executed by one device or shared and executed by a plurality of devices.
When one step includes a plurality of processes, the plurality of processes included in the one step can be executed by one device or shared and executed by a plurality of devices.
Note that the present technology can have the following configurations.
(1) A signal processing device including:
a teacher data generation unit that generates, as teacher data, impulse response data in the time direction of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light that is irradiation light reflected by the subject and returned; and
a learning unit that calculates model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the impulse response data in the time direction of the reflected light.
(2) The signal processing device according to (1), further including an estimation unit that inputs light reception data of the distance measuring sensor in the first drive mode to the learning model using the model parameters calculated by the learning unit, and outputs impulse response data in the time direction of the reflected light.
(3) The signal processing device according to (1) or (2), in which the teacher data generation unit generates, as the teacher data, the impulse response data in the time direction of the reflected light with respect to the subject from a plurality of pieces of light reception data obtained when the distance measuring sensor in the second drive mode receives the reflected light a plurality of times while changing the phase difference with respect to the irradiation light.
(4) The signal processing device according to (3), in which the teacher data generation unit generates, as the teacher data, the impulse response data in the time direction of the reflected light by performing, on the plurality of pieces of light reception data obtained when the distance measuring sensor in the second drive mode receives the reflected light a plurality of times while changing the phase difference with respect to the irradiation light, convolution with the inverse function of the exposure function indicating the exposure timing.
(5) The signal processing device according to any one of (1) to (4), in which, in the second drive mode, the irradiation light is emitted based on an impulse-function-shaped light emission control signal.
(6) The signal processing device according to any one of (1) to (5), in which the learning unit calculates the model parameters of the learning model using, as the teacher data, the impulse response data in the time direction of the reflected light obtained in the second drive mode, and using, as student data, the light reception data in the first drive mode.
(7) The signal processing device according to any one of (1) to (6), in which the learning model of the learning unit is composed of a convolutional neural network.
(8) The signal processing device according to any one of (1) to (7), in which the learning unit calculates the model parameters of the learning model using, as the student data, light reception data obtained by the distance measuring sensor with a plurality of emission frequencies set in the first drive mode.
(9) The signal processing device according to any one of (1) to (8), further including a distance measuring module including:
the distance measuring sensor;
a light emitting source that emits the irradiation light;
a light emission control unit that supplies a light emission control signal for controlling the light emitting source; and
a delay generation unit that generates, from the light emission control signal, an exposure control signal for controlling the exposure timing of the distance measuring sensor.
(10) A signal processing method in which a signal processing device:
generates, as teacher data, impulse response data in the time direction of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light that is irradiation light reflected by the subject and returned; and
calculates model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the impulse response data in the time direction of the reflected light.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention relates to a signal processing device and signal processing method, which make it possible to obtain impulse response data regarding the temporal direction of reflected light in a simpler manner. This signal processing device comprises: a training data generation unit in which, on the basis of received-light data outputted in a second drive mode by a distance measurement sensor which, in either a first drive mode or the second drive mode, receives reflected light, which is radiated light that has reflected off a subject and returned, impulse response data regarding the temporal direction of the light reflected off the subject is generated as training data; and a learning unit which calculates a model parameter for a learning model that learns a relationship between received-light data in the first drive mode and impulse response data regarding the temporal direction of the reflected light. The present invention can be applied to a distance measurement system, for example.

Description

Signal processing device and signal processing method
The present technology relates to a signal processing device and a signal processing method, and more particularly to a signal processing device and a signal processing method that make it possible to obtain impulse response data in the time direction of reflected light more easily.
In recent years, some mobile terminals such as smartphones have been equipped with a distance measuring module. As a distance measuring method used in the distance measuring module, there is, for example, the Indirect ToF (Time of Flight) method. In the Indirect ToF method, light is emitted toward an object, and the reflected light returned from the surface of the object is detected. The distance to the object is then calculated based on a measured value of the flight time of the reflected light.
Exposure control in an Indirect ToF sensor is realized, for example, by using an image sensor provided with two charge storage units per pixel and distributing the photoelectrically converted charge to the two charge storage units at high speed.
On the other hand, Patent Document 1 and Non-Patent Document 1 propose sensors that have four or eight charge storage units per pixel and acquire the impulse response of the reflected light in the time direction over several tens of nanoseconds.
Non-Patent Documents 2 to 4 propose techniques of inputting the output of an Indirect ToF sensor to a neural network, reducing depth estimation errors mainly caused by multiple reflections, and outputting accurate depth information.
JP-A-2018-33008
With the techniques of Patent Document 1 and Non-Patent Document 1, in order to obtain the impulse response of the reflected light in the time direction, information is acquired in a time-division manner while changing many parameters, so even capturing the data takes a long time, and the user cannot perform measurement easily. Moreover, since the techniques are applicable only to completely stationary objects, they are not suitable for use in general environments or for consumer use.
The techniques of Non-Patent Documents 2 to 4 have the effect of reducing depth estimation errors due to multiple reflections, but since their output is a depth map, the impulse response of the reflected light in the time direction cannot be obtained.
The present technology has been made in view of such circumstances, and makes it possible to obtain impulse response data in the time direction of reflected light more easily.
A signal processing device according to one aspect of the present technology includes: a teacher data generation unit that generates, as teacher data, impulse response data in the time direction of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light that is irradiation light reflected by the subject and returned; and a learning unit that calculates model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the impulse response data in the time direction of the reflected light.
In a signal processing method according to one aspect of the present technology, a signal processing device generates, as teacher data, impulse response data in the time direction of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light that is irradiation light reflected by the subject and returned, and calculates model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the impulse response data in the time direction of the reflected light.
In one aspect of the present technology, impulse response data in the time direction of reflected light with respect to a subject is generated as teacher data from light reception data output in the second drive mode by a distance measuring sensor that receives, in the first drive mode or the second drive mode, reflected light that is irradiation light reflected by the subject and returned, and model parameters of a learning model that learns the relationship between light reception data in the first drive mode and the impulse response data in the time direction of the reflected light are calculated.
The signal processing device according to one aspect of the present technology can be realized by causing a computer to execute a program. The program to be executed by the computer can be provided by being transmitted via a transmission medium or by being recorded on a recording medium.
The signal processing device may be an independent device or an internal block constituting one device.
FIG. 1 is a block diagram showing a configuration example of an embodiment of a distance measuring system to which the present technology is applied.
FIG. 2 is a diagram explaining impulse response data in the time direction.
FIG. 3 is a diagram explaining impulse response data in the time direction.
FIG. 4 is a block diagram showing a detailed configuration example of the distance measuring module.
FIGS. 5 to 10 are diagrams explaining the operation of the distance measuring sensor.
FIG. 11 is a block diagram showing a detailed configuration example of the teacher data generation unit.
FIG. 12 is a block diagram showing a detailed configuration example of the learning unit.
FIG. 13 is a flowchart explaining a learning process for learning a prediction model that predicts impulse response data in the time direction.
FIG. 14 is a flowchart explaining a prediction process of the distance measuring system.
FIG. 15 is a diagram explaining a modification of the distance measuring system.
FIG. 16 is a block diagram showing a first application example of the distance measuring system.
FIG. 17 is a diagram explaining the first application example of the distance measuring system.
FIG. 18 is a block diagram showing a second application example of the distance measuring system.
FIG. 19 is a diagram explaining the second application example of the distance measuring system.
FIG. 20 is a block diagram showing a third application example of the distance measuring system.
FIGS. 21 and 22 are diagrams explaining the third application example of the distance measuring system.
FIG. 23 is a flowchart explaining the AR display process of the third application example.
FIG. 24 is a diagram explaining an application example of movement control of an autonomously moving robot.
FIG. 25 is a block diagram showing a configuration example of an embodiment of a computer to which the present technology is applied.
Hereinafter, modes for carrying out the present technology (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. Configuration example of the distance measuring system
2. Processing overview of the distance measuring system
3. Detailed configuration example of the distance measuring module
4. Detailed configuration example of the teacher data generation unit
5. Detailed configuration example of the learning unit
6. Processing of the distance measuring system
7. Modification
8. Application examples
9. Computer application example
<1. Configuration example of distance measuring system>

FIG. 1 is a block diagram showing a configuration example of an embodiment of a distance measuring system to which the present technology is applied.
The distance measuring system 1 of FIG. 1 includes a distance measuring module 11, a teacher data generation unit 12, a learning unit 13, an estimation unit 14, and a control unit 15.
The distance measuring module 11 is an Indirect ToF type distance measuring module that receives reflected light, which is irradiation light reflected by a predetermined object as a subject and returned, and outputs light reception data corresponding to distance information to the subject. The light reception data output by the distance measuring module 11 is also referred to as raw data.
The distance measuring module 11 has a first drive mode and a second drive mode as drive modes, operates in the drive mode specified by the control unit 15, and outputs raw data. The first drive mode is a drive mode in which, as in a general Indirect ToF distance measuring module, the reflected light is received at light reception timings whose phase is shifted by 0°, 90°, 180°, or 270° with respect to the irradiation timing of the irradiation light; hereinafter, it is also referred to as the normal drive mode. The second drive mode is a drive mode in which the reflected light is received at light reception timings whose phase is shifted in 1° steps, for example from 0° to 179°, with respect to the irradiation timing of the irradiation light; hereinafter, it is also referred to as the special drive mode.
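As a minimal sketch of the two exposure-phase schedules described above (the constant names are illustrative assumptions, not identifiers from the present disclosure):

    # Exposure phases used in each drive mode (illustrative).
    NORMAL_PHASES_DEG = [0, 90, 180, 270]       # first (normal) drive mode: 4 phases
    SPECIAL_PHASES_DEG = list(range(0, 180))    # second (special) drive mode: 1-degree steps

    print(len(NORMAL_PHASES_DEG), len(SPECIAL_PHASES_DEG))  # 4 180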
When the distance measuring module 11 operates in the first drive mode (normal drive mode), the distance measuring module 11 supplies the raw data generated by receiving the reflected light to the learning unit 13 or the estimation unit 14. On the other hand, when the distance measuring module 11 operates in the second drive mode (special drive mode), it supplies the raw data generated by receiving the reflected light to the teacher data generation unit 12.
The teacher data generation unit 12 acquires the raw data of the second drive mode from the distance measuring module 11, generates, from the acquired raw data of the second drive mode, impulse response data in the time direction of the reflected light with respect to the subject as teacher data, and supplies it to the learning unit 13.
The learning unit 13 acquires the light reception data in the first drive mode supplied from the distance measuring module 11 and the impulse response data in the time direction of the reflected light supplied from the teacher data generation unit 12, and learns the relationship between the light reception data in the first drive mode and the impulse response data in the time direction of the reflected light. More specifically, the learning unit 13 has a learner that uses, for example, a convolutional neural network (hereinafter referred to as CNN) as a learning model, and calculates the parameters (model parameters) of the CNN that takes the light reception data in the first drive mode as input and predicts the impulse response data in the time direction, using the impulse response data in the time direction of the reflected light obtained in the second drive mode as teacher data and the light reception data in the first drive mode as student data. The calculated model parameters are supplied to the estimation unit 14.
The estimation unit 14 includes a predictor that executes prediction processing using the model parameters obtained by the learner of the learning unit 13. That is, the estimation unit 14 uses the learning model (CNN) with the model parameters calculated by the learning unit 13 as a prediction model, inputs the light reception data of the first drive mode supplied from the distance measuring module 11 to the prediction model, and outputs impulse response data in the time direction of the reflected light as the prediction result.
In the present embodiment, a CNN is adopted as the model used for the machine learning learner and predictor, but other neural networks or other algorithms may also be adopted. That is, the machine learning model and algorithm are not limited.
The control unit 15 controls the distance measuring module 11, the teacher data generation unit 12, the learning unit 13, and the estimation unit 14 based on, for example, an operation control signal based on a user's operation from the operation unit of a device in which the distance measuring system 1 is incorporated, or an operation control signal from a higher-level control unit of the device in which the distance measuring system 1 is incorporated.
Specifically, in the learning mode in which the learning unit 13 learns the model parameters, the control unit 15 drives the distance measuring module 11 sequentially in the first drive mode and the second drive mode, and causes the raw data to be output to the teacher data generation unit 12 or the learning unit 13. In the prediction mode after the model parameters have been learned, the control unit 15 drives the distance measuring module 11 in the first drive mode and causes the raw data to be output to the estimation unit 14.
The teacher data generation unit 12, the learning unit 13, the estimation unit 14, and the control unit 15, which process the raw data output from the distance measuring module 11 at the subsequent stage, can be configured as one signal processing device 16 such as a DSP (Digital Signal Processor) or an ISP (Image Signal Processor). Of course, each of the teacher data generation unit 12, the learning unit 13, the estimation unit 14, and the control unit 15 may be configured as an individual DSP or ISP.
<2. Processing overview of distance measuring system>

The impulse response data in the time direction of the reflected light output by the distance measuring system 1 will be described with reference to FIGS. 2 and 3.
In the case of the 4-Phase method, for example, a general Indirect ToF sensor receives reflected light at timings whose phase is shifted by 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light, and outputs the resulting raw data P0, P90, P180, and P270. Using the raw data P0, P90, P180, and P270, the distance to the object can be calculated by performing a predetermined calculation described later.
Incidentally, the reflected light received by the ToF sensor includes, as shown by the solid line in FIG. 2, light that directly illuminates the object to be measured and is reflected back, and, as shown by the broken line in FIG. 2, light that first illuminates another object and then reaches the object to be measured, or light reflected by the object to be measured that is further reflected by another object. Therefore, if the resolution in the time direction is increased, light reception data with a profile in which light is received instantaneously at predetermined timings, as shown in the waveform of FIG. 2, is obtained.
The light reception data of the profile of FIG. 2 is impulse response data in the time direction of the reflected light, obtained by making the irradiation light an impulse emission with a very short emission time and also shortening the accumulation time of the ToF sensor. Such impulse response data of the reflected light in the time direction can in principle be obtained in this way, but it is practically difficult to perform such driving in an actual usage environment.
Therefore, as shown in FIG. 3, the estimation unit 14 of the distance measuring system 1 takes the four-phase raw data P0, P90, P180, and P270 output by the ToF sensor as input, predicts the impulse response data of the reflected light in the time direction, and outputs it as the prediction result. The impulse response data in the time direction of the reflected light is data represented in a total of three dimensions: two spatial dimensions corresponding to the pixel array of the ToF sensor and one time dimension.
Of course, the estimation unit 14 can output not only the impulse response data of the reflected light in the time direction but also the four-phase raw data P0, P90, P180, and P270, like a general ToF sensor.
The learning unit 13 learns the model parameters of the CNN serving as the predictor with which the estimation unit 14 predicts the impulse response data in the time direction of the reflected light.
The teacher data generation unit 12 generates the impulse response data in the time direction of the reflected light, which serves as the teacher data for the learning unit 13 to learn.
<3. Detailed configuration example of distance measuring module>

FIG. 4 is a block diagram showing a detailed configuration example of the distance measuring module 11.
The distance measuring module 11 includes a light emitting source 21, a light emission control unit 22, a delay generation unit 23, and a distance measuring sensor 24.
The light emitting source 21 has, for example, an infrared laser diode or the like as a light source, emits light at timings corresponding to a light emission control signal supplied from the light emission control unit 22, and irradiates an object with irradiation light. The light emission control signal is composed of, for example, a rectangular-wave pulse signal that becomes high at a predetermined period, or an impulse signal that becomes high momentarily for a short time at a predetermined period.
The light emission control unit 22 controls the light emitting source 21 by supplying a light emission control signal of a predetermined frequency (for example, 20 MHz) to the light emitting source 21. The light emission control unit 22 also supplies the light emission control signal to the delay generation unit 23 in order to drive the distance measuring sensor 24 in accordance with the light emission timing of the light emitting source 21.
The delay generation unit 23 generates an exposure control signal delayed by a predetermined delay amount with respect to the timing (irradiation timing) indicated by the light emission control signal from the light emission control unit 22, and supplies it to the distance measuring sensor 24. The delay amount is supplied from the storage unit 61 (FIG. 11) of the teacher data generation unit 12.
The distance measuring sensor 24 receives the reflected light from the object based on the exposure control signal supplied from the delay generation unit 23. When the exposure control signal supplied from the delay generation unit 23 is delayed by a predetermined phase difference (delay amount) with respect to the light emission control signal supplied to the light emitting source 21, the distance measuring sensor 24 receives light at a timing shifted by the predetermined phase difference with respect to the irradiation timing of the light emitting source 21.
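Since the phase difference and the delay amount are two expressions of the same shift, a phase of θ degrees at modulation frequency f corresponds to a delay of (θ/360)/f seconds. A minimal sketch (the helper name is hypothetical):

    def phase_to_delay(phase_deg: float, modulation_freq_hz: float) -> float:
        """Delay of the exposure control signal relative to the emission timing.
        One modulation period (1 / f) corresponds to a 360-degree phase shift."""
        return (phase_deg / 360.0) / modulation_freq_hz

    # Example: at 20 MHz modulation, a 90-degree phase difference is 12.5 ns.
    print(phase_to_delay(90.0, 20e6))  # 1.25e-08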
The distance measuring sensor 24 is an Indirect ToF sensor. It has the normal drive mode (first drive mode) and the special drive mode (second drive mode) as operation modes, operates in the drive mode specified by the control unit 15 (FIG. 1), and outputs raw data. The driving of the distance measuring sensor 24 itself is the same in the normal drive mode and the special drive mode, but the exposure control signal differs. Specifically, in the normal drive mode, the exposure control signal supplied from the delay generation unit 23 is a signal whose phase is shifted by 0°, 90°, 180°, or 270° with respect to the light emission control signal output by the light emission control unit 22. In the special drive mode, the exposure control signal supplied from the delay generation unit 23 is a signal whose phase is shifted by one of the 1° steps from 0° to 179° with respect to the light emission control signal output by the light emission control unit 22.
When the normal drive mode is specified as the drive mode, the distance measuring sensor 24 receives the reflected light at light reception timings whose phase is shifted by 0°, 90°, 180°, or 270° with respect to the irradiation timing of the irradiation light, and supplies the resulting raw data of phase differences 0°, 90°, 180°, and 270° to the learning unit 13 or the estimation unit 14.
When the special drive mode is specified as the drive mode, the distance measuring sensor 24 receives the reflected light at light reception timings whose phase is changed in 1° steps from 0° to 179° with respect to the irradiation timing of the irradiation light, and supplies the resulting raw data of phase differences 0° to 179° to the storage unit 61 of the teacher data generation unit 12.
Now, considering the raw data of a predetermined phase difference output by the distance measuring sensor 24 as an image in which the luminance value of the reflected light is stored as a pixel value, the distance measuring sensor 24 outputs raw images of four phases (four types), 0°, 90°, 180°, and 270°, in the normal drive mode. On the other hand, in the special drive mode, the distance measuring sensor 24 outputs raw images of 180 phases (180 types), 0° to 179°.
<Details of distance measuring sensor>

Next, the details of the distance measuring sensor 24 will be described with reference to FIGS. 5 to 10.
As shown in FIG. 5, the distance measuring sensor 24 has a pixel array unit 32 in which pixels 31 are two-dimensionally arranged in a matrix in the row and column directions, and a drive control circuit 33 arranged in the peripheral region of the pixel array unit 32. The pixel 31 generates a charge corresponding to the amount of reflected light received, and outputs a signal corresponding to that charge.
The drive control circuit 33 outputs control signals for controlling the driving of the pixels 31 (for example, a distribution signal DIMIX, a selection signal ADDRESS DECODE, and a reset signal RST, described later) based on, for example, the exposure control signal supplied from the delay generation unit 23.
As shown in FIG. 5, the pixel 31 has a photodiode 51, and a first tap 52A and a second tap 52B that detect the charge photoelectrically converted by the photodiode 51. In the pixel 31, the charge generated by the single photodiode 51 is distributed to the first tap 52A or the second tap 52B. Of the charge generated by the photodiode 51, the charge distributed to the first tap 52A is output from a signal line 53A as a detection signal A, and the charge distributed to the second tap 52B is output from a signal line 53B as a detection signal B.
The first tap 52A is composed of a transfer transistor 41A, an FD (Floating Diffusion) unit 42A, a selection transistor 43A, and a reset transistor 44A. Similarly, the second tap 52B is composed of a transfer transistor 41B, an FD unit 42B, a selection transistor 43B, and a reset transistor 44B.
In the present embodiment, as shown in FIG. 5, the pixel 31 adopts a two-tap pixel structure having two charge distribution units, the first tap 52A and the second tap 52B, but the present technology can also be realized with one tap, four taps, and so on.
The operation of the pixel 31 will be described.
From the light emitting source 21, as shown in FIG. 6 for example, irradiation light modulated so as to repeat irradiation on/off with an irradiation time T (one period = 2T) is output, and the reflected light is received by the photodiode 51 with a delay time ΔT corresponding to the distance to the object. A distribution signal DIMIX_A controls on/off of the transfer transistor 41A, and a distribution signal DIMIX_B controls on/off of the transfer transistor 41B. The distribution signal DIMIX_A is a signal in the same phase as the irradiation light, and the distribution signal DIMIX_B has a phase obtained by inverting the distribution signal DIMIX_A.
Therefore, in FIG. 6, the charge generated when the photodiode 51 receives the reflected light is transferred to the FD unit 42A while the transfer transistor 41A is on according to the distribution signal DIMIX_A, and is transferred to the FD unit 42B while the transfer transistor 41B is on according to the distribution signal DIMIX_B. As a result, during a predetermined period in which irradiation with the irradiation time T is performed periodically, the charge transferred via the transfer transistor 41A is sequentially accumulated in the FD unit 42A, and the charge transferred via the transfer transistor 41B is sequentially accumulated in the FD unit 42B.
Then, after the charge accumulation period ends, when the selection transistor 43A is turned on according to a selection signal ADDRESS DECODE_A, the charge accumulated in the FD unit 42A is read out via the signal line 53A, and the detection signal A corresponding to that charge amount is output from the distance measuring sensor 24. Similarly, when the selection transistor 43B is turned on according to a selection signal ADDRESS DECODE_B, the charge accumulated in the FD unit 42B is read out via the signal line 53B, and the detection signal B corresponding to that charge amount is output from the distance measuring sensor 24. The charge accumulated in the FD unit 42A is discharged when the reset transistor 44A is turned on according to a reset signal RST_A, and the charge accumulated in the FD unit 42B is discharged when the reset transistor 44B is turned on according to a reset signal RST_B.
In this way, the pixel 31 distributes the charge generated by the reflected light received by the photodiode 51 to the first tap 52A or the second tap 52B according to the delay time ΔT, and outputs the detection signal A and the detection signal B. Data obtained by AD-converting the detection signal A and the detection signal B into digital signals corresponds to the raw data described above. The delay time ΔT corresponds to the time for the light emitted from the light emitting source 21 to fly to the object, be reflected by the object, and then fly to the distance measuring sensor 24, that is, it corresponds to the distance to the object. Therefore, the distance measuring module 11 can obtain the distance (depth value) to the object from the delay time ΔT based on the detection signal A and the detection signal B.
In the normal drive mode, as shown in FIG. 7, the distance measuring sensor 24 receives the reflected light at light reception timings whose phase is shifted by 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light, and outputs raw data of four phases (four types).
As shown in FIG. 8, the detection signal A obtained at the first tap 52A by receiving light in the same phase as the irradiation light (phase 0°) is called detection signal A0, the detection signal A obtained by receiving light in a phase shifted by 90 degrees from the irradiation light (phase 90°) is called detection signal A90, the detection signal A obtained by receiving light in a phase shifted by 180 degrees (phase 180°) is called detection signal A180, and the detection signal A obtained by receiving light in a phase shifted by 270 degrees (phase 270°) is called detection signal A270.
Similarly, the detection signal B obtained at the second tap 52B by receiving light in the same phase as the irradiation light (phase 0°) is called detection signal B0, the detection signal B obtained in a phase shifted by 90 degrees (phase 90°) is called detection signal B90, the detection signal B obtained in a phase shifted by 180 degrees (phase 180°) is called detection signal B180, and the detection signal B obtained in a phase shifted by 270 degrees (phase 270°) is called detection signal B270.
FIG. 8 is a diagram explaining the method of calculating the depth value and the reliability by the 4-Phase method.
In the Indirect ToF method, the depth value d corresponding to the distance to the object can be obtained by the following equation (1):

d = c × ΔT / 2 = c × φ / (4πf)   ・・・(1)

In equation (1), c is the speed of light, ΔT is the delay time, and f is the modulation frequency of the light. φ in equation (1) represents the phase shift amount [rad] of the reflected light, and is expressed by the following equation (2):

φ = arctan(Q / I)   ・・・(2)
In the 4-Phase method, I and Q of equation (2) are calculated by the following equation (3), using the detection signals A0 to A270 and the detection signals B0 to B270 obtained by setting the phase to 0°, 90°, 180°, and 270°. I and Q are signals obtained by assuming that the luminance change of the irradiation light is a cosine wave and converting the phase of the cosine wave from polar coordinates to a Cartesian coordinate system (IQ plane).

I = c0 − c180 = (A0 − B0) − (A180 − B180)
Q = c90 − c270 = (A90 − B90) − (A270 − B270)   ・・・(3)
Therefore, by outputting (the digital values of) the detection signals A0 and B0 obtained by receiving light at phase 0° as raw data of phase 0°, the detection signals A90 and B90 obtained at phase 90° as raw data of phase 90°, the detection signals A180 and B180 obtained at phase 180° as raw data of phase 180°, and the detection signals A270 and B270 obtained at phase 270° as raw data of phase 270°, the distance to the object can be calculated.
Further, the reliability cnf can be obtained by the following equation (4):

cnf = √(I² + Q²)   ・・・(4)
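A minimal numerical sketch of equations (1) to (4) follows (function and variable names are illustrative assumptions; atan2 is used as a quadrant-aware form of arctan(Q/I)):

    import math

    C = 299_792_458.0  # speed of light [m/s]

    def depth_and_confidence(a0, b0, a90, b90, a180, b180, a270, b270, f):
        """Depth d and reliability cnf from the 4-phase detection signals,
        following equations (1) to (4); f is the modulation frequency [Hz]."""
        i = (a0 - b0) - (a180 - b180)            # equation (3): I = c0 - c180
        q = (a90 - b90) - (a270 - b270)          # equation (3): Q = c90 - c270
        phi = math.atan2(q, i) % (2 * math.pi)   # equation (2): phase shift [rad]
        d = C * phi / (4 * math.pi * f)          # equation (1): depth value
        cnf = math.hypot(i, q)                   # equation (4): reliability
        return d, cnf

    # Example with synthetic detection-signal values at 20 MHz modulation.
    print(depth_and_confidence(100, 60, 90, 70, 60, 100, 70, 90, 20e6))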
FIG. 9 is a conceptual diagram of the light emission control signal, the exposure control signal, and the raw data when the drive mode is the normal drive mode.
In the normal drive mode, as shown in FIG. 9, the light emission control signal is, for example, a rectangular-wave pulse signal that becomes high at a predetermined period. The exposure control signal is a rectangular-wave pulse signal whose phase is shifted by 0°, 90°, 180°, or 270° with respect to the irradiation timing indicated by the light emission control signal.
Then, in the normal drive mode, the distance measuring sensor 24 outputs the detection signals A0 and B0 of phase 0° as raw data P0 of phase 0°, the detection signals A90 and B90 of phase 90° as raw data P90 of phase 90°, the detection signals A180 and B180 of phase 180° as raw data P180 of phase 180°, and the detection signals A270 and B270 of phase 270° as raw data P270 of phase 270°.
FIG. 10 is a conceptual diagram of the light emission control signal, the exposure control signal, and the raw data when the drive mode is the special drive mode.
In the special drive mode, as shown in FIG. 10, the light emission control signal is, for example, an impulse-function-like signal that becomes high momentarily for a short time at a predetermined period. The light emission control signal can be expressed as a periodic function δ(t) that becomes an impulse only when the phase is 0, as in the following equation (5).
δ(t) = 1 (t = n/f)
δ(t) = 0 (otherwise)   ・・・(5)

Here, f in equation (5) is the emission frequency, t is time, and n is an integer of 0 or more. The periodic function δ(t) is hereinafter referred to as the emission function δ(t).
On the other hand, the exposure control signal is a rectangular-wave pulse signal whose phase is shifted in 1° steps from 0° to 179° with respect to the irradiation timing indicated by the light emission control signal. The exposure period T is set the same as in the normal drive mode.
Then, in the special drive mode, the distance measuring sensor 24 outputs the detection signals A0 and B0 of phase 0° as raw data P0 of phase 0°, the detection signals A1 and B1 of phase 1° as raw data P1 of phase 1°, the detection signals A2 and B2 of phase 2° as raw data P2 of phase 2°, ..., and the detection signals A179 and B179 of phase 179° as raw data P179 of phase 179°.
<4. Detailed configuration example of teacher data generation unit>

FIG. 11 is a block diagram showing a detailed configuration example of the teacher data generation unit 12.
The teacher data generation unit 12 is composed of a storage unit 61 and a calculation unit 62, and operates when the drive mode is the special drive mode.
The storage unit 61 sets the delay amount with respect to the irradiation timing indicated by the light emission control signal, that is, the phase difference, and supplies it to the delay generation unit 23 of the distance measuring module 11. The storage unit 61 sets the phase difference in 1° steps from 0° to 179°, and stores the raw data P0 to P179 of a total of 180 phases sequentially supplied from the distance measuring sensor 24 as the phase difference is set from 0° to 179°. When the raw data P0 to P179 of all 180 phases have been accumulated, the storage unit 61 supplies them to the calculation unit 62.
The calculation unit 62 generates, as the teacher data, impulse response data in the time direction of the reflected light with respect to the subject from the raw data P0 to P179 of a total of 180 phases supplied from the storage unit 61.
Here, let δ(t) be the function representing the known light emission control signal (the emission function), E(t − τ) the function representing the known exposure control signal (the exposure function), and r(t) the unknown impulse response data in the time direction of the reflected light with respect to the object to be measured. Then the luminance value Iτ(x, y) of a predetermined pixel (x, y) acquired as raw data can be expressed by the following equation (6):

Iτ(x, y) = ∫₀ᵀ (δ * r)(t) E(t − τ) dt   ・・・(6)

In equation (6), T is the exposure time, * is the operator representing convolution, and τ represents the phase difference (delay amount) ΔT between the irradiation timing of the light emission control signal and the exposure timing of the exposure control signal.
For a predetermined pixel (x, y), let Fxy(τ) denote the variation of the luminance values of the raw data P0 to P179 of a total of 180 phases obtained by receiving light while changing the phase difference from 0° to 179°. The function Fxy(τ) can then be expressed by equation (7):

Fxy(τ) = Iτ(x, y) = ∫₀ᵀ (δ * r)(t) E(t − τ) dt   ・・・(7)

Calling this function Fxy(τ) the correlation function, the correlation function Fxy(τ) can be said to be the unknown impulse response data r(t) of the reflected light in the time direction "blurred" by the exposure function E(t − τ), which has a finite width.
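To make this "blurring" concrete, the following sketch simulates the correlation function Fxy(τ) for one pixel from a synthetic impulse response r(t) and a rectangular exposure function E, on a discretized time grid (one sample per degree). The grid size, the box-shaped E, and the two-path r(t) are illustrative assumptions, and the ideal impulse emission is used so that δ * r reduces to r.

    import numpy as np

    N = 360                    # samples per modulation period (1 sample = 1 degree)

    # Synthetic unknown impulse response r(t): a direct return plus a weaker,
    # later multipath return.
    r = np.zeros(N)
    r[40] = 1.0
    r[90] = 0.3

    # Rectangular exposure function E with a finite width (half a period here),
    # since the exposure period T is kept the same as in the normal drive mode.
    E = np.zeros(N)
    E[:180] = 1.0

    # Equation (7) discretized: F(tau) = sum_t r(t) * E(t - tau); the impulses
    # in r are smeared ("blurred") over the width of E (signals treated as periodic).
    F = np.array([np.sum(r * np.roll(E, tau)) for tau in range(N)])

    print(F.shape)  # (360,) -- one correlation sample per phase difference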
Therefore, by performing a deconvolution operation on the correlation function Fxy(τ) with the exposure function E(t − τ) as the blur function, the correlation function can be converted into the impulse response data rxy(t) in the time direction of the reflected light for the predetermined pixel (x, y).
That is, the calculation unit 62 calculates the impulse response data rxy(t) in the time direction for the predetermined pixel (x, y) by convolving the correlation function Fxy(τ) with the inverse function of the exposure function E(t − τ) and with the inverse function of the emission function δ(t), as expressed by the following equation (8):

rxy(t) = Fxy(t) * E⁻¹(t) * δ⁻¹(t)   ・・・(8)

In equation (8), * is the operator representing convolution, and E⁻¹ and δ⁻¹ denote the inverse functions of the exposure function and the emission function, respectively.
Further, when the signal-to-noise ratio Γ of the luminance value Iτ(x, y) of the predetermined pixel (x, y) acquired as raw data can be estimated, the impulse response data rxy(t) in the time direction of the reflected light for the predetermined pixel (x, y) can be calculated using a Wiener filter M(t) for noise suppression. The impulse response data rxy(t) when the Wiener filter M(t) is used is expressed by the following equation (9):

rxy(t) = Fxy(t) * M(t)   ・・・(9)
 In equation (9), * is the operator representing convolution and M(t) is the Wiener filter. The Wiener filter M(t) is expressed by equation (10):

$$M(t) = \mathrm{FT}^{-1}\!\left[\frac{\hat{F}^{*}_{xy}}{|\hat{F}_{xy}|^{2} + \Gamma}\right] \tag{10}$$
 In equation (10), FT⁻¹ is the operator that performs the inverse transform of the Fourier transform FT (that is, the inverse Fourier transform), F̂_xy denotes FT[F_xy], and F̂*_xy denotes the complex conjugate of F̂_xy.
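 A minimal numpy sketch of this Wiener-filtered variant, again under illustrative assumptions (F_xy is the sampled correlation function of one pixel, and gamma is a scalar estimate derived from the SN ratio Γ):

```python
import numpy as np

def wiener_impulse_response(F_xy, gamma):
    """Equations (9) and (10): build the Wiener filter in the Fourier
    domain and apply it there, which is equivalent to the time-domain
    convolution F_xy(t) * M(t)."""
    F_hat = np.fft.fft(F_xy)                               # FT[F_xy]
    M_hat = np.conj(F_hat) / (np.abs(F_hat) ** 2 + gamma)  # eq. (10), before FT^-1
    return np.real(np.fft.ifft(M_hat * F_hat))             # eq. (9)
```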
 Since the time-direction impulse response data r(t) of the reflected light is calculated for each pixel 31 of the pixel array unit 32, the result is three-dimensional data in total: two spatial dimensions corresponding to the pixel array of the ToF sensor plus one time dimension.
 As described above, the calculation unit 62 calculates the time-direction impulse response data r(t) of the reflected light from the subject, using the raw data P_0 through P_179 of a total of 180 phases supplied from the storage unit 61, and supplies it to the learning unit 13.
<5. Detailed configuration example of the learning unit>

 FIG. 12 is a block diagram showing a detailed configuration example of the learning unit 13.
 The learning unit 13 is composed of a model calculation unit 81, an error calculation unit 82, and a parameter update unit 83.
 The learning unit 13 is supplied, from the distance measuring module 11, with the raw data P_0 of phase 0°, the raw data P_90 of phase 90°, the raw data P_180 of phase 180°, and the raw data P_270 of phase 270°, generated by driving the distance measuring module 11 in the normal drive mode.
 The learning unit 13 is also supplied, from the teacher data generation unit 12, with the time-direction impulse response data r(t) of the reflected light, which the teacher data generation unit 12 generated on the basis of the 180-phase raw data P_0 through P_179 (in 1° steps from phase 0° to 179°) produced by driving the distance measuring module 11 in the special drive mode.
 Here, the 4-phase raw data P_0, P_90, P_180, and P_270 generated by the distance measuring module 11 in the normal drive mode and the 180-phase raw data P_0 through P_179 generated by the distance measuring module 11 in the special drive mode are acquired in a static scene and correspond to each other. In other words, the raw data used by the learning unit 13 is data obtained by the distance measuring module 11 measuring the same stationary subject in the normal drive mode and in the special drive mode, respectively.
 The raw data P_0 of phase 0°, the raw data P_90 of phase 90°, the raw data P_180 of phase 180°, and the raw data P_270 of phase 270° are supplied to the model calculation unit 81 of the learning unit 13 as student data of the learning model, and the time-direction impulse response data r(t) of the reflected light is supplied to the error calculation unit 82 of the learning unit 13 as teacher data of the learning model.
 The model calculation unit 81 feeds the raw data P_0 of phase 0°, the raw data P_90 of phase 90°, the raw data P_180 of phase 180°, and the raw data P_270 of phase 270°, supplied as student data, into the learning model using the model parameters set by the parameter update unit 83, and calculates time-direction impulse response data r(t) of the reflected light. The time-direction impulse response data r(t) calculated by the learning model is supplied to the error calculation unit 82.
 The error calculation unit 82 calculates the error between the time-direction impulse response data r(t) of the reflected light supplied from the model calculation unit 81 and the time-direction impulse response data r(t) of the reflected light supplied as teacher data from the teacher data generation unit 12, and supplies the error to the parameter update unit 83.
 When the error supplied from the error calculation unit 82 is equal to or greater than a predetermined value, the parameter update unit 83 updates the model parameters so as to reduce the error, and supplies them to the model calculation unit 81. On the other hand, when the error supplied from the error calculation unit 82 is smaller than the predetermined value and the learning model is judged to have been sufficiently trained, the current model parameters are supplied to the estimation unit 14 (FIG. 1). In machine learning with models such as convolutional neural networks, the learning model generally reaches a sufficiently trained state by repeating the model parameter update a predetermined number of times.
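 As a rough sketch of this update cycle, assuming the learning model is implemented as a convolutional neural network in PyTorch; the model, optimizer, and tensor names here are hypothetical:

```python
import torch
import torch.nn as nn

def train(model, optimizer, raw_4phase, teacher_r, num_iters=1000):
    """One possible realization of the loop formed by the model
    calculation unit 81, the error calculation unit 82, and the
    parameter update unit 83."""
    loss_fn = nn.MSELoss()                 # error calculation unit 82
    for _ in range(num_iters):             # repeat a predetermined number of times
        pred_r = model(raw_4phase)         # model calculation unit 81
        loss = loss_fn(pred_r, teacher_r)  # error against the teacher data
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                   # parameter update unit 83
    return model.state_dict()              # learned model parameters
```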
 The learned model parameters are supplied from the learning unit 13 to the estimation unit 14 and set as the model parameters of the CNN serving as the predictor. As described with reference to FIG. 3, this enables the estimation unit 14 to take the 4-phase raw data P_0, P_90, P_180, and P_270 output by the distance measuring sensor 24 as input and to predict and output the time-direction impulse response data r(t) of the reflected light.
<6. Processing of the ranging system>

 The learning process in which the ranging system 1 learns the prediction model (learning model) for predicting the time-direction impulse response data of the reflected light will be described with reference to the flowchart of FIG. 13. This process is started, for example, when the start of the learning process is instructed via an operation unit (not shown).
 First, in step S1, the control unit 15 sets the distance measuring module 11 to the normal drive mode, which is the first drive mode. The light emission control unit 22 supplies the light emitting source 21 with a light emission control signal that repeatedly turns irradiation on and off with an irradiation time T. The delay generation unit 23 sequentially generates exposure control signals whose phases are shifted by 0°, 90°, 180°, and 270° with respect to the light emission control signal, and supplies them to the distance measuring sensor 24.
 In step S2, the distance measuring sensor 24 receives the reflected light returned by the irradiation light emitted from the light emitting source 21 being reflected by an object. More specifically, the distance measuring sensor 24 sequentially receives the reflected light at phase differences of 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light in accordance with the exposure control signals, and outputs the resulting 4-phase raw data P_0, P_90, P_180, and P_270 to the learning unit 13.
 In step S3, the control unit 15 sets the distance measuring module 11 to the special drive mode, which is the second drive mode. The light emission control unit 22 supplies the light emitting source 21 with an impulse-function-like light emission control signal that goes high only for a short time in a predetermined cycle. The delay generation unit 23 sequentially generates exposure control signals whose phases are shifted in 1° steps from 0° to 179° with respect to the irradiation timing indicated by the light emission control signal, and supplies them to the distance measuring sensor 24.
 In step S4, the distance measuring sensor 24 receives the reflected light returned by the irradiation light emitted from the light emitting source 21 being reflected by the object. More specifically, the distance measuring sensor 24 receives the reflected light while changing the phase difference in 1° steps from 0° to 179° with respect to the irradiation timing of the irradiation light in accordance with the exposure control signals, and outputs the resulting 180-phase raw data P_0 through P_179 to the teacher data generation unit 12.
 In step S5, the teacher data generation unit 12 calculates the correlation function F_xy(τ) of each pixel (x, y) of the pixel array unit 32 from the 180-phase raw data P_0 through P_179, from 0° to 179°, sequentially supplied from the distance measuring sensor 24. Specifically, the storage unit 61 stores the 180-phase raw data P_0 through P_179, from 0° to 179°, sequentially supplied from the distance measuring sensor 24. Using the 180-phase raw data P_0 through P_179 stored in the storage unit 61, the calculation unit 62 calculates the correlation function F_xy(τ) of each pixel (x, y) of the pixel array unit 32 by the above-described equation (7).
 In step S6, the calculation unit 62 calculates the time-direction impulse response data r_xy(t) of the reflected light using the calculated correlation function F_xy(τ). More specifically, the calculation unit 62 calculates the time-direction impulse response data r_xy(t) of the reflected light of each pixel (x, y) of the pixel array unit 32 by convolving the correlation function F_xy(τ) with the inverse function of the exposure function E(t−τ) and with the inverse function of the emission function δ(t), as expressed by the above-described equation (8). The calculated time-direction impulse response data r_xy(t) of the reflected light of each pixel (x, y) is supplied to the learning unit 13.
 In step S7, the learning unit 13 trains the prediction model with the 4-phase raw data P_0, P_90, P_180, and P_270 generated in the normal drive mode as student data and the time-direction impulse response data r_xy(t) of the reflected light generated by the teacher data generation unit 12 as teacher data.
 The parameters of the prediction model (model parameters) obtained by the learning in step S7 are supplied to the estimation unit 14, and the learning process ends.
 Next, the prediction process of the ranging system 1, which predicts and outputs the time-direction impulse response data of the reflected light in the normal drive mode, will be described with reference to the flowchart of FIG. 14. This process is started, for example, when the start of the prediction process is instructed via an operation unit (not shown).
 First, in step S11, the control unit 15 sets the distance measuring module 11 to the normal drive mode, which is the first drive mode. The light emission control unit 22 supplies the light emitting source 21 with a light emission control signal that repeatedly turns irradiation on and off with an irradiation time T. The delay generation unit 23 sequentially generates exposure control signals whose phases are shifted by 0°, 90°, 180°, and 270° with respect to the light emission control signal, and supplies them to the distance measuring sensor 24.
 In step S12, the distance measuring sensor 24 receives the reflected light returned by the irradiation light emitted from the light emitting source 21 being reflected by the object. More specifically, the distance measuring sensor 24 sequentially receives the reflected light at phase differences of 0°, 90°, 180°, and 270° with respect to the irradiation timing of the irradiation light in accordance with the exposure control signals, and outputs the resulting 4-phase raw data P_0, P_90, P_180, and P_270 to the estimation unit 14.
 In step S13, the estimation unit 14 inputs the 4-phase raw data P_0, P_90, P_180, and P_270 supplied from the distance measuring sensor 24 into the predictor (CNN) in which the model parameters learned by the learning unit 13 are set, and generates (predicts) the time-direction impulse response data of the reflected light. The estimation unit 14 then outputs the time-direction impulse response data of the reflected light obtained as the prediction result to the outside, and the prediction process ends.
 In step S13, the estimation unit 14 may also output, together with the generated time-direction impulse response data of the reflected light, the 4-phase raw data P_0, P_90, P_180, and P_270 supplied from the distance measuring sensor 24.
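 A sketch of this inference step, reusing the hypothetical PyTorch predictor from the earlier training sketch:

```python
import torch

def predict_impulse_response(model, raw_4phase):
    """Step S13: feed the 4-phase raw data through the trained CNN
    predictor and return the estimated impulse response r(t)."""
    model.eval()
    with torch.no_grad():
        return model(raw_4phase)
```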
 According to the learning process by the ranging system 1 described above, when the distance measuring sensor 24, an Indirect ToF type ToF sensor, outputs light reception data corresponding to the distance information to the subject in the normal drive mode, model parameters can be generated for a predictor that, based on that light reception data, generates the time-direction impulse response data of the reflected light, that is, light reception data with enhanced resolution in the time direction.
 And according to the prediction process by the ranging system 1, using the model parameters learned in the learning process, the time-direction impulse response data of the reflected light, that is, light reception data with enhanced resolution in the time direction, can be generated from the 4-phase raw data P_0, P_90, P_180, and P_270 output by the distance measuring sensor 24 in the normal drive mode.
 Therefore, according to the ranging system 1, the time-direction impulse response data of the reflected light can be obtained more easily.
<7. Modification example>

<Use of 4-phase raw data at multiple frequencies>

 When the irradiation light is emitted at an emission frequency f and the distance to the object is calculated from 4-phase raw data acquired in the normal drive mode, the calculated depth value d contains an error Δd relative to the true depth value d_true, caused by the influence of multipath and the like. The error Δd relative to the true depth value d_true at the emission frequency f can be expressed by the following equation (11):

$$\Delta d = \frac{c}{4\pi f}\,\angle R(f) - d_{\mathrm{true}} \tag{11}$$
 In equation (11), c is the speed of light, ∠ is the operator that extracts the phase in complex space, and R(f) is the complex Fourier transform, at the emission frequency f, of the time-direction impulse response data r(t) of the reflected light.
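 For reference, the depth value d entering this comparison is conventionally derived from the four phase measurements by the standard Indirect ToF arctangent relation, a well-known formula rather than something specific to this disclosure; a numpy sketch:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def depth_from_4phase(P0, P90, P180, P270, f):
    """Conventional Indirect ToF depth from 4-phase raw data at
    emission frequency f. Multipath distorts the measured phase,
    producing the error delta_d of equation (11)."""
    phase = np.arctan2(P90 - P270, P0 - P180)  # phase of the reflected light
    phase = np.mod(phase, 2.0 * np.pi)         # wrap into [0, 2*pi)
    return C * phase / (4.0 * np.pi * f)       # d = c * phi / (4*pi*f)
```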
 Therefore, as shown in FIG. 15, by setting a plurality of emission frequencies f (emission frequencies f_1 and f_2) and giving the 4-phase raw data P_0, P_90, P_180, and P_270 obtained at each of them to the learner of the learning unit 13 as student data, learning that better excludes error factors such as multipath can be performed, and model parameters with high prediction accuracy can be learned.
 Note that when 4-phase raw data is acquired as learning data in the normal drive mode at the emission frequencies f_1 and f_2, the special drive mode only needs 180-phase raw data P_0 through P_179 corresponding to either the emission frequency f_1 or f_2. In other words, even when phase data is acquired at a plurality of emission frequencies in the normal drive mode, the raw data acquired in the special drive mode is the same as in the case of a single emission frequency.
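 One illustrative way to assemble such multi-frequency student data is simply to stack the two 4-phase frames along the channel axis, so the learner sees eight input channels instead of four; the array shapes here are assumptions:

```python
import numpy as np

def stack_multifrequency(raw_f1, raw_f2):
    """Concatenate the 4-phase raw frames acquired at emission
    frequencies f1 and f2, each of shape (H, W, 4), into a single
    (H, W, 8) student-data tensor."""
    return np.concatenate([raw_f1, raw_f2], axis=-1)
```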
<8. Application examples>

<First application example>

 Next, application examples using the ranging system 1 of FIG. 1 will be described.
 Note that, in FIGS. 16 to 24, parts corresponding to those in FIG. 1 and the like described above are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
 FIG. 16 is a block diagram showing a first application example using the ranging system 1.
 The ranging system 1 of FIG. 16 is composed of the distance measuring module 11, a signal processing device 101, and a peak position calculation unit 102.
 The signal processing device 101 of FIG. 16 corresponds to the signal processing device 16 shown by the broken line in FIG. 1, with the learned model parameters set in the estimation unit 14. Alternatively, the signal processing device 101 may be composed only of the estimation unit 14 in which the learned model parameters are set.
 The distance measuring module 11 is set to the normal drive mode, which is the first drive mode, receives the reflected light returned by the irradiation light emitted from the light emitting source 21 being reflected by an object, and outputs the 4-phase raw data P_0, P_90, P_180, and P_270.
 The signal processing device 101 (its estimation unit 14) inputs the 4-phase raw data P_0, P_90, P_180, and P_270 supplied from the distance measuring sensor 24 of the distance measuring module 11 into the predictor, generates the time-direction impulse response data of the reflected light as the prediction result, and outputs it to the peak position calculation unit 102.
 The peak position calculation unit 102 detects the first peak from the time-direction impulse response data output from the signal processing device 101 (its estimation unit 14), and generates and outputs a depth image corrected by taking the distance corresponding to the first peak as the true depth value to the object.
 As described with reference to FIG. 2 and elsewhere, when the irradiation light undergoes multiple reflections before being received by the distance measuring module 11, the time-direction impulse response data of the reflected light becomes light reception data whose profile has two or more peaks along the time axis. As shown in FIG. 17, among the plurality of peaks of the time-direction impulse response data of the reflected light, the light reflected directly from the object forms the first peak, and the light that is reflected indirectly and enters the same pixel forms the second peak, the third peak, and so on. The first, second, third, and subsequent peaks are counted in order starting from the peak at the smallest position on the time axis.
 By the peak position calculation unit 102 correcting the distance corresponding to the first peak as the true depth value, a more accurate depth image can be generated.
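 A minimal sketch of this first-peak selection for one pixel, assuming the impulse response r is sampled on a uniform time axis with bin width dt in seconds:

```python
import numpy as np
from scipy.signal import find_peaks

C = 299_792_458.0  # speed of light [m/s]

def first_peak_depth(r, dt, min_height=0.0):
    """Treat the earliest peak of r(t) as the direct reflection and
    convert its round-trip time into a one-way depth value."""
    peaks, _ = find_peaks(r, height=min_height)
    if len(peaks) == 0:
        return np.nan              # no reflection detected
    t_first = peaks[0] * dt        # time of the first (direct) peak
    return C * t_first / 2.0       # round trip -> one-way distance
```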
<Second application example>

 FIG. 18 is a block diagram showing a second application example using the ranging system 1.
 The ranging system 1 of FIG. 18 is composed of the distance measuring module 11, the signal processing device 101, and a material estimation unit 103.
 That the time-direction impulse response data of reflected light differs depending on the material is disclosed in, for example, S. Su, F. Heide, R. Swanson, J. Klein, C. Callenberg, M. Hullin, W. Heidrich, "Material Classification Using Raw Time-of-Flight Measurements", in CVPR 2016.
 The material estimation unit 103 estimates and outputs the material of the object serving as the subject from the time-direction impulse response data of the reflected light output from the signal processing device 101 (its estimation unit 14). The material estimation unit 103 classifies the material of the object into, for example, cloth, wood, metal, or the like, and outputs the result.
 As shown in FIG. 19, the material estimation unit 103 can be composed of, for example, a feature extraction unit 111 implemented as a convolutional neural network with three-dimensional (x, y, t) convolutional layers, and a classification unit 112 using fully connected layers. Of course, other configurations may be adopted as the estimator for estimating the material.
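 A minimal PyTorch sketch of such an estimator; the layer sizes, class count, and input shape are illustrative assumptions:

```python
import torch.nn as nn

class MaterialClassifier(nn.Module):
    """3-D convolutional feature extractor (cf. unit 111) followed by
    a fully connected classifier (cf. unit 112) over material classes
    such as cloth, wood, and metal."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),          # -> (32, 4, 4, 4)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4 * 4, num_classes),
        )

    def forward(self, x):                     # x: (batch, 1, T, H, W)
        return self.classifier(self.features(x))
```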
<Third application example>

 FIG. 20 is a block diagram showing a third application example using the ranging system 1.
 The ranging system 1 of FIG. 20 is composed of the distance measuring module 11, the signal processing device 101, the material estimation unit 103, an AR display control unit 104, and an AR glass 105.
 The ranging system 1 of FIG. 20 is the ranging system 1 of FIG. 18 with the AR display control unit 104 and the AR glass 105 added.
 The ranging system 1 of FIG. 20 is a system that uses the material information estimated by the material estimation unit 103 for the superimposed display of the AR glass 105, which displays characters and video superimposed on the real-world scene.
 FIG. 21 shows a display example of the AR glass 105 in the third application example.
 The materials of the real-world floor 121 and floor 122 appearing in the user's field of view through the AR glass 105 are estimated by the material estimation unit 103 to be polyethylene and polypropylene, respectively.
 The AR display control unit 104 superimposes the estimated material names on the regions of the floor 121 and the floor 122 in the AR glass 105.
 FIG. 22 shows another display example of the AR glass 105 in the third application example.
 The AR display control unit 104 displays (images of) robots 131 and 132 superimposed on the real-world scene. When the materials of the real-world floor 121 and floor 122 are estimated by the material estimation unit 103 to be polyethylene and polypropylene, respectively, the AR display control unit 104 displays the robots 131 and 132 on the AR glass 105 so that they move only on the region of the floor 121 estimated to be polyethylene.
 The AR display control unit 104 of FIG. 20 generates images to be displayed (superimposed) on the AR glass 105 based on the material supplied from the material estimation unit 103, and supplies them to the AR glass 105. In the example of FIG. 21, the images that the AR display control unit 104 displays on the AR glass 105 are characters representing material names such as polyethylene and polypropylene. In the example of FIG. 22, the images that the AR display control unit 104 displays on the AR glass 105 are the robots 131 and 132.
 The AR glass 105 displays the images supplied from the AR display control unit 104 on a predetermined display unit.
 FIG. 23 is a flowchart of the AR display process that performs the AR display described with reference to FIG. 22 in the third application example.
 First, in step S31, the distance measuring module 11 measures the distance to the subject in the normal drive mode, which is the first drive mode. The 4-phase raw data P_0, P_90, P_180, and P_270 obtained by the measurement are supplied to the signal processing device 101.
 In step S32, the signal processing device 101 inputs the 4-phase raw data P_0, P_90, P_180, and P_270 supplied from the distance measuring sensor 24 into the predictor, generates the time-direction impulse response data of the reflected light as the prediction result, and outputs it to the material estimation unit 103.
 In step S33, the material estimation unit 103 estimates, from the time-direction impulse response data of the reflected light supplied from the signal processing device 101, the materials of the real world appearing in the user's field of view, and outputs them to the AR display control unit 104. The correspondence between the user's field of view through the AR glass 105 and the measurement area of the distance measuring module 11 is established by calibration or the like.
 In step S34, the AR display control unit 104 determines the moving direction of the robots 131 and 132, and in step S35, it determines whether the determined moving direction lies within a region in which the robots 131 and 132 can move. For example, when the robots are to move only on the region of the floor 121 estimated to be polyethylene, as in the example described above, it is determined whether the determined moving direction lies on the region of the floor 121.
 If it is determined in step S35 that the determined moving direction is not within a region in which the robots 131 and 132 can move, the process returns to step S34, and the moving direction of the robots 131 and 132 is determined again.
 On the other hand, if it is determined in step S35 that the determined moving direction is within a region in which the robots 131 and 132 can move, the process proceeds to step S36, and the AR display control unit 104 moves the robots 131 and 132 in the determined moving direction.
 After step S36, the process returns to step S31, and the above-described steps S31 to S36 are repeated. When the end of the AR display is instructed, for example via an operation unit (not shown), the AR display process of FIG. 23 ends.
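 A schematic of this decide-check-move cycle; the robot object, its methods, and the walkable mask are hypothetical names introduced only to make the control flow concrete:

```python
import random

def step_robot(robot, walkable_mask):
    """Steps S34 to S36: pick a candidate direction and accept it only
    if the resulting position lies on the walkable (e.g. polyethylene)
    region; otherwise pick another direction."""
    while True:
        direction = random.choice(["up", "down", "left", "right"])  # S34
        x, y = robot.next_position(direction)   # hypothetical helper
        if walkable_mask[y][x]:                 # S35: movable region?
            robot.move(direction)               # S36: hypothetical helper
            return
```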
 In the third application example described above, the material estimation result from the material estimation unit 103 was used for the superimposed display of the AR glass 105, but it can also be used for the movement control of an autonomously moving robot.
 FIG. 24 shows an application example in which the material estimation result from the material estimation unit 103 is used for the movement control of an autonomously moving robot.
 The robot 141 of FIG. 24 has, as visual sensors, an image sensor and the distance measuring module 11, and moves autonomously based on captured images and information such as the distance, measured by the distance measuring module 11, to objects present in its direction of travel.
 The robot 141 includes the above-described distance measuring module 11, signal processing device 101, and material estimation unit 103, as well as a movement control unit that controls its movement.
 The movement control unit determines the moving direction based on the materials of the floor 142 and the floor 143 supplied from the material estimation unit 103. For example, when the estimation result indicates that the floor 142 is a soft material such as carpet, which is unstable when the robot 141 stands on it, and the floor 143 is a hard material such as tile, which is stable when the robot 141 stands on it, the movement control unit controls the movement so that the robot moves over the region of the floor 143 and not over the region of the floor 142.
 Control such as that described with reference to FIG. 24 becomes possible if, in step S33 of the AR display process described with reference to FIG. 23, the material estimation unit 103 estimates the materials in the robot's field of view rather than the user's, and, in steps S34 to S36, the movement control unit executes the same processing in place of the AR display control unit 104. The robot 141 can thereby select and move over stable ground on which its posture is easy to maintain.
<9. Computer application example>

 The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, the programs constituting the software are installed on a computer. Here, the computer includes a microcomputer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 FIG. 25 is a block diagram showing a configuration example of the hardware of a computer that executes, by a program, the series of processes of the signal processing device 16 downstream of the distance measuring module 11.
 In the computer, a CPU (Central Processing Unit) 201, a ROM (Read Only Memory) 202, and a RAM (Random Access Memory) 203 are connected to one another by a bus 204.
 An input/output interface 205 is further connected to the bus 204. An input unit 206, an output unit 207, a storage unit 208, a communication unit 209, and a drive 210 are connected to the input/output interface 205.
 The input unit 206 includes a keyboard, a mouse, a microphone, a touch panel, an input terminal, and the like. The output unit 207 includes a display, a speaker, an output terminal, and the like. The storage unit 208 includes a hard disk, a RAM disk, a non-volatile memory, and the like. The communication unit 209 includes a network interface and the like. The drive 210 drives a removable recording medium 211 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory.
 In the computer configured as described above, the CPU 201 performs the above-described series of processes by, for example, loading a program stored in the storage unit 208 into the RAM 203 via the input/output interface 205 and the bus 204 and executing it. The RAM 203 also stores, as appropriate, data necessary for the CPU 201 to execute the various processes.
 The program executed by the computer (CPU 201) can be provided by being recorded on the removable recording medium 211 as packaged media or the like, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
 In the computer, the program can be installed in the storage unit 208 via the input/output interface 205 by mounting the removable recording medium 211 in the drive 210. The program can also be received by the communication unit 209 via a wired or wireless transmission medium and installed in the storage unit 208. In addition, the program can be installed in advance in the ROM 202 or the storage unit 208.
 Note that, in this specification, the steps described in the flowcharts may be performed in time series in the order described, or may be executed in parallel or at necessary timing, such as when a call is made, without necessarily being processed in time series.
 In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and one device in which a plurality of modules are housed in one housing, are both systems.
 The embodiments of the present technology are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present technology.
 For example, a mode combining all or part of the plurality of embodiments described above can be adopted.
 For example, the present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
 In addition, each step described in the above flowcharts can be executed by one device or shared and executed by a plurality of devices.
 Furthermore, when one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared and executed by a plurality of devices.
 Note that the effects described in this specification are merely examples and are not limiting, and effects other than those described in this specification may be obtained.
 The present technology can also have the following configurations.
(1)
A signal processing device including:
a teacher data generation unit that generates, as teacher data, time-direction impulse response data of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light returned by irradiation light being reflected by the subject; and
a learning unit that calculates model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the time-direction impulse response data of the reflected light.
(2)
The signal processing device according to (1), further including an estimation unit that inputs light reception data of the distance measuring sensor in the first drive mode to a learning model using the model parameters calculated by the learning unit, and outputs time-direction impulse response data of reflected light.
(3)
The signal processing device according to (1) or (2), in which the teacher data generation unit generates, as the teacher data, the time-direction impulse response data of the reflected light with respect to the subject from a plurality of pieces of light reception data obtained when the distance measuring sensor in the second drive mode receives the reflected light a plurality of times while changing the phase difference with respect to the irradiation light.
(4)
The signal processing device according to (3), in which the teacher data generation unit generates the time-direction impulse response data of the reflected light as the teacher data by performing, on the plurality of pieces of light reception data obtained when the distance measuring sensor in the second drive mode receives the reflected light a plurality of times while changing the phase difference with respect to the irradiation light, convolution with an inverse function of an exposure function indicating exposure timing.
(5)
The signal processing device according to any one of (1) to (4), in which, in the second drive mode, the irradiation light is emitted on the basis of an impulse-function-like light emission control signal.
(6)
The signal processing device according to any one of (1) to (5), in which the learning unit calculates the model parameters of the learning model using, as teacher data, the time-direction impulse response data of the reflected light obtained in the second drive mode and, as student data, the light reception data in the first drive mode.
(7)
The signal processing device according to any one of (1) to (6), in which the learning model of the learning unit is configured by a convolutional neural network.
(8)
The signal processing device according to any one of (1) to (7), in which the learning unit calculates the model parameters of the learning model using, as student data, the light reception data obtained by the distance measuring sensor with a plurality of emission frequencies set in the first drive mode.
(9)
The signal processing device according to any one of (1) to (8), further including a distance measuring module including:
the distance measuring sensor;
a light emitting source that emits the irradiation light;
a light emission control unit that supplies a light emission control signal for controlling the light emitting source; and
a delay generation unit that generates, from the light emission control signal, an exposure control signal for controlling exposure timing of the distance measuring sensor.
(10)
A signal processing method including, by a signal processing device:
generating, as teacher data, time-direction impulse response data of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light returned by irradiation light being reflected by the subject; and
calculating model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the time-direction impulse response data of the reflected light.
 1 ranging system, 11 distance measuring module, 12 teacher data generation unit, 13 learning unit, 14 estimation unit, 15 control unit, 16 signal processing device, 21 light emitting source, 22 light emission control unit, 23 delay generation unit, 24 distance measuring sensor, 101 signal processing device, 102 peak position calculation unit, 103 material estimation unit, 104 AR display control unit, 105 AR glass, 201 CPU, 202 ROM, 203 RAM, 206 input unit, 207 output unit, 208 storage unit, 209 communication unit, 210 drive

Claims (10)

  1.  A signal processing device comprising:
     a teacher data generation unit that generates, as teacher data, time-direction impulse response data of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light returned by irradiation light being reflected by the subject; and
     a learning unit that calculates model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the time-direction impulse response data of the reflected light.
  2.  The signal processing device according to claim 1, further comprising an estimation unit that inputs light reception data of the distance measuring sensor in the first drive mode to a learning model using the model parameters calculated by the learning unit, and outputs time-direction impulse response data of reflected light.
  3.  The signal processing device according to claim 1, wherein the teacher data generation unit generates, as the teacher data, the time-direction impulse response data of the reflected light with respect to the subject from a plurality of pieces of light reception data obtained when the distance measuring sensor in the second drive mode receives the reflected light a plurality of times while changing the phase difference with respect to the irradiation light.
  4.  The signal processing device according to claim 3, wherein the teacher data generation unit generates the time-direction impulse response data of the reflected light as the teacher data by performing, on the plurality of pieces of light reception data obtained when the distance measuring sensor in the second drive mode receives the reflected light a plurality of times while changing the phase difference with respect to the irradiation light, convolution with an inverse function of an exposure function indicating exposure timing.
  5.  The signal processing device according to claim 1, wherein, in the second drive mode, the irradiation light is emitted on the basis of an impulse-function-like light emission control signal.
  6.  The signal processing device according to claim 1, wherein the learning unit calculates the model parameters of the learning model using, as teacher data, the time-direction impulse response data of the reflected light obtained in the second drive mode and, as student data, the light reception data in the first drive mode.
  7.  The signal processing device according to claim 1, wherein the learning model of the learning unit is configured by a convolutional neural network.
  8.  The signal processing device according to claim 1, wherein the learning unit calculates the model parameters of the learning model using, as student data, the light reception data obtained by the distance measuring sensor with a plurality of emission frequencies set in the first drive mode.
  9.  The signal processing device according to claim 1, further comprising a distance measuring module including:
     the distance measuring sensor;
     a light emitting source that emits the irradiation light;
     a light emission control unit that supplies a light emission control signal for controlling the light emitting source; and
     a delay generation unit that generates, from the light emission control signal, an exposure control signal for controlling exposure timing of the distance measuring sensor.
  10.  A signal processing method comprising, by a signal processing device:
     generating, as teacher data, time-direction impulse response data of reflected light with respect to a subject from light reception data output in a second drive mode by a distance measuring sensor that receives, in a first drive mode or the second drive mode, reflected light returned by irradiation light being reflected by the subject; and
     calculating model parameters of a learning model that learns a relationship between light reception data in the first drive mode and the time-direction impulse response data of the reflected light.
PCT/JP2020/024971 2019-07-08 2020-06-25 Signal processing device and signal processing method WO2021006048A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-126688 2019-07-08
JP2019126688 2019-07-08

Publications (1)

Publication Number Publication Date
WO2021006048A1 true WO2021006048A1 (en) 2021-01-14

Family

ID=74114698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/024971 WO2021006048A1 (en) 2019-07-08 2020-06-25 Signal processing device and signal processing method

Country Status (1)

Country Link
WO (1) WO2021006048A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023085138A1 (en) * 2021-11-12 2023-05-19 ソニーグループ株式会社 Solid-state imaging device, method for driving same, and electronic apparatus
WO2023186057A1 (en) * 2022-04-01 2023-10-05 深圳市灵明光子科技有限公司 Laser radar detection parameter adjustment control method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130088726A1 (en) * 2011-10-07 2013-04-11 Vivek K. Goyal Method and Apparatus to Determine Depth Information For A Scene of Interest
JP2017011693A (en) * 2015-06-17 2017-01-12 パナソニックIpマネジメント株式会社 Imaging device
WO2017187484A1 (en) * 2016-04-25 2017-11-02 株式会社日立製作所 Object imaging device
JP2018033008A (en) * 2016-08-24 2018-03-01 国立大学法人静岡大学 Photoelectric conversion element and solid state imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20837355

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20837355

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP