WO2020189339A1 - Distance measuring device and distance measuring method - Google Patents
- Publication number
- WO2020189339A1 (PCT/JP2020/009677)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- light
- irradiation
- unit
- distance
- light source
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- G01C3/06—Use of electric means to obtain final indication
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
Definitions
- the present disclosure relates to a distance measuring device and a distance measuring method, and more particularly to a distance measuring device and a distance measuring method capable of measuring a distance with high accuracy even in an environment where a plurality of distance measuring devices exist.
- a laser beam is repeatedly emitted with a predetermined pulse width; the emitted laser beam strikes an object and the reflected light is received, and distance measurement is performed based on the round-trip time of the laser beam (the phase difference of the light associated with the round trip).
- when a plurality of distance measuring devices are present, each of them is affected by the reflected light of the laser beams emitted from the other distance measuring devices, so there is a risk that proper distance measurement cannot be achieved.
- This disclosure has been made in view of such a situation, and in particular, it enables highly accurate distance measurement even in an environment where a plurality of distance measurement sensors exist.
- the distance measuring device according to one aspect of the present disclosure includes a light receiving unit that receives reflected light generated when irradiation light is reflected by an object in the detection region, a calculation unit that calculates the distance to the object from the phase difference between the irradiation light and the reflected light, and a control unit that controls the light receiving unit and the calculation unit so as to suppress the influence of collision between the required irradiation light, which is the irradiation light used for calculating the distance to the object in the light receiving unit, and unnecessary irradiation light, which is not used for calculating the distance to the object.
- the distance measuring method according to one aspect of the present disclosure corresponds to the distance measuring device: reflected light generated when irradiation light is reflected by an object in the detection region is received, and control is performed so as to suppress the influence of collision between the required irradiation light, which is the irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is not used for calculating the distance to the object.
- in FIG. 1, a plurality of depth sensors 1-1 and 1-2, each including a distance measuring device, capture depth images (distance images) of the object 4.
- the depth sensors 1-1 and 1-2 in FIG. 1 are provided with light sources 2-1 and 2-2 and distance measuring devices 3-1 and 3-2, respectively.
- the light sources 2-1 and 2-2 irradiate the object 4 with a laser beam having a predetermined pulse width.
- the distance measuring devices 3-1 and 3-2 receive the reflected light R1 and R2 generated by the irradiation light L1 and L2 emitted from the light sources 2-1 and 2-2 being reflected by the object 4, respectively.
- the distance to the object 4 is measured in pixel units based on the round-trip time of the irradiation light, and a depth image composed of the distance measurement results in pixel units is generated.
- hereinafter, the depth sensors 1-1 and 1-2, the light sources 2-1 and 2-2, and the distance measuring devices 3-1 and 3-2 are simply referred to as the depth sensor 1, the light source 2, and the distance measuring device 3, respectively, unless it is necessary to distinguish them; other configurations are referred to in the same manner.
- the distance measuring device 3 receives the reflected light generated when the irradiation light, consisting of laser light with a predetermined pulse width emitted from the light source 2, is reflected by the object 4, and measures the distance to the object 4 by obtaining the round-trip time of the laser beam based on the phase difference between the irradiation light and the reflected light.
- the light source 2 irradiates, for example, a laser beam having a pulse width as shown in the uppermost stage of FIG. 2 as irradiation light.
- the distance measuring device 3 is provided with a light receiving element that receives the reflected light shown in the second stage from the top of FIG. 2, generated when the irradiation light shown in the uppermost stage of FIG. 2 is reflected by the object 4, and that generates a pixel signal according to the amount of received light. The phase of the reflected light is delayed by a time ΔT relative to the phase of the irradiation light, according to the distance to the object.
- the distance measuring device 3 measures the distance to the object 4 based on the delay time ΔT caused by this phase difference. At this time, the distance measuring device 3 synchronizes the exposure time of the light receiving element with the phase of the irradiation light and receives light in, for example, two divided periods.
- the distance measuring device 3 divides light reception into two periods of equal length: exposure A, shown in the third stage from the top of FIG. 2, which is the exposure time while the pulse of the irradiation light is the Hi signal, and exposure B, shown in the fourth stage from the top of FIG. 2, which is the exposure time while the pulse of the irradiation light is the Low signal.
- the light receiving element exposed during the exposure period of exposure A in FIG. 2 is referred to as TapA, and the light receiving element exposed during the exposure period of exposure B is referred to as TapB.
- the sum of the pixel signals generated by photoelectric conversion in the exposure periods of TapA and TapB always equals the pixel signal detected over an exposure period corresponding to the emission period of the light source 2, in which the phase of the irradiation light is the Hi signal.
- the pixel signal generated by TapB is the pixel signal of the portion of the reflected light, offset by the delay time ΔT, that falls outside the period regarded as the Hi signal of the irradiation light.
- the distance measuring device 3 obtains the delay time ΔT from the ratio of the TapB pixel signal to the sum of the TapA and TapB pixel signals, regards the obtained delay time ΔT as the round-trip time of the light between the distance measuring device 3 and the object 4, and measures the distance from the distance measuring device 3 to the object 4 based on that round-trip time.
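- as an illustration of this 2-phase calculation, the following is a minimal sketch (assuming an ideal rectangular pulse, background light already subtracted, and charges q_a and q_b from exposures A and B; it is a simplification, not the patent's implementation):

```python
C = 299_792_458.0  # speed of light [m/s]

def two_phase_distance(q_a: float, q_b: float, pulse_width_s: float) -> float:
    """2-phase ToF: the share of the reflected pulse that spills past the
    Hi period into exposure B grows linearly with the delay time dT, so
    dT = Tp * qB / (qA + qB); the light travels out and back, hence /2."""
    delta_t = pulse_width_s * q_b / (q_a + q_b)
    return C * delta_t / 2.0

# Example: a 30 ns pulse with 1/10 of the charge in TapB gives
# dT = 3 ns, i.e. roughly 0.45 m.
print(two_phase_distance(q_a=900.0, q_b=100.0, pulse_width_s=30e-9))
```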
- the depth sensor 1 measures distance according to the principle described with reference to FIG. 2, but when a plurality of depth sensors 1 measure the distance to the same object 4, appropriate distance measurement cannot be achieved unless each depth sensor 1 receives, from the object 4, only the reflected light corresponding to its own irradiation light.
- as long as the irradiation lights L1 and L2 emitted by the light sources 2-1 and 2-2 of the depth sensors 1-1 and 1-2 are reflected by the object 4 and the distance measuring devices 3-1 and 3-2 receive and measure only the corresponding reflected lights R1 and R2, appropriate distance measurement can be realized.
- however, if the distance measuring device 3-2 receives both the reflected light R2 corresponding to the irradiation light L2 of the light source 2-2 and the reflected light R1 corresponding to the irradiation light L1 of the light source 2-1, proper distance measurement becomes impossible due to the influence of the reflected light R1. That is, for the distance measuring device 3-2, the reflected light R2 corresponding to the irradiation light L2 is necessary for distance measurement, while the reflected light R1 corresponding to the irradiation light L1 is unnecessary.
- the reflected light required for distance measurement is also referred to as a necessary reflected light
- the irradiation light corresponding to the required reflected light is also referred to as a necessary irradiation light
- the irradiation light unnecessary for distance measurement is also referred to as unnecessary irradiation light
- the reflected light corresponding to the unnecessary irradiation light is also referred to as unnecessary reflected light. Therefore, for the distance measuring device 3-2, the irradiation light L2 is the required irradiation light and the reflected light R2 is the required reflected light, while the irradiation light L1 is unnecessary irradiation light and the reflected light R1 is unnecessary reflected light. The distance measuring device 3-1 is affected in the same way.
- in the distance measuring devices 3-1 and 3-2, there is a risk that proper distance measurement cannot be achieved due to the mutual influence between the required reflected light and the unnecessary reflected light (or between the required irradiation light and the unnecessary irradiation light). That is, while the modulation frequency and phase of the required irradiation light and the required reflected light are known, they may effectively be changed to unknown values by the influence of the unnecessary irradiation light and the unnecessary reflected light, whose modulation frequency and phase are unknown, making proper distance measurement impossible.
- the effect of changing the modulation frequency or phase on the required irradiation light or required reflected light having a known modulation frequency or phase is also referred to as collision. That is, the collision between the required irradiation light or the required reflected light and the unnecessary irradiation light or the unnecessary reflected light may cause a change in the known modulation frequency or phase, making it impossible to realize appropriate distance measurement.
- the depth sensors 1-1 and 1-2 should be able to receive the reflected light R1 and R2 separately. Therefore, it is necessary to suppress the influence of collision so that only the necessary reflected light can be used for distance measurement.
- several methods can be considered for receiving the reflected lights R1 and R2 separately. As a first method, the emission timings of the light sources 2-1 and 2-2 can be controlled so that they emit light at equal intervals without overlapping each other.
- in each of the upper, middle, and lower stages of FIG. 3, the upper waveform shows the emission timing of the irradiation light L1 emitted by the light source 2-1, and the lower waveform shows the emission timing of the irradiation light L2 emitted by the light source 2-2.
- Both waveforms indicate that the timing at which the rectangular pulse waveform is generated is the timing at which the predetermined modulated light is emitted.
- as a second method of receiving the reflected lights R1 and R2 separately, as shown in the middle part of FIG. 3, the emission timings (intervals) of the light sources 2-1 and 2-2 can be controlled randomly, by random numbers or the like, so that they do not overlap.
- the emission timing of the irradiation light L1 is set to the cycle T1
- the emission timing of the irradiation light L2 is set to the cycle T2.
- as a third method, the modulation method related to the light emission of the light sources 2-1 and 2-2 can be changed randomly.
- in this case, the distance measuring devices 3-1 and 3-2 know the modulation methods of their own light sources 2-1 and 2-2, so even if the emission timings of the irradiation lights overlap, the other light can be separated as background light.
- however, in the first method, to control the emission timings of the light sources 2-1 and 2-2 of the depth sensors 1-1 and 1-2 so that they do not overlap, it is necessary either to provide a separate control device or to have one of the depth sensors 1-1 and 1-2 control the timing, which increases the cost of the device configuration and complicates the control.
- in the second method, since the emission timings of the light sources 2-1 and 2-2 change randomly, light may still be emitted at timings where the irradiation lights L1 and L2 overlap. The distance measuring devices 3-1 and 3-2 cannot determine whether the timings overlap and cannot separate the reflected lights R1 and R2 corresponding to the irradiation lights L1 and L2, so appropriate distance measurement is not always possible.
- in the third method, even if the modulation method related to the light emission of the light sources 2-1 and 2-2 changes randomly, the possibility that the modulation methods coincide cannot be ruled out, so appropriate distance measurement may not always be realized.
- therefore, in the present disclosure, the interval between the emission timings of the sensor's own light source 2 is widened by a predetermined time; the pixel signals obtained in the distance measuring device 3 from the reflected light of the irradiation light of its own light source 2 and from the reflected light of the irradiation light of other light sources 2 are each integrated multiple times; and the distance to the object 4 is measured by obtaining, from the difference of the integration results, the phase difference between the irradiation light and the reflected light of its own light source 2 by the 4-phase distance measurement calculation.
- the interval of the emission timing referred to here is not the interval between individual pulse waveforms; it is the interval between the state in which the light source blinks while modulated at a predetermined modulation frequency and the state in which the light source is completely turned off.
- the distance measurement calculation described with reference to FIG. 2 is a 2-phase method, as opposed to the 4-phase method: pixel signals for two phases are detected in one frame (one modulation cycle) by TapA and TapB, in synchronization with the timing at which the irradiation light is emitted and the timing at which it is turned off, and the distance measurement calculation is performed based on the phase difference expressed by the ratio of the TapB pixel signal to the sum of the TapA and TapB pixel signals.
- the 4-phase method is the following distance measurement calculation method.
- in the 4-phase method, the exposure timings of TapA and TapB are controlled so that exposures are performed at four timings: the same phase as the irradiation light (Phase0), a phase shifted by 90 degrees (Phase90), a phase shifted by 180 degrees (Phase180), and a phase shifted by 270 degrees (Phase270), and a pixel signal is detected in each exposure period.
- the timings of Phase0 and Phase180 can be set consecutively in the same frame, and likewise Phase90 and Phase270 can be set consecutively in the same frame. Therefore, TapA receives light at consecutive timings for Phase0 and Phase180, and TapB receives light at consecutive timings for Phase90 and Phase270, so the exposure times for all four phases can be set within one frame.
- the signal values detected at Phase0, Phase90, Phase180, and Phase270 by TapA and TapB in the 4-phase method are also referred to as q0A, q1A, q2A, and q3A, respectively.
- the phase shift amount θ corresponding to the delay time ΔT can be detected from the distribution ratio of the signal values q0A, q1A, q2A, and q3A. That is, since the delay time ΔT is obtained from the phase shift amount θ, the distance to the object can be obtained from the delay time ΔT.
- the distance D to the object 4 is calculated by, for example, the following equation (1):
- D = C × ΔT / 2 ... (1)
- here, C is the speed of light, ΔT is the delay time, f_mod is the modulation frequency of the light, and ΔΦ0 to ΔΦ3 are the quantities shown in the third to seventh stages of FIG. 4, respectively.
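- for reference, a sketch of the 4-phase calculation is shown below. The relation ΔT = θ / (2π × f_mod) is stated later in this text; the arctangent used to obtain θ from ΔΦ0 to ΔΦ3 is the standard 4-phase relation and is an assumption here, since equation (1) itself does not survive in this text:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def four_phase_distance(dphi0, dphi1, dphi2, dphi3, f_mod):
    """4-phase ToF: dphi0..dphi3 are the Phase0/90/180/270 signal values.
    theta = atan2(dphi1 - dphi3, dphi0 - dphi2) is the phase shift,
    dT = theta / (2*pi*f_mod), and distance D = C * dT / 2."""
    theta = math.atan2(dphi1 - dphi3, dphi0 - dphi2) % (2 * math.pi)
    delta_t = theta / (2 * math.pi * f_mod)
    return C * delta_t / 2.0

# Example: with f_mod = 10 MHz, a 90-degree phase shift corresponds to
# dT = 25 ns, i.e. about 3.75 m.
print(four_phase_distance(0.0, 1.0, 0.0, -1.0, f_mod=10e6))
```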
- the distance measurement calculation is normally performed by the 4-phase method, and hereinafter, the operation mode by this distance measurement calculation is referred to as a normal mode.
- in this method, the emission interval of the sensor's own irradiation light is set longer than the time required for the reflected light to attenuate sufficiently and its light amount to decrease; that is, a turn-off period of sufficient length for the irradiation light to die away is set, during which the light source 2 is turned off.
- FIG. 5 shows the waveform WL of the sensor's own irradiation light, the waveform WR of the reflected light from the object 4 due to its own irradiation light, the waveform WRo of the reflected light from the object 4 due to other irradiation light, and the exposure timings of TapA and TapB. At each exposure timing of TapA and TapB, which of the reflected lights WR and WRo is received during each exposure period is color-coded: the stippled regions represent the timings at which the reflected light WRo is received, and the grid-patterned regions represent the timings at which the reflected light WR is received.
- the waveform WL of the irradiation light emits light with a period TL as shown by the times t0 to t1, t2 to t3, and t4 to t5.
- the period TL is set to be sufficiently longer than the time Td at which the brightness is sufficiently attenuated after the emission period of the reflected light ends.
- the waveform WR of the reflected light corresponding to the sensor's own irradiation light is received at the timings t11 to t12, t13 to t14, and t15 to t16, delayed relative to the waveform WL of its own irradiation light by the delay time ΔT according to the distance to the object 4.
- for each of the frames N to N+3, TapA is exposed in synchronization with the period during which the irradiation light is emitted, at the same phase (Phase0), a phase shifted by 180 degrees (Phase180), a phase shifted by 90 degrees (Phase90), and a phase shifted by 270 degrees (Phase270), as shown from the upper part of FIG. 5. TapB is exposed at the same four phases (Phase0, Phase180, Phase90, Phase270), but in synchronization with the period after the emission of the reflected light has ended and the brightness has attenuated sufficiently for the time Td, so that only the reflected light of other irradiation light reflected by the object 4 is received and its pixel signal is detected.
- the pixel signals thus obtained in units of four frames, namely the signals combining the reflected light of the sensor's own light source 2 and the reflected light of other light sources 2 for the four phases, and the signals of only the reflected light of other light sources 2 for the four phases, are each repeatedly integrated a predetermined number of times.
- then, using the integrated values of the four-phase pixel signals detected by TapA, which are the sum of the reflected light corresponding to the sensor's own irradiation light and the reflected light of other light sources 2, and the integrated values detected by TapB, which are the reflected light of other light sources 2 only, ΔΦ0 to ΔΦ3 in the above equation (1) are expressed by the following equation (2), and the distance measurement is performed:
- ΔΦ0 = ΣTapA_N − ΣTapB_N, ΔΦ1 = ΣTapA_N+1 − ΣTapB_N+1, ΔΦ2 = ΣTapA_N+2 − ΣTapB_N+2, ΔΦ3 = ΣTapA_N+3 − ΣTapB_N+3 ... (2)
- here, ΣTapA_N, ΣTapA_N+1, ΣTapA_N+2, and ΣTapA_N+3 are the integrated values, over the predetermined number of times, of the pixel signals detected by TapA in each of the four frames N to N+3.
- likewise, ΣTapB_N, ΣTapB_N+1, ΣTapB_N+2, and ΣTapB_N+3 are the integrated values, over the predetermined number of times, of the pixel signals detected by TapB in each of the four frames N to N+3.
- ΔΦ0 to ΔΦ3 correspond to the signal values q0A, q1A, q2A, and q3A detected in the phases Phase0, Phase90, Phase180, and Phase270, respectively.
- the delay time ΔT is represented by θ / (2π × f_mod).
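- a sketch of how equation (2) might be combined with equation (1) follows, assuming 4-element sequences of integrated sums for frames N to N+3 from TapA (own plus other reflected light) and TapB (other reflected light only); it reuses the four_phase_distance sketch above:

```python
def corrected_dphis(sum_tap_a, sum_tap_b):
    """Equation (2): dPhi_k = Sum(TapA, frame N+k) - Sum(TapB, frame N+k).
    Subtracting the TapB integrals removes the contribution of other
    light sources' reflected light from the TapA integrals."""
    return [a - b for a, b in zip(sum_tap_a, sum_tap_b)]

# Usage (illustrative values):
# dphis = corrected_dphis([1200, 900, 400, 700], [200, 200, 200, 200])
# distance = four_phase_distance(*dphis, f_mod=10e6)
```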
- hereinafter, the process of performing the distance measurement calculation with correction so as to suppress the influence of the reflected light of the other light sources 2 when a plurality of depth sensors 1-1 and 1-2 are present, as shown in FIG. 5, is referred to as correction mode processing, in contrast to the normal mode processing described with reference to FIG. 4.
- FIG. 6 is a block diagram showing a configuration example of an embodiment of a depth sensor to which the present disclosure is applied.
- the depth sensor 11 is configured to include an optical modulation unit 21, a light emitting diode 22, a light projecting lens 23, a light receiving lens 24, a filter 25, a TOF sensor 26, an image storage unit 27, a synchronization processing unit 28, a calculation unit 29, a depth image generation unit 30, and a control unit 31.
- the optical modulation unit 21 is controlled by the control unit 31 and supplies a modulation signal for modulating the light output from the light emitting diode 22 at a high frequency of, for example, about 10 MHz to the light emitting diode 22. Further, the optical modulation unit 21 is controlled by the control unit 31 and supplies a timing signal indicating the timing at which the light of the light emitting diode 22 is modulated to the TOF sensor 26 and the synchronization processing unit 28.
- the light emitting diode 22 emits light in an invisible region, such as infrared light, modulated at high speed according to the modulation signal supplied from the optical modulation unit 21, toward the detection area, which is the area in which the depth sensor 11 acquires a depth image and detects distances.
- the light source that irradiates the light toward the detection region is described as the light emitting diode 22, but another light source such as a laser diode may be used.
- the light emitting diode 22 may be arranged adjacent to the TOF sensor 26 described later. With this configuration, when the emitted light is reflected by the object and returned to the depth sensor 11, the path difference between the going and returning is minimized, and the distance measurement error can be reduced. Further, the light emitting diode 22 and the TOF sensor 26 may be integrally formed in one housing. With this configuration, it is possible to suppress variations in the going and returning paths when the emitted light is reflected by the object and returned to the depth sensor 11, and it is possible to reduce the distance measurement error. Further, the light emitting diode 22 and the TOF sensor 26 may be formed from different housings as long as the mutual positional relationship can be grasped.
- the light projecting lens 23 is composed of a lens that adjusts the light distribution so that the light emitted from the light emitting diode 22 has a desired irradiation angle.
- the light receiving lens 24 is composed of a lens that captures the detection region where the light is emitted from the light emitting diode 22 in the field of view, and the light reflected by the object in the detection region is imaged on the sensor surface of the TOF sensor 26.
- the filter 25 is a BPF (Band Pass Filter) that passes only light in a predetermined band; of the light reflected by an object in the detection region and incident on the TOF sensor 26, only light in the predetermined pass band passes through.
- the filter 25 has a central wavelength of the pass band set between 920 nm and 960 nm, and allows more light in the pass band to pass than light having a wavelength other than the pass band.
- light having a wavelength in the pass band is transmitted by 60% or more, and light having a wavelength other than the pass band is transmitted by less than 30%.
- the pass band through which the filter 25 passes light is narrower than the pass band of the filter used in the conventional TOF method, and is limited to a narrow band corresponding to the wavelength of the light emitted from the light emitting diode 22.
- for example, when the wavelength of the light emitted from the light emitting diode 22 is 940 nm, a band of 10 nm on either side of 940 nm (930 to 950 nm) is adopted as the pass band of the filter 25, matched to that wavelength.
- the TOF sensor 26 can detect the irradiation light while suppressing the influence of the disturbance light of the sun.
- the pass band of the filter 25 is not limited to this, and may be a band of 15 nm or less before and after the predetermined wavelength. Further, as the pass band of the filter 25, a band (840 to 860 nm) of 10 nm before and after centering on 850 nm, which is the wavelength band having the best characteristics of the TOF sensor 26, may be adopted. As a result, the TOF sensor 26 can effectively detect the irradiation light.
- the TOF sensor 26 is composed of an image pickup element having sensitivity in the wavelength range of the light emitted from the light emitting diode 22, and receives, with a plurality of pixels arranged in an array on the sensor surface, the light collected by the light receiving lens 24 and passed through the filter 25. As shown in the figure, the TOF sensor 26 is arranged in the vicinity of the light emitting diode 22 and can receive the light reflected by objects in the detection region irradiated by the light emitting diode 22. Then, the TOF sensor 26 outputs a pixel signal in which the amount of light received by each pixel is used as the pixel value for generating a depth image.
- various sensors such as SPAD (single photon avalanche diode), APD (Avalanche Photo Diode) and CAPD (Current Assisted Photonic Demodulator) can be applied.
- the image storage unit 27 stores an image constructed by the pixel signal output from the TOF sensor 26.
- the image storage unit 27 can store the latest image when there is a change in the detection area, or can store, as a background image, an image in which no object exists in the detection area.
- in synchronization with the timing signal supplied from the optical modulation unit 21, the synchronization processing unit 28 extracts, from the pixel signals supplied from the TOF sensor 26, the pixel signals of the reflected light corresponding to the modulated light emitted by the light emitting diode 22. As a result, the synchronization processing unit 28 can supply to the calculation unit 29 only the pixel signals of the timing whose main component is the light emitted from the light emitting diode 22 and reflected by an object in the detection region. Further, the synchronization processing unit 28 can generate a pixel signal consisting only of a moving object in the detection region, for example by reading the background image stored in the image storage unit 27 and obtaining the difference from the image constructed from the pixel signals supplied from the TOF sensor 26.
- the synchronization processing unit 28 is not an essential configuration, and the pixel signal supplied from the TOF sensor 26 may be directly supplied to the calculation unit 29.
- the calculation unit 29 calculates the distance to the object in the detection area for each pixel based on the pixel signal supplied from the synchronization processing unit 28 or the TOF sensor 26, and supplies a depth signal indicating the calculated distance to the depth image generation unit 30.
- specifically, the calculation unit 29 performs the calculation for finding the distance to the object in the detection area based on the phase difference between the light emitted by the light emitting diode 22 and the light that, after being emitted from the light emitting diode 22, is reflected by an object and incident on the pixels of the TOF sensor 26.
- the depth image generation unit 30 generates a depth image in which the distances to the subject are arranged according to the arrangement of pixels from the depth signal supplied from the calculation unit 29, and the depth image is sent to a subsequent processing device (not shown). Output.
- the control unit 31 is composed of a processor and a memory, and controls the entire operation of the depth sensor 11.
- the control unit 31 controls the optical modulation unit 21 according to the operation mode, and controls the light emission timing of the light emitting diode 22 by controlling the modulation method.
- the control unit 31 controls the TOF sensor 26 according to the operation mode to control, for example, the operation timings of TapA and TapB.
- the control unit 31 controls the calculation content of the calculation unit 29 according to the operation mode.
- the operation mode referred to here is, for example, the above-mentioned normal mode and correction mode.
- when the operation mode is the normal mode, the control unit 31 controls the various components so as to perform the normal mode processing described with reference to FIG. 4; when the operation mode is the correction mode, the control unit 31 controls the various components so as to perform the correction mode processing described with reference to FIG. 5.
- the light emitting diode 22 and the light projecting lens 23 in FIG. 6 constitute a light source 32 corresponding to the light source 2 in FIG. Further, the light receiving lens 24, the filter 25, and the TOF sensor 26 constitute a distance measuring device 33 corresponding to the distance measuring device 3 of FIG.
- the depth sensor 11 configured in this way employs a narrow band filter 25 in which the pass band is limited to a narrow band corresponding to the wavelength of the light emitted from the light emitting diode 22.
- the filter 25 can remove noise components due to ambient light while passing a large amount of the signal components required for measurement. That is, in the depth sensor 11, the TOF sensor 26 can receive more of the light that is the signal component necessary for measurement relative to the ambient light that is the noise component, and the SN ratio (Signal to Noise Ratio) of the acquired depth image can be improved.
- as a result, even in an environment affected by ambient light, the depth sensor 11 can increase the distance over which an accurate depth image can be acquired, and its depth image acquisition performance can be improved.
- the TOF sensor 26 shown in FIG. 7 is composed of, for example, a back-illuminated type sensor, but may be a front-illuminated type sensor.
- the TOF sensor 26 has a configuration including a pixel array unit 40 formed on a semiconductor substrate (not shown) and a peripheral circuit unit integrated on the same semiconductor substrate as the pixel array unit 40.
- the peripheral circuit unit is composed of, for example, a tap drive unit 41, a vertical drive unit 42, a column processing unit 43, a horizontal drive unit 44, and a system control unit 45.
- the TOF sensor 26 is also provided with a signal processing unit 49 and a data storage unit 50.
- the signal processing unit 49 and the data storage unit 50 may be mounted on the same substrate as the TOF sensor 26, or may be arranged on a substrate different from the TOF sensor 26 in the imaging device.
- the pixel array unit 40 has a configuration in which pixels 51 that generate an electric charge according to the amount of received light and output a signal corresponding to the electric charge are two-dimensionally arranged in a matrix in the row direction and the column direction. That is, the pixel array unit 40 has a plurality of pixels 51 that photoelectrically convert the incident light and output a signal corresponding to the electric charge obtained as a result.
- the row direction refers to the arrangement direction of the pixels 51 in the horizontal direction
- the column direction refers to the arrangement direction of the pixels 51 in the vertical direction.
- the row direction is the horizontal direction in the figure
- the column direction is the vertical direction in the figure.
- the pixel 51 receives light incident from the outside, particularly infrared light, and performs photoelectric conversion, and outputs a pixel signal corresponding to the electric charge obtained as a result.
- the pixel 51 has a first tap TA, corresponding to the above-mentioned TapA, to which a predetermined voltage MIX0 (first voltage) is applied to detect the photoelectrically converted charge, and a second tap TB, corresponding to the above-mentioned TapB, to which a predetermined voltage MIX1 (second voltage) is applied to detect the photoelectrically converted charge.
- the tap drive unit 41 supplies the predetermined voltage MIX0 to the first tap TA (TapA) of each pixel 51 of the pixel array unit 40 via a voltage supply line 48, and supplies the predetermined voltage MIX1 to the second tap TB (TapB) via another voltage supply line 48. Therefore, two voltage supply lines 48, one transmitting the voltage MIX0 and one transmitting the voltage MIX1, are wired to each pixel column of the pixel array unit 40.
- in the pixel array unit 40, with respect to the matrix-like pixel arrangement, a pixel drive line 46 is wired along the row direction for each pixel row, and two vertical signal lines 47 are wired along the column direction for each pixel column.
- the pixel drive line 46 transmits a drive signal for driving when reading a signal from a pixel.
- the pixel drive line 46 is shown as one wiring, but the wiring is not limited to one.
- One end of the pixel drive line 46 is connected to the output end corresponding to each line of the vertical drive unit 42.
- the vertical drive unit 42 is composed of a shift register, an address decoder, and the like, and drives each pixel of the pixel array unit 40 at the same time for all pixels or in line units. That is, the vertical drive unit 42 constitutes a drive unit that controls the operation of each pixel of the pixel array unit 40 together with the system control unit 45 that controls the vertical drive unit 42.
- the signal output from each pixel 51 of the pixel row according to the drive control by the vertical drive unit 42 is input to the column processing unit 43 through the vertical signal line 47.
- the column processing unit 43 performs predetermined signal processing on the pixel signal output from each pixel 51 through the vertical signal line 47, and temporarily holds the pixel signal after the signal processing.
- the column processing unit 43 performs noise removal processing, AD (Analog to Digital) conversion processing, and the like as signal processing.
- the horizontal drive unit 44 is composed of a shift register, an address decoder, and the like, and sequentially selects unit circuits corresponding to the pixel strings of the column processing unit 43. By the selective scanning by the horizontal drive unit 44, the pixel signals processed by the column processing unit 43 for each unit circuit are sequentially output.
- the system control unit 45 is composed of a timing generator or the like that generates various timing signals, and based on the various timing signals generated by the timing generator, the tap drive unit 41, the vertical drive unit 42, the column processing unit 43, And the drive control of the horizontal drive unit 44 and the like is performed.
- the signal processing unit 49 has at least an arithmetic processing function, and performs various signal processing such as arithmetic processing based on the pixel signal output from the column processing unit 43.
- the data storage unit 50 temporarily stores data necessary for signal processing by the signal processing unit 49.
- step S11 the control unit 31 controls the optical modulation unit 21 to cause the light emitting diode 22 to emit light at a predetermined modulation frequency to irradiate the irradiation light.
- as a result, the irradiation area is irradiated with irradiation light consisting of modulated light at the predetermined modulation frequency from the light source 32, and the light reflected by objects in the irradiation area is incident on the TOF sensor 26.
- step S12 the control unit 31 controls the first tap TA (TapA) of each of the pixels 51 of the TOF sensor 26 to expose in synchronization with the emission timing of the light emitting diode 22, so that pixel signals of the same phase (Phase0) and the 180-degree shifted phase (Phase180) are detected consecutively and output.
- step S13 the control unit 31 controls the second tap TB (TapB) of each of the pixels 51 of the TOF sensor 26 to expose at phases shifted by 90 degrees from the emission timing of the light emitting diode 22, so that pixel signals of the 90-degree shifted phase (Phase90) and the 270-degree shifted phase (Phase270) are detected consecutively and output to the calculation unit 29 via the synchronization processing unit 28.
- step S14 the calculation unit 29 performs the distance measurement calculation for each pixel 51 by computing equation (1), and outputs the depth signal that is the calculation result to the depth image generation unit 30.
- step S15 the depth image generation unit 30 generates and outputs a depth image based on the depth signal of each pixel 51.
- step S16 the control unit 31 determines whether or not the end of the process is instructed, and if the end is not instructed, the process returns to step S11. That is, the processes of steps S11 to S16 are repeated until the end of the process is instructed, and the depth image is generated and continues to be output.
- step S16 when the end of the process is instructed, the process ends.
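- the control flow of steps S11 to S16 could be summarized as follows (a schematic sketch, not the patent's implementation; the sensor helpers such as emit_modulated_light and expose_tap_a are hypothetical stand-ins for the control unit's operations, the per-pixel loop is omitted, and four_phase_distance is the sketch given earlier):

```python
def normal_mode_loop(sensor, f_mod, stop_requested):
    """Normal mode (steps S11 to S16): all four phases are captured in a
    single frame, TapA covering Phase0/180 and TapB covering Phase90/270."""
    while not stop_requested():                          # S16
        sensor.emit_modulated_light(f_mod)               # S11
        q0, q2 = sensor.expose_tap_a(phases=(0, 180))    # S12
        q1, q3 = sensor.expose_tap_b(phases=(90, 270))   # S13
        depth = four_phase_distance(q0, q1, q2, q3, f_mod)  # S14: eq. (1)
        sensor.output_depth_image(depth)                 # S15
```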
- distance measurement processing is performed by a single depth sensor 11, and a depth image is generated and output based on the distance measurement result.
- signal values for four phases are obtained in one frame and a depth image is generated, so that high-speed processing can be realized.
- the correction mode processing is a processing assuming that distance measurement processing is performed on the same object by a plurality of depth sensors 11.
- step S31 the control unit 31 controls the optical modulation unit 21 to emit light from the light emitting diode 22 at a predetermined modulation frequency and at a predetermined interval to irradiate the irradiation light.
- that is, the light emitting diode 22 is controlled to repeat emitting and turning off with a period TL sufficiently longer than the time Td required for the light emitted at the predetermined modulation frequency to attenuate sufficiently.
- as a result, irradiation light consisting of modulated light at the predetermined modulation frequency is emitted from the light source 32 into the irradiation region with the period TL, and the light reflected by objects in the irradiation region is incident on the TOF sensor 26.
- step S32 the control unit 31 initializes the counter θ for counting the phase shift to 0.
- this counter θ counts the four phases of 0, 90, 180, and 270 degrees in 90-degree increments.
- step S33 the control unit 31 controls the first tap TA (TapA) of each of the pixels 51 of the TOF sensor 26 to expose at a timing out of phase by θ degrees with respect to the emission timing of the light emitting diode 22; the pixel signal is detected and output to the calculation unit 29 via the synchronization processing unit 28.
- step S34 the calculation unit 29 integrates and stores the pixel signal that is the exposure result of TapA in association with the counter θ.
- step S35 the control unit 31 controls the second tap TB (TapB) of each of the pixels 51 of the TOF sensor 26 to expose at a timing out of phase by θ degrees with respect to the timing at which the light emitting diode 22 is turned off and the light has sufficiently attenuated; the pixel signal is detected and output to the calculation unit 29 via the synchronization processing unit 28.
- step S36 the calculation unit 29 integrates and stores the pixel signal that is the exposure result of TapB in association with the counter θ.
- step S37 the control unit 31 determines whether or not the counter θ is 270, and if it is not 270, the process proceeds to step S38.
- step S38 the control unit 31 increments the counter θ by 90, and the process returns to step S33.
- the processes of steps S33 to S38 are repeated until it is determined in step S37 that the counter θ is 270, and pixel signals corresponding to the four phase shifts of 0, 90, 180, and 270 degrees are sequentially integrated. If it is determined in step S37 that the counter θ is 270, the process proceeds to step S39.
- step S39 the control unit 31 determines whether or not the processes of steps S32 to S38 have been repeated a predetermined number of times, and if the processes have not been repeated a predetermined number of times, the process returns to step S32.
- that is, the pixel signals in the frames N to N+3 of FIG. 5, corresponding to the counter θ values of 0, 90, 180, and 270, are repeatedly detected and integrated: the pixel signals of the reflected light WR and of the reflected light WRo are integrated a predetermined number of times in association with the four phase shifts of 0, 90, 180, and 270 degrees.
- if it is determined in step S39 that the process has been repeated the predetermined number of times, the process proceeds to step S40.
- step S40 the calculation unit 29 performs the distance measurement calculation for each pixel 51 by computing equation (1) using ΔΦ0 to ΔΦ3 obtained by the above equation (2), and outputs the depth signal that is the calculation result to the depth image generation unit 30.
- step S41 the depth image generation unit 30 generates and outputs a depth image based on the depth signal of each pixel 51.
- step S42 the control unit 31 determines whether or not the end of the process is instructed, and if the end is not instructed, the process returns to step S32. That is, the processes of steps S32 to S42 are repeated until the end of the process is instructed, and the depth image is generated and continues to be output.
- step S42 when the end of the process is instructed, the process ends.
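- steps S31 to S42 amount to two nested loops that accumulate TapA and TapB pixel signals per phase and then apply equations (2) and (1); the sketch below (using the same hypothetical sensor helpers as before and omitting the per-pixel loop) shows that structure:

```python
def correction_mode_frame(sensor, f_mod, n_repeats):
    """Correction mode (steps S31 to S42): TapA integrates own plus other
    reflected light per phase, TapB integrates other reflected light only
    during the turn-off period, and ranging uses the difference."""
    sum_a = [0.0] * 4  # Sum(TapA) for phases 0/90/180/270 (frames N..N+3)
    sum_b = [0.0] * 4  # Sum(TapB) for the same phases
    sensor.emit_with_long_off_period(f_mod)                    # S31
    for _ in range(n_repeats):                                 # S39 loop
        for k, phase in enumerate((0, 90, 180, 270)):          # S32-S38 loop
            sum_a[k] += sensor.expose_tap_a_at(phase)          # S33, S34
            sum_b[k] += sensor.expose_tap_b_during_off(phase)  # S35, S36
    dphis = [a - b for a, b in zip(sum_a, sum_b)]              # equation (2)
    return four_phase_distance(*dphis, f_mod=f_mod)            # S40: eq. (1)
```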
- in effect, the subtractions of ΔΦ0 to ΔΦ3 in equation (2) remove, from the integrated pixel signals detected in each phase from the reflected light including both the reflected light WR (required reflected light) of the sensor's own light source 2 and the reflected light WRo (unnecessary reflected light) of the other light sources 2, the integrated pixel signals detected in each phase from the reflected light WRo of the other light sources 2 alone.
- as a result, ΔΦ0 to ΔΦ3 in equation (2) are the integrated values of the pixel signals detected in each phase from the reflected light WR of the sensor's own light source 2, free of the pixel signals detected from the reflected light WRo of the other light sources 2.
- <<Second Embodiment>> <Collision detection>
- in the above, an example has been described in which a single depth sensor 11 independently generates a depth image of a predetermined region by normal mode processing, and a plurality of depth sensors 11 generate depth images of the same predetermined region by correction mode processing.
- here, among the taps of the pixels 51, TapA detects four-phase pixel signals synchronized with the emission timing and performs distance measurement alone, while TapB is set with a plurality of different integration times in synchronization with the turn-off timing and detects the integrated values of the pixel signals.
- when TapB detects only disturbance light of a roughly constant level, such as sunlight, it detects an integrated value that changes in proportion to the length of the integration time; when TapB detects reflected light consisting of modulated light in addition to such constant disturbance light, it detects an integrated value that varies irrespective of the length of the integration time.
- the upper waveform in each frame is the waveform Wm of the modulated light, and the lower waveform is the waveform Ws of sunlight.
- since the waveform Ws of sunlight is roughly constant, when pixel signals are detected with the integration times Te1 to Te4 of different lengths in the Nth to N+3rd frames, integrated values correlated with the length of the integration time are detected.
- therefore, the standard deviation of the integrated values expected for background light such as sunlight at the different integration times is obtained in advance as a threshold value, and the presence or absence of a collision may be determined by comparing the standard deviation of the integrated values detected a predetermined number of times at the different integration times against this threshold.
- if the standard deviation of the integrated values is not larger than the threshold obtained in advance, it can be determined that there is no influence of modulated light and no collision with another light source; if the standard deviation is larger than the threshold, the variance of the integrated values is large and they can be considered to vary, so it can be determined that there is an influence of modulated light and a collision with another light source.
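- the decision rule described here could look like the following sketch, where the threshold is assumed to be a standard deviation measured in advance under background light only:

```python
import statistics

def collision_detected(repeated_integrals, threshold):
    """repeated_integrals: TapB integrated values measured a predetermined
    number of times at one integration time.  Under steady background
    light the repeats barely vary; beating with another source's
    modulated light makes them scatter, so a large standard deviation
    signals a collision."""
    return statistics.stdev(repeated_integrals) > threshold
```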
- in this case, while TapB is used to detect a collision, the distance measurement using both TapA and TapB cannot be performed; however, distance measurement may still be performed by TapA alone, although the frame rate is reduced.
- in the case of the upper part of FIG. 11, with the blank period Tb1, a collision is detected; therefore, the blank period Tb1 may be changed, based on a random number, to the blank period Tb2 shown in the lower part of FIG. 11.
- as a result, the emission period TL no longer overlaps with the emission period TLo in the waveform WLo of the other light source, and the collision is avoided.
- however, since the length of the blank period is changed randomly, there is a risk that the collision is not avoided even after the change; in that case, the blank period may be repeatedly changed at random until the collision is avoided.
- note that while the lengths of the blank period Tb1 in the upper part of FIG. 11 and the blank period Tb2 in the lower part of FIG. 11 are changed at random, the lengths of the exposure period Te and the read period Tr remain the same.
- since measuring the distance with TapA alone reduces the frame rate, the device normally operates in the normal mode processing and detects collisions each time a predetermined time elapses; when a collision is detected, the length of the blank period is changed at random.
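- one way the random change of the blank period could be realized is sketched below (the range bounds are illustrative assumptions; only the blank period changes, while the exposure period Te and read period Tr stay fixed):

```python
import random

def next_blank_period(min_s=1e-3, max_s=5e-3):
    """Pick a new blank (light-off) period at random when a collision is
    detected, so the emission period stops overlapping the other source."""
    return random.uniform(min_s, max_s)
```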
- the operation mode in which the collision is detected as described above and the collision is avoided when the collision is detected is hereinafter referred to as the collision avoidance mode processing.
- step S51 the control unit 31 performs distance measurement in pixel units by the normal mode processing described with reference to the flowchart of FIG. 8, and generates and outputs a depth image based on the depth signal that is the distance measurement result.
- here, however, only the processing of steps S11 to S15 in the flowchart of FIG. 8 is performed.
- step S52 the control unit 31 determines whether or not a predetermined time has elapsed since the normal mode processing was performed; the processing of steps S51 and S52 is repeated until the predetermined time elapses, with depth images continuing to be generated by the normal mode processing. Then, if it is determined in step S52 that the predetermined time has elapsed, the process proceeds to step S53.
- step S53 the control unit 31 executes the collision detection process to obtain, a predetermined number of times, the integrated values of the TapB pixel signals at the four integration times.
- the details of the collision detection process will be described later with reference to FIG.
- step S54 the control unit 31 determines whether or not the standard deviations of the predetermined number of integrated values of the TapB pixel signals at the four integration times are all less than the predetermined threshold, that is, whether or not the variation of the integrated values indicates that modulated light from another light source is included.
- if it is determined in step S54 that at least one of the standard deviations of the predetermined number of integrated values of the TapB pixel signals at the four integration times is not less than the predetermined threshold, that is, that a collision due to modulated light from some other light source has occurred, the process proceeds to step S55.
- step S55 the control unit 31 controls the optical modulation unit 21 to change the length of the blank period of the light emitting period of the light emitting diode 22 based on a random number. That is, this process changes the length of the blank period of the light emitting period of the light emitting diode 22 in the normal mode process.
- if it is determined in step S54 that the standard deviations of the predetermined number of integrated values of the TapB pixel signals at the four integration times are all less than the predetermined threshold, that is, that there is no collision due to modulated light from another light source, the process of step S55 is skipped.
- step S56 the control unit 31 determines whether or not the end of the process is instructed, and if the end is not instructed, the process returns to step S51, and the subsequent processes are repeated.
- step S56 when the end of the process is instructed, the process ends.
- step S71 the control unit 31 initializes the counter N to 0.
- this counter N is a counter for distinguishing the four integration times.
- step S72 the control unit 31 controls the second tap TB (TapB) of each of the pixels 51 of the TOF sensor 26 to expose within the integration time of the length set corresponding to the counter N.
- the pixel signal is detected and output to the calculation unit 29 via the synchronization processing unit 28.
- step S73 the calculation unit 29 integrates the pixel signals, which are the exposure results of TapB, for the integration time set in association with the counter N, obtains and stores the integrated value. Therefore, the processes of steps S72 and S73 are repeated within the integration time.
- step S74 the control unit 31 determines whether or not the counter N is 3, and if N is not 3, the process proceeds to step S75.
- step S75 the control unit 31 increments the counter N by 1, and the process returns to step S72.
- step S74 the processes of steps S72 to S75 are repeated until the counter N is determined to be 3.
- step S74 when it is determined in step S74 that the counter N is 3, that is, when the integrated values of the pixel signals in the four types of integration periods are obtained, the process proceeds to step S76.
- step S76 the control unit 31 determines whether or not the process has been repeated a predetermined number of times, and if the process has not been repeated a predetermined number of times, the process returns to step S71.
- the process in steps S72 and S73 is a process in which the integrated value of the pixel signal detected in the four types of integrated times of different lengths distinguished by the counter N is obtained and stored.
- step S76 when it is determined that a predetermined number of integrated values corresponding to the four types of integrated times have been obtained after being repeated a predetermined number of times, the process proceeds to step S77.
- step S77 the calculation unit 29 obtains the standard deviation of each of the four types of integrated values in the integrated period having different lengths, and outputs the standard deviation to the control unit 31.
- TapB obtains the integrated value of the pixel signals at multiple integrated times of different lengths and their standard deviations.
- when only background light such as sunlight is included, the integrated values detected by TapB at the four integration times vary little, so the standard deviation is smaller than the predetermined threshold obtained in advance, and it can be considered that no collision due to modulated light from another light source has occurred.
- conversely, when modulated light from another light source is included, the integrated values detected by TapB at the four integration times vary widely, so the standard deviation is larger than the threshold obtained in advance, and it can be considered that a collision due to modulated light from another light source has occurred.
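- putting steps S71 to S77 and this decision rule together, the measurement loop that feeds the standard-deviation test might look like this (hypothetical sensor helpers again; the integration times and repeat count are illustrative, and n_repeats must be at least 2 for a standard deviation):

```python
import statistics

def tap_b_std_devs(sensor, integration_times, n_repeats):
    """Steps S71 to S77: for each integration time (counter N = 0..3),
    repeatedly integrate the TapB pixel signal, then return the standard
    deviation of the repeats for each integration time."""
    samples = {t: [] for t in integration_times}
    for _ in range(n_repeats):                             # S76 loop
        for t in integration_times:                        # S71-S75 loop over N
            samples[t].append(sensor.integrate_tap_b(t))   # S72, S73
    return {t: statistics.stdev(v) for t, v in samples.items()}  # S77

def any_collision(std_devs, threshold):
    # Step S54: a collision is assumed if at least one standard
    # deviation is not less than the predetermined threshold.
    return any(s >= threshold for s in std_devs.values())
```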
- since distance measurement and depth image generation can be repeated with TapA alone at a reduced frame rate, distance measurement can proceed in parallel in TapA even during the collision detection process, and collision detection can be performed while depth images continue to be generated.
- the difference frequency is 16.7MHz
- the period during which light is emitted in the waveform WLo of another light source in the figure is the period H1 to H5.
- since using measurement results on the ns order is not realistic, the measurement results of multiple cycles, for example 1000 cycles, are integrated and used; as a result, measurement results on the us order are used, but the ratio of the TapA and TapB pixel signals remains the same even if the integration time is increased 1000-fold, for example from 60 ns or 110 ns to 60 us or 110 us.
- the difference frequency from the own modulation frequency can be obtained. It becomes possible to obtain it, and it is possible to obtain the approximate modulation frequency of another light source from its own modulation frequency and the difference frequency.
- When the difference frequency is 100 kHz, the difference I of the integrated values detected by TapA and TapB is expressed as a waveform c1 having a period of 10 us.
- When the difference frequency is 90 kHz, the difference I of the integrated values detected by TapA and TapB is expressed as a waveform c2 having a period of 11.1 us.
- When the difference frequency is 80 kHz, the difference I of the integrated values detected by TapA and TapB is expressed as a waveform c3 having a period of 12.5 us.
- For example, the modulation frequency of the own light source is 100 MHz,
- so its modulation cycle is 10 ns,
- while the modulation frequency of the other light source is 99.9 MHz,
- so its modulation cycle is 10.01 ns.
- In this case, the difference I detected by TapA and TapB changes at 100 kHz, the difference between the two modulation frequencies.
- That is, the difference I of the pixel signals detected by TapA and TapB changes periodically with a cycle of 10 us.
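- The arithmetic behind these example values can be written out explicitly; the symbols below are introduced only for this illustration.

```latex
f_{\mathrm{diff}} = \left| f_{\mathrm{own}} - f_{\mathrm{other}} \right|
                  = \left| 100\,\mathrm{MHz} - 99.9\,\mathrm{MHz} \right| = 100\,\mathrm{kHz},
\qquad
T_{\mathrm{diff}} = \frac{1}{f_{\mathrm{diff}}} = 10\,\mathrm{\mu s}
```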
- A difference frequency of 100 kHz or less is identified by sampling the difference I detected by TapA and TapB at integration times that are multiples of 2.5 us, obtained by dividing the 10 us period into four equal parts.
- When the samples of the difference I of the integrated values detected by TapA and TapB at the integration times of 2.5 us, 5.0 us, 7.5 us, and 10 us fall on the waveform c1, the difference frequency is identified as that of the waveform c1, namely 100 kHz.
- When the difference I of the integrated values detected by TapA and TapB at the integration times of 2.5 us, 5.0 us, 7.5 us, and 10 us takes the values on the waveform c2,
- the difference frequency is identified as that of the waveform c2, namely 90 kHz.
- Likewise, when the difference I of the integrated values detected by TapA and TapB at the integration times of 2.5 us, 5.0 us, 7.5 us, and 10 us takes the values on the waveform c3, the difference frequency is identified as that of the waveform c3, namely 80 kHz.
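- The identification of the difference frequency from the four samples can be sketched as follows; the candidate frequencies, the sinusoidal model, and the least-squares scoring are illustrative assumptions made for the example, not the method as claimed.

```python
import math

# Candidate difference frequencies corresponding to the waveforms c1 to c3 (Hz).
CANDIDATES_HZ = {"c1": 100e3, "c2": 90e3, "c3": 80e3}

# Integration times at which the difference I is sampled (seconds):
# the 10 us period divided into four equal parts.
SAMPLE_TIMES_S = [2.5e-6, 5.0e-6, 7.5e-6, 10.0e-6]


def identify_difference_frequency(samples, amplitude=1.0):
    """Return the name of the candidate waveform that best fits the samples.

    samples: the difference I of the TapA/TapB integrated values observed
    at SAMPLE_TIMES_S, scaled to the same amplitude as the model.
    """
    def misfit(freq_hz):
        model = [amplitude * math.sin(2 * math.pi * freq_hz * t)
                 for t in SAMPLE_TIMES_S]
        return sum((m - s) ** 2 for m, s in zip(model, samples))

    return min(CANDIDATES_HZ, key=lambda name: misfit(CANDIDATES_HZ[name]))
```

- The modulation frequency of the other light source then follows from the own modulation frequency and the identified difference frequency, up to a sign ambiguity (f_own ± f_diff).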
- FIG. 15 describes an example in which the integrated values detected by TapA and TapB at the integration times of 2.5 us, 5.0 us, 7.5 us, and 10 us are plotted on the waveforms c1 to c3 of the difference frequency.
- In this way, the modulation frequency of the other light source is estimated from the modulation frequency of the own light source and the difference frequency.
- The distance and position of the other light source are specified from the depth image obtained by distance measurement using the own light source and the depth image obtained by distance measurement using the other light source.
- The distance to the object can then be measured by receiving the reflected light generated when the irradiation light from the other light source is reflected by the object.
- The other light source is, for example, a rapidly blinking ceiling light installed on the ceiling.
- That is, the modulation frequency of the other light source is estimated, and the distance and position of the other light source are specified using both the own light source and the other light source. Then, as long as the positional relationship between the own light source and the other light source consisting of the ceiling lighting does not change, distance measurement using only the other light source can continue to be realized.
- The ceiling illumination 81 emits light with a waveform pattern WLp as shown in the upper part of FIG.
- In the waveform pattern WLp, timings at which light is emitted at a modulation frequency consisting of a rectangular waveform and timings at which the light is extinguished without a rectangular waveform are set alternately; it is assumed, however, that no phase shift occurs in the modulation frequency pattern even across the timings without a rectangular waveform.
- The depth sensor 11 measures the distances to the objects 71 to 73 by causing the light emitting diode 22, which is its own light source, to emit light at its own modulation frequency.
- Here, the distance from the depth sensor 11 to the object 71 is 1 m,
- the distance to the object 72 is 1.5 m,
- and the distance to the object 73 is 0.5 m.
- By this measurement, the positional relationship between the depth sensor 11 and the objects 71 to 73 is also specified.
- The depth sensor 11 estimates the modulation frequency of the ceiling illumination 81, stops the light emission of the light emitting diode 22 which is its own light source, and adjusts the exposure periods of TapA and TapB to a modulation frequency as close as possible to the estimated modulation frequency of the ceiling illumination 81; calibration is performed using the distance measurement result obtained when the light emitting diode 22 emits light at its own modulation frequency, thereby enabling distance measurement using the ceiling illumination 81. The depth sensor 11 then adjusts the exposure periods of TapA and TapB to the modulation frequency of the ceiling illumination 81 and, with the light emitting diode 22 which is its own light source stopped, measures the distances to the ceiling illumination 81 via the objects 71 to 73 and generates the corresponding depth image.
- It is assumed that the distance from the depth sensor 11 to the ceiling illumination 81 via the object 71 is 1.5 m,
- the distance to the ceiling illumination 81 via the object 72 is 2.0 m,
- and the distance to the ceiling illumination 81 via the object 73 is 1.0 m.
- The depth sensor 11 obtains the position and distance of the ceiling illumination 81 from the correspondence between the depth image using its own light source and the depth image using the ceiling illumination 81 as the light source.
- That is, the depth sensor 11 determines the position of the ceiling illumination 81 by obtaining the intersection of the spherical surfaces K1 to K3.
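- This sphere-intersection step can be sketched with standard trilateration, as below; the coordinate construction is a textbook method, and the argument names are illustrative (in the scenario above, p1 to p3 would be the measured positions of the objects 71 to 73 and r1 to r3 the one-way distances from each object to the ceiling illumination 81).

```python
import numpy as np


def trilaterate(p1, p2, p3, r1, r2, r3):
    """Return the two candidate points at distances r1, r2, r3 from p1, p2, p3.

    Standard algebraic trilateration: build an orthonormal frame from the
    three sphere centers, solve for the coordinates in that frame, and map
    back. Raises if the spheres do not intersect.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / d) * x
    z_sq = r1**2 - x**2 - y**2
    if z_sq < 0:
        raise ValueError("the spheres K1 to K3 do not intersect")
    z = np.sqrt(z_sq)
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez
```

- Of the two candidate points, the one on the ceiling side (for example, the one with the greater height) would be selected.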
- When its own light emitting diode 22 is used, the depth sensor 11 measures the distances to the objects 71 to 73 using 1/2 of the round-trip time of the optical path.
- When the ceiling illumination 81 is used, however, the distance is measured from the one-way time required for the light to travel from the ceiling illumination 81 via the objects 71 to 73.
- Since the depth sensor 11 is calibrated on the basis of the distance measurement result obtained when the light emitting diode 22, its own light source, is used, the distance measurement result obtained when the ceiling illumination 81 is used becomes 1/2 of the actual value. Therefore, when the ceiling illumination 81 is used as the light source, the depth sensor 11 needs to double the distance measurement result (multiply it by 2).
- In this way, the depth sensor 11 can realize distance measurement using the ceiling illumination 81 as the light source, and the light emitting diode 22, which is its own light source, does not need to emit light; the power consumption of the light emitting diode 22 can therefore be reduced.
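- The halving-and-doubling logic can be made explicit in a short sketch, under the assumption (stated above) that the sensor's calibration always halves the measured flight time; the function names are illustrative.

```python
C = 299_792_458.0  # speed of light in m/s


def distance_with_own_source(round_trip_time_s):
    # Own LED: light travels sensor -> object -> sensor, so halve the path.
    return C * round_trip_time_s / 2


def distance_with_other_source(apparent_distance_m):
    # External source such as the ceiling illumination 81: the light travels
    # one way (source -> object -> sensor), but the calibration above still
    # halves it, so the apparent distance must be doubled.
    return apparent_distance_m * 2
```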
- In step S111, the control unit 31 measures the distances to the surrounding objects using only the irradiation light of its own light source by the correction mode processing (FIG. 9), and generates a depth image. That is, by this processing, the distances to, for example, the objects 71 to 73 in FIG. 17 are measured with the irradiation light of the own light source, and a depth image is generated.
- In step S112, the control unit 31 executes the modulation frequency estimation process and estimates the modulation frequency of the other light source.
- That is, the modulation frequency of the irradiation light of the other light source, such as the ceiling illumination 81 described with reference to FIG. 17, is estimated.
- The details of the modulation frequency estimation process will be described later with reference to the corresponding flowchart.
- In step S113, the control unit 31 measures, by the normal mode processing (FIG. 8) using the other light source, the distance to the same object as that measured with its own light source, using the irradiation light of the other light source such as the ceiling illumination 81 in FIG. 17, and generates a depth image.
- At this time, the exposure periods of TapA and TapB are set based on the estimation result of the modulation frequency of the other light source.
- In step S114, the calculation unit 29 calculates the position and distance of the other light source from the depth image obtained by distance measurement based on its own light source and the depth image obtained by distance measurement based on the other light source.
- In step S115, the control unit 31 measures the distance to the object by the normal mode processing using the other light source, taking the position and distance of the other light source into consideration, and generates a depth image.
- That is, the calculation unit 29 measures the distance to the object with the irradiation light of the other light source, taking the distance and position from the depth sensor 11 to the other light source into consideration, and generates a depth image. Further, since the depth sensor 11 does not require the irradiation light of its own light source, its own light source can be turned off; as a result, power consumption can be reduced.
- In step S116, the control unit 31 determines whether or not the end of the distance measurement process using the other light source has been instructed; if the end has not been instructed, the process returns to step S115.
- The processes of steps S115 and S116 are repeated until the end of the normal mode processing using the other light source is instructed.
- When the end is instructed in step S116, the process ends.
- By the above processing, the modulation frequency of the other light source is estimated and the position and distance of the other light source are specified, so that distance measurement can be realized and a depth image generated using only the irradiation light of the other light source, without using the irradiation light of the own light source.
- For example, the depth sensor 11 can realize distance measurement by using the irradiation light of the ceiling illumination 81 without causing its own light source to emit light.
- In step S151, the control unit 31 initializes the counter N for identifying each integration time (IntegTime) obtained when the period of the difference frequency is divided into four equal parts.
- For example, when the period of the difference frequency is 10 us, dividing it into four equal parts gives 2.5 us, so four integration times (IntegTime) of 2.5 us, 5.0 us, 7.5 us, and 10 us are set.
- In step S152, the control unit 31 controls the optical modulation unit 21 to stop the light emission of the light emitting diode 22.
- In step S153, the control unit 31 controls the first tap TA (TapA) of each of the pixels 51 of the TOF sensor 26 to expose at a timing synchronized with the light emission timing of the light emitting diode 22, which is its own light source, so that the pixel signal is detected and output to the calculation unit 29 via the synchronization processing unit 28.
- In step S154, the calculation unit 29 integrates and stores the pixel signal which is the exposure result of TapA.
- In step S155, the control unit 31 controls the second tap TB (TapB) of each of the pixels 51 of the TOF sensor 26 to expose at the timing at which the light emitting diode 22, its own light source, is extinguished, so that the pixel signal is detected and output to the calculation unit 29 via the synchronization processing unit 28.
- In step S156, the calculation unit 29 integrates and stores the pixel signal which is the exposure result of TapB.
- In step S157, the calculation unit 29 determines whether or not the integration time corresponding to the counter N has elapsed; if it has not, the process returns to step S153. That is, the pixel signals, which are the exposure results of TapA and TapB, are integrated sequentially until the integration time corresponding to the counter N elapses.
- When it is determined in step S157 that the integration time corresponding to the counter N has elapsed, the process proceeds to step S158.
- In step S158, the calculation unit 29 calculates the difference I between the stored pixel signal integration result, which is the exposure result of TapA, and the pixel signal integration result, which is the exposure result of TapB, and stores it as the sampling result for the integration time corresponding to the counter N.
- In step S159, the control unit 31 determines whether or not the counter N is 3, that is, whether or not the difference I has been sampled for all integration times.
- If the counter N is not 3 in step S159, the process proceeds to step S160.
- In step S160, the control unit 31 increments the counter N by 1, and the process returns to step S153. That is, by repeating the processes of steps S153 to S159, the difference I between the pixel signal integration result of TapA and the pixel signal integration result of TapB is calculated sequentially for the integration time associated with the counter N, until the difference I has been sampled for the four types of integration times.
- When it is determined in step S159 that the difference I between the pixel signal integration result of TapA and that of TapB has been calculated and sampled for all four types of integration times, that is, when the counter N is 3, the process proceeds to step S161.
- In step S161, the calculation unit 29 estimates the modulation frequency of the other light source using the difference I between the pixel signal integration result, which is the exposure result of TapA, and the pixel signal integration result, which is the exposure result of TapB, sampled at the four types of integration times.
- In other words, the calculation unit 29 estimates the modulation frequency of the other light source using the difference I between the integration result of the pixel signals that are the exposure results of TapA and the integration result of the pixel signals that are the exposure results of TapB, sampled at the four types of integration times (IntegTime).
- More specifically, the calculation unit 29 estimates the waveform of the difference frequency between the modulation frequency of its own light source and the modulation frequency of the other light source using the sampling results of the difference I.
- Then, the calculation unit 29 estimates the modulation frequency of the other light source based on the estimated difference frequency and the modulation frequency of its own light source.
- The period of the difference frequency may also be divided into more than four equal parts to increase the number of samples of the difference I. By increasing the number of samples in this way, the modulation frequency of the other light source can be estimated with higher accuracy.
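- Putting the steps of this flowchart together, a minimal end-to-end sketch could look as follows. The callables read_tap_a and read_tap_b standing in for the TapA/TapB exposures, the candidate list, and the least-squares fit are all assumptions for illustration; they are not the disclosure's implementation.

```python
import math


def estimate_other_frequency(read_tap_a, read_tap_b, f_own_hz,
                             integ_times_s=(2.5e-6, 5.0e-6, 7.5e-6, 10e-6),
                             half_cycle_s=5e-9,
                             candidate_diffs_hz=(80e3, 90e3, 100e3)):
    """Sketch of steps S151 to S161: sample the difference I at four
    integration times, fit a difference-frequency waveform, and derive the
    modulation frequency of the other light source."""
    samples = []
    for integ_time in integ_times_s:              # counter N = 0 to 3
        sum_a = sum_b = elapsed = 0.0
        while elapsed < integ_time:               # steps S153 to S157
            sum_a += read_tap_a()                 # exposure in emission phase
            sum_b += read_tap_b()                 # exposure in extinction phase
            elapsed += 2 * half_cycle_s
        samples.append(sum_a - sum_b)             # difference I (step S158)

    def misfit(f_diff):                           # crude waveform fit
        return sum((s - math.sin(2 * math.pi * f_diff * t)) ** 2
                   for s, t in zip(samples, integ_times_s))

    f_diff_hz = min(candidate_diffs_hz, key=misfit)
    return f_own_hz - f_diff_hz                   # sign ambiguity remains
```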
- the technology according to the present disclosure can be applied to various products.
- For example, the technology according to the present disclosure may be realized as a device mounted on any kind of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
- FIG. 20 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a moving body control system to which the technique according to the present disclosure can be applied.
- the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
- the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
- a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
- the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
- For example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
- the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
- the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
- Radio waves transmitted from a portable device that substitutes for a key, or signals of various switches, can be input to the body system control unit 12020.
- the body system control unit 12020 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
- the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
- an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
- the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
- The vehicle exterior information detection unit 12030 may perform, based on the received image, processing for detecting objects such as a person, a vehicle, an obstacle, a sign, or characters on the road surface, or processing for detecting the distance to them.
- the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
- the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
- the in-vehicle information detection unit 12040 detects the in-vehicle information.
- a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
- The driver state detection unit 12041 includes, for example, a camera that images the driver, and the in-vehicle information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing, based on the detection information input from the driver state detection unit 12041.
- The microcomputer 12051 can calculate the control target value of the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
- For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including vehicle collision avoidance or impact mitigation, follow-up driving based on inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, and vehicle lane departure warning.
- Further, the microcomputer 12051 can perform cooperative control for the purpose of automatic driving, in which the vehicle runs autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
- the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
- For example, the microcomputer 12051 can perform cooperative control for the purpose of anti-glare, such as switching from high beam to low beam, by controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030.
- the audio image output unit 12052 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger of the vehicle or the outside of the vehicle.
- an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
- the display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
- FIG. 21 is a diagram showing an example of the installation position of the imaging unit 12031.
- the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, 12105 as imaging units 12031.
- the imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as, for example, the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100.
- the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
- the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
- the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
- the images in front acquired by the imaging units 12101 and 12105 are mainly used for detecting the preceding vehicle, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
- FIG. 21 shows an example of the photographing range of the imaging units 12101 to 12104.
- The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose,
- the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively,
- and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door.
- For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
- At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
- at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.
- For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object located on the traveling path of the vehicle 12100 and traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
- Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control can be performed for the purpose of automatic driving or the like, in which the vehicle runs autonomously without depending on the driver's operation.
- For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines the collision risk indicating the degree of risk of collision with each obstacle; when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving support for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
- At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
- the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104.
- Such pedestrian recognition is performed by, for example, a procedure for extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
- When the microcomputer 12051 determines that a pedestrian is present in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a square contour line for emphasizing the recognized pedestrian. Further, the audio image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
- the above is an example of a vehicle control system to which the technology according to the present disclosure can be applied.
- the technique according to the present disclosure can be applied to, for example, the imaging unit 12031 among the configurations described above.
- the TOF sensor 26 of FIG. 6 can be applied to the imaging unit 12031.
- the present disclosure may also have the following configuration.
- ⟨1⟩ A distance measuring device including: a light receiving unit that receives reflected light generated when irradiation light is reflected by an object in a detection region; a calculation unit that calculates the distance to the object from the phase difference between the irradiation light and the reflected light; and a control unit that controls the light receiving unit and the calculation unit so as to suppress the influence of collision, in the light receiving unit, between required irradiation light, which is the irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is irradiation light not used for calculating the distance to the object.
- ⁇ 2> The distance measuring device according to ⁇ 1>, wherein the required irradiation light used for calculating the distance to the object is the irradiation light for which the information required for calculating the distance to the object is known.
- ⟨3⟩ The distance measuring device according to ⟨2⟩, wherein the information required for calculating the distance to the object is information on the modulation frequency and phase of the irradiation light.
- ⟨4⟩ The distance measuring device according to ⟨1⟩, further including an irradiation unit that irradiates the detection region with pulsed irradiation light as the required irradiation light, wherein the control unit causes the light receiving unit to receive required reflected light, which is the reflected light generated when the required irradiation light emitted by the irradiation unit is reflected by an object in the detection region, and controls the calculation unit to calculate the distance to the object from the phase difference between the required irradiation light emitted by the irradiation unit and the required reflected light.
- ⟨5⟩ The distance measuring device according to ⟨4⟩, wherein the control unit controls the light receiving unit to receive the required reflected light needed for distance measurement with some of the pixels and to receive the unnecessary irradiation light with the pixels other than those.
- ⟨6⟩ The distance measuring device according to ⟨5⟩, wherein the control unit operates the irradiation unit by setting a period for emitting the required irradiation light and a period for extinguishing it, causes some of the pixels of the light receiving unit to receive light in a phase corresponding to the timing at which the irradiation unit emits the required irradiation light and the pixels other than those to receive light in a phase corresponding to the timing at which the irradiation unit is extinguished, and causes the calculation unit to calculate the distance to the object from the phase difference between the required irradiation light and the required reflected light, based on the difference between the pixel signal generated by receiving the required reflected light in the phase corresponding to the timing at which the irradiation unit emits the required irradiation light and the pixel signal generated by receiving light in the phase corresponding to the timing at which the irradiation unit is extinguished.
- ⟨7⟩ The distance measuring device according to ⟨6⟩, wherein the control unit performs control so that the distance to the object is calculated based on the difference between the integrated value of the pixel signals generated by receiving the required reflected light in the phase corresponding to the timing at which the irradiation unit emits the required irradiation light and the integrated value of the pixel signals generated by receiving light in the phase corresponding to the timing at which the irradiation unit is extinguished.
- ⟨8⟩ The distance measuring device according to ⟨4⟩, wherein the control unit controls the light receiving unit so that some of the pixels receive the required reflected light in a phase corresponding to the timing at which the irradiation unit emits the required irradiation light, and the pixels other than those receive light in a phase corresponding to the timing at which the irradiation unit is extinguished, over a plurality of integration times; causes the calculation unit to determine the presence or absence of the collision based on the integrated values of the pixel signals generated by receiving light over the plurality of integration times; and sets, for the irradiation unit, a period for emitting the required irradiation light and a period for extinguishing it, changing the length of the extinguishing period when it is determined that the collision has occurred.
- ⟨9⟩ The distance measuring device according to ⟨8⟩, wherein the control unit controls the irradiation unit to randomly change the length of the extinguishing period when it is determined that the collision has occurred.
- ⟨10⟩ The distance measuring device according to ⟨8⟩, wherein the calculation unit determines the presence or absence of the collision based on a comparison between the standard deviation of the integrated values of the pixel signals generated by receiving light over the plurality of integration times and a predetermined threshold value.
- ⟨11⟩ The distance measuring device according to ⟨4⟩, wherein the control unit causes the light receiving unit to alternately expose some of the pixels and the pixels other than those for each half cycle of the modulation cycle of the modulation frequency of the required irradiation light in the irradiation unit, causes the calculation unit to estimate the modulation frequency of another light source different from the irradiation unit based on the integrated values of the pixel signals detected by the light receiving unit over a plurality of integration times, causes the light receiving unit to alternately expose some of the pixels and the pixels other than those for each half cycle of the modulation cycle of the estimated modulation frequency of the other light source, and controls the calculation unit to calculate the distance to the object from the phase difference between the irradiation light of the other light source and the reflected light of the irradiation light of the other light source.
- ⟨12⟩ The distance measuring device according to ⟨11⟩, wherein the control unit performs control so that the modulation frequency of the other light source is estimated based on the difference between the integrated values of the pixel signals detected by some of the pixels of the light receiving unit and the integrated values of the pixel signals detected by the pixels other than those, over the plurality of integration times.
- ⟨13⟩ The distance measuring device according to ⟨12⟩, wherein the control unit performs control so that the difference frequency between the modulation frequency of the other light source and the modulation frequency of the required irradiation light emitted by the irradiation unit is estimated from the difference between the integrated values of the pixel signals detected by some of the pixels of the light receiving unit and the integrated values of the pixel signals detected by the pixels other than those over the plurality of integration times, and the modulation frequency of the other light source is estimated based on the difference frequency.
- ⟨14⟩ The distance measuring device according to ⟨11⟩, wherein the control unit causes the depth image generation unit to generate a first depth image based on distance information to the object calculated from the pixel signals detected when the light receiving unit, controlled based on the modulation frequency of the required irradiation light emitted by the irradiation unit, receives the required reflected light corresponding to the required irradiation light of the irradiation unit, and a second depth image based on distance information to the object calculated from the pixel signals detected when the light receiving unit, controlled based on the modulation frequency of the other light source, receives the reflected light corresponding to the irradiation light of the other light source; calculates the position and distance of the other light source; and calculates the distance to the object based on the position and distance of the other light source.
- ⟨15⟩ The distance measuring device according to ⟨1⟩, wherein the light receiving unit is a TOF (Time of Flight) sensor.
- ⟨16⟩ A distance measuring method including: a light receiving process of receiving reflected light generated when irradiation light is reflected by an object in a detection region; a calculation process of calculating the distance to the object from the phase difference between the irradiation light and the reflected light; and a control process of controlling the light receiving process and the calculation process so as to suppress, in the light receiving process, the influence of collision between required irradiation light, which is the irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is irradiation light not used for calculating the distance to the object.
- 11 depth sensor, 21 optical modulation unit, 22 light emitting diode, 23 floodlight lens, 24 light receiving lens, 25 filter, 26 TOF sensor, 27 image storage unit, 28 synchronization processing unit, 29 calculation unit, 30 depth image generation unit, 31 control unit, 40 pixel array unit, 41 tap drive unit, 42 vertical drive unit, 47 vertical signal line, 48 voltage supply line, 51 pixel
Abstract
The present disclosure relates to a distance measuring device and a distance measuring method which enable highly accurate distance measurement even in an environment in which a plurality of distance measuring devices are present. An interval at which a light source of a device itself emits light is increased, an exposure period is set in synchronism with the light emission timing, a pixel signal is detected by means of reflected light from the light source of the device itself and from other light sources, and integration is performed a plurality of times. Further, the exposure period is set to a period in which the light source of the device itself is extinguished, the pixel signal is detected by means of the reflected light from the other light sources, and integration is performed a plurality of times. A distance measurement calculation is then performed by subtracting the integrated value of the pixel signal for which the exposure period was set in synchronism with the timing at which the light source was extinguished, from the integrated value of the pixel signal for which the exposure period was set in synchronism with the light emission timing. This disclosure is applicable to ToF sensors.
Description
The present disclosure relates to a distance measuring device and a distance measuring method, and more particularly to a distance measuring device and a distance measuring method capable of measuring a distance with high accuracy even in an environment where a plurality of distance measuring devices exist.
Conventionally, active distance measuring devices such as those of the ToF (Time of Flight) method have been known. Such a distance measuring device irradiates laser light that repeatedly emits with a predetermined pulse width, and receives the reflected light produced when the irradiated laser light strikes an object, thereby measuring the distance based on the round-trip time of the laser light (the phase difference of the laser light associated with the round trip).
By the way, when performing distance measurement to the same object using a plurality of distance measuring devices, each of the plurality of distance measuring devices is affected by the reflected light of the laser beam emitted from the other distance measuring devices. Therefore, there is a risk that proper distance measurement cannot be achieved.
Therefore, a technique has been proposed in which, when a state of being affected by another distance measuring device is sensed, the distance measurement results during that period are not adopted (see Patent Document 1).
Further, a technique has been proposed in which the interval between the emission pulses of the laser beam is sufficiently long so that it is not affected by the reflected light of another ranging device (see Patent Document 2).
However, in the technique described in Patent Document 1, appropriate distance measurement cannot be realized while being affected by the laser beam from another distance measuring device.
Further, in the technique described in Patent Document 2, lengthening the interval between the emission pulses makes distance measurement take longer and makes distance measurement at a high frame rate impossible; for example, when measuring the distance to an object moving at high speed, a time lag occurs in the distance measurement results, and appropriate distance measurement may not be realized in real time.
This disclosure has been made in view of such a situation, and in particular, it enables highly accurate distance measurement even in an environment where a plurality of distance measurement sensors exist.
A distance measuring device according to one aspect of the present disclosure includes: a light receiving unit that receives reflected light generated when irradiation light is reflected by an object in a detection region; a calculation unit that calculates the distance to the object from the phase difference between the irradiation light and the reflected light; and a control unit that controls the light receiving unit and the calculation unit so as to suppress the influence of collision, in the light receiving unit, between required irradiation light, which is the irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is irradiation light not used for calculating the distance to the object.
A distance measuring method according to one aspect of the present disclosure corresponds to the distance measuring device.
In one aspect of the present disclosure, reflected light generated when irradiation light is reflected by an object in a detection region is received, and control is performed so as to suppress the influence of collision between required irradiation light, which is the irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is irradiation light not used for calculating the distance to the object.
The preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings below. In the present specification and the drawings, components having substantially the same functional configuration are designated by the same reference numerals, so that duplicate description will be omitted.
Hereinafter, modes for carrying out the present disclosure (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. Outline of the present disclosure
2. First embodiment
3. Second embodiment
4. Third embodiment
5. Application example to a moving body
Hereinafter, embodiments to which the present technology is applied will be described with reference to the drawings.
<<1. Outline of the present disclosure>
<Effects of multiple distance measuring devices>
The present disclosure realizes distance measurement with high accuracy even when a plurality of distance measuring devices are used.
For example, as shown in FIG. 1, assume a case in which a plurality of depth sensors (each including a distance measuring device) 1-1 and 1-2 are used, each capturing a depth image (distance image) of an object 4.
The depth sensors 1-1 and 1-2 in FIG. 1 include light sources 2-1 and 2-2 and distance measuring devices 3-1 and 3-2, respectively.
The light sources 2-1 and 2-2 irradiate the object 4 with a laser beam having a predetermined pulse width.
The distance measuring devices 3-1 and 3-2 receive the reflected light R1 and R2 generated when the irradiation light L1 and L2 emitted from the light sources 2-1 and 2-2, respectively, is reflected by the object 4, measure the distance to the object 4 in pixel units based on the round-trip time of the irradiation light, and generate a depth image composed of the pixel-by-pixel distance measurement results.
Note that the depth sensors 1-1 and 1-2, the light sources 2-1 and 2-2, and the distance measuring devices 3-1 and 3-2 are hereinafter referred to simply as the depth sensor 1, the light source 2, and the distance measuring device 3, respectively, when there is no particular need to distinguish them; other configurations are referred to in the same manner.
<Principle of distance measurement>
The distance measuring device 3 receives the reflected light generated when the irradiation light, consisting of laser light with a predetermined pulse width emitted from the light source 2, is reflected by the object 4, obtains the round-trip time of the laser light based on the phase difference between the irradiation light and the reflected light, and measures the distance to the object 4.
More specifically, the light source 2 irradiates, for example, a laser beam having a pulse width as shown in the uppermost stage of FIG. 2 as irradiation light.
The distance measuring device 3 includes a light receiving element that receives the reflected light, shown in the second row from the top of FIG. 2, generated when the irradiation light shown in the top row of FIG. 2 is reflected by the object 4, and that generates a pixel signal corresponding to the amount of light received. The phase of the reflected light is delayed relative to the phase of the irradiation light by a delay time ΔT corresponding to the distance to the object.
The distance measuring device 3 measures the distance to the object 4 based on the delay time ΔT caused by this phase difference. At this time, the distance measuring device 3 synchronizes the exposure time of the light receiving element with the phase of the irradiation light, and receives light, for example, by dividing it into two periods.
More specifically, the distance measuring device 3 receives light in two periods of equal length: exposure A, shown in the third row from the top of FIG. 2, which is the exposure time while the pulse of the irradiation light is the Hi signal, and exposure B, shown in the fourth row from the top of FIG. 2, which is the exposure time while the pulse of the irradiation light is the Low signal.
In the following, among the light receiving elements of the distance measuring device 3, the light receiving element exposed in the exposure period indicated by exposure A in FIG. 2 is referred to as TapA, and the light receiving element exposed in the exposure period indicated by exposure B in FIG. 2 is referred to as TapB.
The sum of the pixel signals generated by photoelectric conversion in the respective exposure periods of TapA and TapB is always the pixel signal detected over an exposure period corresponding to the emission period of the light source 2, during which the phase of the irradiation light is the Hi signal.
On the other hand, the pixel signal generated by TapB is a pixel signal in the exposure period that is reduced by the delay time ΔT with respect to the pixel signal generated in the period that is regarded as the Hi signal in the irradiation light.
Therefore, the distance measuring device 3 obtains the delay time ΔT from the ratio of the pixel signal of TapB to the sum of the pixel signals of TapA and TapB, regards the obtained delay time ΔT as the round-trip time of light between the distance measuring device 3 and the object 4, and measures the distance from the distance measuring device 3 to the object 4 based on that round-trip time.
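A minimal numerical sketch of this ratio computation, assuming an ideal rectangular pulse and ignoring background light (both assumptions made for the example, not conditions stated in the disclosure):

```python
C = 299_792_458.0  # speed of light, m/s


def distance_from_taps(q_a, q_b, pulse_width_s):
    """Estimate the distance from the two tap charges of one pixel.

    q_a, q_b: pixel signals accumulated by TapA and TapB.
    The delay is recovered from the share of the charge that falls into
    TapB: delta_t = pulse_width * q_b / (q_a + q_b).
    """
    delta_t = pulse_width_s * q_b / (q_a + q_b)
    return C * delta_t / 2  # halve: the light travels to the object and back


# Example: with a 10 ns pulse and 20% of the charge in TapB,
# delta_t = 2 ns, i.e. a distance of about 0.30 m.
```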
<Effect of using a plurality of distance measuring devices>
The depth sensor 1 measures distance according to the principle of distance measurement described with reference to FIG. 2; however, when a plurality of depth sensors 1 measure the distance to the same object 4, each cannot realize appropriate distance measurement unless it receives the reflected light from the object 4 corresponding to its own irradiation light.
For example, in the case shown in FIG. 1, appropriate distance measurement can be realized as long as the irradiation light L1 and L2 emitted by the light sources 2-1 and 2-2 of the depth sensors 1-1 and 1-2 is reflected by the object 4 and received as the reflected light R1 and R2 by the distance measuring devices 3-1 and 3-2, respectively.
However, if, for example, the distance measuring device 3-2 receives, together with the reflected light R2 corresponding to the irradiation light L2 of the light source 2-2, the reflected light R1 corresponding to the irradiation light L1 of the light source 2-1, appropriate distance measurement becomes impossible due to the influence of the reflected light R1. That is, for the distance measuring device 3-2, the reflected light R2 corresponding to the irradiation light L2 is the reflected light necessary for distance measurement, whereas the reflected light R1 corresponding to the irradiation light L1 is reflected light unnecessary for distance measurement. Hereinafter, the reflected light necessary for distance measurement is also referred to as required reflected light, and the irradiation light corresponding to the required reflected light is also referred to as required irradiation light. Similarly, irradiation light unnecessary for distance measurement is also referred to as unnecessary irradiation light, and the reflected light corresponding to the unnecessary irradiation light is also referred to as unnecessary reflected light. Therefore, for the distance measuring device 3-2, the irradiation light L2 is required irradiation light and the reflected light R2 is required reflected light, while the irradiation light L1 is unnecessary irradiation light and the reflected light R1 is unnecessary reflected light. The distance measuring device 3-1 is affected in the same way.
Thus, when a plurality of depth sensors 1-1 and 1-2 are used, the distance measuring devices 3-1 and 3-2 may be unable to realize appropriate distance measurement because of the mutual influence of the required reflected light and the unnecessary reflected light (or of the required irradiation light and the unnecessary irradiation light). That is, although the modulation frequency and phase of the required irradiation light and the required reflected light are known, they change into unknown ones under the influence of the unnecessary irradiation light and unnecessary reflected light, whose modulation frequency and phase are unknown, and appropriate distance measurement may become impossible. Hereinafter, an influence that changes the modulation frequency or phase of required irradiation light or required reflected light having a known modulation frequency and phase is also referred to as a collision. In other words, a collision between required irradiation light or required reflected light and unnecessary irradiation light or unnecessary reflected light may change the known modulation frequency or phase and prevent appropriate distance measurement. To realize appropriate distance measurement even when a plurality of depth sensors 1-1 and 1-2 are used in this way, the depth sensors 1-1 and 1-2 must be able to receive the reflected light R1 and R2 separately, suppress the influence of collisions, and use only the required reflected light for distance measurement. Several methods are conceivable for distinguishing and receiving the reflected light R1 and R2.
As a first method of distinguishing and receiving the reflected light R1 and R2, for example, as shown in the upper part of FIG. 3, a system is conceivable in which the timing is controlled so that the light sources 2-1 and 2-2 emit light at equal intervals without their emission timings overlapping.
In the upper, middle, and lower rows of FIG. 3, the upper waveform shows the emission timing of the irradiation light L1 emitted by the light source 2-1, and the lower waveform shows the emission timing of the irradiation light L2 emitted by the light source 2-2.
In each waveform, the timing at which the rectangular pulse waveform occurs is the timing at which the predetermined modulated light is emitted.
A second method of receiving the reflected light R1 and R2 separately is, for example, as shown in the middle part of FIG. 3, to randomly control the emission timing (interval) using random numbers or the like so that the emission timings of the light sources 2-1 and 2-2 do not overlap. In the middle stage of FIG. 3, the emission timing of the irradiation light L1 has a period T1, and the emission timing of the irradiation light L2 has a period T2.
A third method of receiving the reflected light R1 and R2 separately is, for example, as shown in the lower part of FIG. 3, to randomly change the modulation method used for the light emission of the light sources 2-1 and 2-2. In this method, since the distance measuring devices 3-1 and 3-2 know the modulation methods used for the light emission of the light sources 2-1 and 2-2, the other light can be separated as background light even if the emission timings of the irradiation light overlap.
However, to realize the first method, a configuration that controls the emission timings of the light sources 2-1 and 2-2 of the depth sensors 1-1 and 1-2 so that they do not overlap must be provided separately from the depth sensors 1-1 and 1-2, or one of the depth sensors 1-1 and 1-2 must control the timing; this increases the cost of the device configuration and complicates the control.
In the second method, since the emission timings of the light sources 2-1 and 2-2 change randomly, the irradiation light L1 and L2 may be emitted at overlapping timings. The distance measuring devices 3-1 and 3-2 cannot determine whether the timings overlap and cannot separate the reflected light R1 and R2 corresponding to the irradiation light L1 and L2, so proper distance measurement may not be possible.
Furthermore, in the third method, even if the modulation methods used for the light emission of the light sources 2-1 and 2-2 change randomly, the possibility that they coincide cannot be ruled out, so appropriate distance measurement cannot always be realized.
Therefore, in the present disclosure, the interval between the emission timings of the sensor's own light source 2 is widened by a predetermined time, and the pixel signals obtained in the distance measuring device 3 from the reflected light corresponding to the irradiation light of its own light source 2 and the pixel signals obtained from the reflected light corresponding to the irradiation light of other light sources 2 are each integrated over a plurality of repetitions. From the difference of the integration results, the phase difference between the irradiation light of its own light source 2 and its reflected light is obtained by a 4-phase distance measurement calculation, and the distance to the object 4 is thereby measured.
Note that the interval between emission timings referred to here is not the interval between the individual pulses of the waveform, but the interval between an emission state, in which the light source blinks and emits light modulated at a predetermined modulation frequency, and an extinguished state, in which the light source emits no light at all.
Here, the 4-phase distance measurement calculation method will be described.
The distance measurement calculation method described with reference to FIG. 2 is a 2-phase method, as opposed to the 4-phase method: in synchronization with the phase of the irradiation light, TapA and TapB detect pixel signals at two phases within one frame (one modulation period), one aligned with the timing at which the irradiation light is emitted and one with the timing at which it is extinguished, and the distance is calculated based on the phase difference expressed by the ratio of the TapB pixel signal to the sum of the TapA and TapB pixel signals.
In contrast, the 4-phase method is the following distance measurement calculation method.
In explaining the 4-phase distance measurement calculation method, it is assumed that, as shown in the top row (Emitted Light) of FIG. 4, irradiation light modulated so that irradiation is repeatedly turned on and off with an irradiation time T (one period = 2T) is output. It is further assumed that, as shown in the second row (Reflected Light) of FIG. 4, the light receiving element of the distance measuring device 3 receives the reflected light generated by the reflection of the irradiation light by the object with a delay time ΔT corresponding to the distance to the object.
In the 4-phase method, as shown by φ0 to φ3 in the third to seventh rows from the top of FIG. 4, the exposure timings of TapA and TapB are controlled so that exposures are made at four timings: the same phase as the irradiation light (Phase0), a phase shifted by 90 degrees (Phase90), a phase shifted by 180 degrees (Phase180), and a phase shifted by 270 degrees (Phase270). A pixel signal is detected in each exposure period.
Note that the timings of Phase0 and Phase180 can be set consecutively within the same frame, and likewise the timings of Phase90 and Phase270 can be set consecutively within the same frame. Therefore, with TapA receiving light at the consecutive timings of Phase0 and Phase180, and TapB receiving light at the consecutive timings of Phase90 and Phase270, exposure times for all four phases can be set simultaneously within one frame.
The signal values detected by TapA and TapB at Phase0, Phase90, Phase180, and Phase270 in the 4-phase method are also referred to as q0A, q1A, q2A, and q3A, respectively.
The phase shift amount θ corresponding to the delay time ΔT can be detected from the distribution ratio of the signal values q0A, q1A, q2A, and q3A. That is, since the delay time ΔT is obtained from the phase shift amount θ, the distance to the object can be obtained from the delay time ΔT.
The distance to the object 4 is calculated, for example, by the following equation (1).
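The body of equation (1) is an image in the original publication and does not survive text extraction. The following is a reconstruction from the surrounding definitions (Distance from ΔT, ΔT from the phase shift, and the ratio (φ1−φ3)/(φ0−φ2) used later in the text), namely the standard 4-phase relation, and is not a verbatim copy of the original:

\[ \mathrm{Distance} = \frac{C \cdot \Delta T}{2}, \qquad \Delta T = \frac{\theta}{2\pi f_{mod}}, \qquad \theta = \arctan\!\left(\frac{\varphi_1 - \varphi_3}{\varphi_0 - \varphi_2}\right) \quad \cdots (1) \]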
Here, C is the speed of light, ΔT is the delay time, f_mod is the modulation frequency of the light, φ0 to φ3 are the signal values q0A, q1A, q2A, and q3A detected at Phase0, Phase90, Phase180, and Phase270, respectively, as shown in the third to seventh rows of FIG. 4, and Distance is the distance from the distance measuring device 3 to the object 4.
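As a concrete illustration of this calculation, the following is a minimal per-pixel sketch assuming the reconstruction of equation (1) above; the function name and the use of NumPy are illustrative choices, not part of the disclosure:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def four_phase_distance(phi0, phi1, phi2, phi3, f_mod):
    """4-phase ToF distance per the reconstructed equation (1).

    phi0..phi3 are the signal values detected at Phase0, Phase90,
    Phase180, and Phase270 (q0A..q3A in the text); f_mod is the
    modulation frequency in Hz.
    """
    # arctan2 resolves the quadrant of the phase shift amount.
    theta = np.arctan2(phi1 - phi3, phi0 - phi2)
    theta = np.mod(theta, 2.0 * np.pi)       # fold into [0, 2*pi)
    delta_t = theta / (2.0 * np.pi * f_mod)  # delay time ΔT
    return C * delta_t / 2.0                 # halve the round trip
```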
In the present disclosure, when there is no influence from a plurality of depth sensors 1, the distance measurement calculation is normally performed by the 4-phase method; hereinafter, the operation mode using this distance measurement calculation is referred to as the normal mode.
However, when a plurality of depth sensors 1 are provided, even the 4-phase distance measurement calculation is affected by the irradiation light emitted by the light sources 2 of the other depth sensors 1, so appropriate distance measurement cannot be realized by that processing as it is.
Therefore, in the present disclosure, when the same object 4 is measured by a plurality of depth sensors 1 and the depth sensors 1 are thus affected by the irradiation light of others, the emission interval of the sensor's own irradiation light is set longer than the time required for the reflected light to attenuate sufficiently and its light amount to become small, as shown in FIG. 5. In other words, between the emission periods in which the light source 2 emits irradiation light modulated at the modulation frequency, an extinction period is set in which the light source 2 is turned off for a length sufficient for the light amount of the irradiation light to attenuate fully.
FIG. 5 shows, for each of frames N to N+3 from the top of the figure, the waveform WL of the sensor's own irradiation light, the waveform of the reflected light WR from the object 4 due to its own irradiation light, the waveform of the reflected light WRo from the object 4 due to other irradiation light, and the exposure timings of TapA and TapB. At each exposure timing of TapA and TapB, which of the reflected light WR and WRo is received during each exposure period is shown color-coded: in TapA and TapB of FIG. 5, the stippled regions represent the timing at which the reflected light WRo is received, and the grid-patterned regions represent the timing at which the reflected light WR is received.
In each of frames N to N+3, the waveform WL of the irradiation light emits with a period TL, as shown at times t0 to t1, t2 to t3, and t4 to t5. The period TL is set sufficiently longer than the time Td over which the brightness attenuates sufficiently after the emission period of the reflected light ends.
Also, in each of frames N to N+3, the waveform WR of the reflected light corresponding to the sensor's own irradiation light is received in its emitting state at times t11 to t12, t13 to t14, and t15 to t16, which are delayed from the waveform WL of its own irradiation light by the delay time ΔT corresponding to the distance to the object 4.
For each of frames N to N+3, TapA is exposed, relative to the emission period of its own irradiation light and in order from the top of FIG. 5, at the same phase (Phase0), a phase shifted by 180 degrees (Phase180), a phase shifted by 90 degrees (Phase90), and a phase shifted by 270 degrees (Phase270), and detects pixel signals by receiving both the reflected light of its own irradiation light from the object 4 and the reflected light of other irradiation light from the object 4.
On the other hand, for each of frames N to N+3, TapB is exposed in synchronization with the emission period of its own irradiation light, but at the timing after the time Td has elapsed since the end of the reflected light's emission period, that is, after the brightness has attenuated sufficiently, at the same phases (Phase0), 180 degrees (Phase180), 90 degrees (Phase90), and 270 degrees (Phase270), and detects pixel signals by receiving only the reflected light of other irradiation light from the object 4.
In the present disclosure, the pixel signals for the four phases obtained in this way in units of four frames, that is, those combining the reflected light of the sensor's own light source 2 with the reflected light of other light sources 2, and those consisting only of the reflected light of other light sources 2, are each integrated repeatedly a predetermined number of times.
Then, using the integrated values of the pixel signals for the four phases detected by TapA, which combine the reflected light corresponding to its own irradiation light with the reflected light of other light sources 2, and the integrated values of the pixel signals for the four phases detected by TapB, which consist only of the reflected light corresponding to other irradiation light, φ0 to φ3 in the above equation (1) are expressed as shown in the following equation (2), and distance measurement is realized.
φ0 = ΣTapA_N − ΣTapB_N
φ1 = ΣTapA_{N+2} − ΣTapB_{N+2}
φ2 = ΣTapA_{N+1} − ΣTapB_{N+1}
φ3 = ΣTapA_{N+3} − ΣTapB_{N+3}
・・・(2)
ΣTapA_N, ΣTapA_{N+1}, ΣTapA_{N+2}, and ΣTapA_{N+3} are the integrated values, over the predetermined number of repetitions, of the pixel signals detected by TapA in each of the four frames N to N+3.
Similarly, ΣTapB_N, ΣTapB_{N+1}, ΣTapB_{N+2}, and ΣTapB_{N+3} are the integrated values, over the predetermined number of repetitions, of the pixel signals detected by TapB in each of the four frames N to N+3.
In equation (2), φ0 to φ3 correspond to the signal values q0A, q1A, q2A, and q3A detected at Phase0, Phase90, Phase180, and Phase270, respectively.
Note that the delay time ΔT is expressed as α/(2π·f_mod), where α is the phase shift amount (denoted θ above) obtained from φ0 to φ3.
That is, from the integrated value of the pixel signals detected by TapA in each of frames N to N+3 from the reflected light combining WR and WRo, the integrated value of the pixel signals detected by TapB in each of frames N to N+3 from the reflected light WRo alone is subtracted.
By this calculation, the integrated average values of the pixel signals detected from the combined reflected light WR and WRo (ΣTapA_N to ΣTapA_{N+3}) have the integrated average values of the pixel signals detected from the reflected light WRo alone (ΣTapB_N to ΣTapB_{N+3}) subtracted from them, so the component of the reflected light corresponding to the irradiation light from the other light sources 2 is substantially removed.
As a result, the calculation of (φ1−φ3)/(φ0−φ2) uses values from which the influence of the reflected light WRo based on the other light sources 2 has been substantially removed, so correction that suppresses the influence of the other light sources 2 becomes possible, and an appropriate distance measurement calculation can be realized.
As described with reference to FIG. 5, the processing that performs the distance measurement calculation with correction so as to suppress the influence of the reflected light from the other light sources 2 when a plurality of depth sensors 1-1 and 1-2 are used is referred to as correction mode processing, as opposed to the normal mode processing described with reference to FIG. 4.
<<2. First Embodiment>>
<Configuration example of depth sensor>
FIG. 6 is a block diagram showing a configuration example of an embodiment of a depth sensor to which the present disclosure is applied.
In FIG. 6, the depth sensor 11 includes an optical modulation unit 21, a light emitting diode 22, a light projecting lens 23, a light receiving lens 24, a filter 25, a TOF sensor 26, an image storage unit 27, a synchronization processing unit 28, a calculation unit 29, a depth image generation unit 30, and a control unit 31.
The optical modulation unit 21 is controlled by the control unit 31 and supplies the light emitting diode 22 with a modulation signal for modulating the light output from the light emitting diode 22 at a high frequency of, for example, about 10 MHz. The optical modulation unit 21 also supplies a timing signal indicating the timing at which the light of the light emitting diode 22 is modulated to the TOF sensor 26 and the synchronization processing unit 28.
The light emitting diode 22 emits light in an invisible region, such as infrared light, while modulating it at high speed according to the modulation signal supplied from the optical modulation unit 21, and irradiates that light toward the detection region, that is, the region in which the depth sensor 11 acquires a depth image and detects distances. In the present embodiment, the light source that irradiates light toward the detection region is described as the light emitting diode 22, but another light source such as a laser diode may be used.
The light emitting diode 22 may be arranged adjacent to the TOF sensor 26 described later. With this configuration, the difference between the outgoing and return paths when the emitted light is reflected by an object and returns to the depth sensor 11 is minimized, and the distance measurement error can be reduced. Furthermore, the light emitting diode 22 and the TOF sensor 26 may be integrally formed in one housing; this suppresses variations in the outgoing and return paths when the emitted light is reflected by an object and returns to the depth sensor 11, and the distance measurement error can be reduced. The light emitting diode 22 and the TOF sensor 26 may also be formed in separate housings, as long as their mutual positional relationship is known.
The light projecting lens 23 is a lens that adjusts the light distribution so that the light emitted from the light emitting diode 22 has a desired irradiation angle.
The light receiving lens 24 is a lens whose field of view covers the detection region irradiated with light from the light emitting diode 22, and it forms an image of the light reflected by objects in the detection region on the sensor surface of the TOF sensor 26.
The filter 25 is a BPF (Band Pass Filter) that passes only light in a predetermined band; of the light reflected by objects in the detection region and incident on the TOF sensor 26, it passes only light in the predetermined pass band. For example, the center wavelength of the pass band of the filter 25 is set between 920 nm and 960 nm, and light in that pass band is passed in greater proportion than light of wavelengths outside it. Specifically, the filter 25 transmits 60% or more of light at wavelengths within the pass band and less than 30% of light at wavelengths outside the pass band.
The pass band of the filter 25 is narrower than that of filters used in the conventional TOF method and is limited to a narrow band corresponding to the wavelength of the light emitted from the light emitting diode 22. For example, when the wavelength of the light emitted from the light emitting diode 22 is 940 nm, a band of 10 nm on either side of 940 nm (930 to 950 nm) is adopted as the pass band of the filter 25 in accordance with that wavelength.
With such a pass band of the filter 25, the TOF sensor 26 can detect the irradiation light while suppressing the influence of disturbance light from the sun. The pass band of the filter 25 is not limited to this and may be a band of 15 nm or less on either side of a predetermined wavelength. Alternatively, a band of 10 nm on either side of 850 nm (840 to 860 nm), the wavelength band in which the TOF sensor 26 has the best characteristics, may be adopted as the pass band of the filter 25, allowing the TOF sensor 26 to detect the irradiation light effectively.
The TOF sensor 26 is composed of an image pickup element having sensitivity in the wavelength range of the light emitted from the light emitting diode 22, and receives the light collected by the light receiving lens 24 and passed through the filter 25 with a plurality of pixels arranged in an array on the sensor surface. As illustrated, the TOF sensor 26 is arranged in the vicinity of the light emitting diode 22 and can receive the light reflected by objects in the detection region irradiated by the light emitting diode 22. The TOF sensor 26 then outputs pixel signals in which the amount of light received by each pixel is used as a pixel value for generating a depth image. As a specific configuration of the TOF sensor 26, various sensors such as a SPAD (Single Photon Avalanche Diode), an APD (Avalanche Photodiode), or a CAPD (Current Assisted Photonic Demodulator) can be applied.
The image storage unit 27 stores an image constructed from the pixel signals output from the TOF sensor 26. For example, the image storage unit 27 can store the latest image when there is a change in the detection region, or store an image of the state in which no object exists in the detection region as a background image.
The synchronization processing unit 28, in synchronization with the timing signal supplied from the optical modulation unit 21, extracts from the pixel signals supplied from the TOF sensor 26 the pixel signals at the timings at which the reflected light corresponding to the modulated light emitted by the light emitting diode 22 is received. The synchronization processing unit 28 can thereby supply the calculation unit 29 with only the pixel signals at the timings whose main component is the light emitted from the light emitting diode 22 and reflected by objects in the detection region. The synchronization processing unit 28 can also, for example, read the background image stored in the image storage unit 27 and obtain the difference from the image constructed from the pixel signals supplied from the TOF sensor 26, thereby generating pixel signals consisting only of moving objects in the detection region. The synchronization processing unit 28 is not an essential component, and the pixel signals supplied from the TOF sensor 26 may be supplied directly to the calculation unit 29.
The calculation unit 29 performs a calculation, for each pixel, to obtain the distance to objects in the detection region based on the pixel signals supplied from the synchronization processing unit 28 or the TOF sensor 26, and supplies a depth signal indicating the distance obtained by the calculation to the depth image generation unit 30.
Specifically, the calculation unit 29 obtains the distance to objects in the detection region based on the phase difference between the phase of the light emitted by the light emitting diode 22 and the phase of the light that was emitted from the light emitting diode 22, reflected by an object, and incident on the pixels of the TOF sensor 26.
The depth image generation unit 30 generates, from the depth signals supplied from the calculation unit 29, a depth image in which the distances to the subject are arranged according to the pixel arrangement, and outputs the depth image to a subsequent processing device (not shown).
The control unit 31 is composed of a processor and a memory and controls the overall operation of the depth sensor 11. Depending on the operation mode, the control unit 31 controls the optical modulation unit 21 to control the modulation method and thereby the emission timing of the light emitting diode 22, controls the TOF sensor 26 to control, for example, the operation timings of TapA and TapB, and controls the calculation contents of the calculation unit 29.
The operation modes referred to here are, for example, the normal mode and the correction mode described above.
That is, when the depth sensor 11 alone generates a depth image of a region including a predetermined object, the control unit 31 controls the various components so that the operation mode is the normal mode processing described with reference to FIG. 4.
When a plurality of depth sensors 11 generate depth images of a region including the same predetermined object, the control unit 31 controls the various components so that the operation mode is the correction mode processing described with reference to FIG. 5.
The light emitting diode 22 and the light projecting lens 23 in FIG. 6 constitute a light source 32 corresponding to the light source 2 in FIG. 1. The light receiving lens 24, the filter 25, and the TOF sensor 26 constitute a distance measuring device 33 corresponding to the distance measuring device 3 in FIG. 1.
As described above, the depth sensor 11 configured in this way employs a narrow-band filter 25 whose pass band is limited to a narrow band corresponding to the wavelength of the light emitted from the light emitting diode 22. The filter 25 can thereby remove noise components due to disturbance light while passing a large amount of the signal components required for measurement. That is, in the depth sensor 11, the TOF sensor 26 can receive more of the light constituting the signal component required for measurement relative to the disturbance light constituting the noise component, and the SN ratio (Signal to Noise Ratio) of the acquired depth image can be improved.
Therefore, even in an environment affected by disturbance light, the depth sensor 11 can lengthen the acquisition distance over which a highly accurate depth image can be acquired, and can improve its depth image acquisition performance.
<Example of TOF sensor configuration>
Next, a configuration example of the TOF sensor 26 of FIG. 6 will be described with reference to FIG. 7.
The TOF sensor 26 shown in FIG. 7 is, for example, a back-illuminated sensor, but it may be a front-illuminated sensor.
The TOF sensor 26 includes a pixel array unit 40 formed on a semiconductor substrate (not shown) and a peripheral circuit unit integrated on the same semiconductor substrate as the pixel array unit 40. The peripheral circuit unit is composed of, for example, a tap drive unit 41, a vertical drive unit 42, a column processing unit 43, a horizontal drive unit 44, and a system control unit 45.
The TOF sensor 26 is further provided with a signal processing unit 49 and a data storage unit 50. The signal processing unit 49 and the data storage unit 50 may be mounted on the same substrate as the TOF sensor 26, or may be arranged on a substrate different from that of the TOF sensor 26 in the imaging device.
The pixel array unit 40 has a configuration in which pixels 51, each generating a charge corresponding to the amount of received light and outputting a signal corresponding to that charge, are two-dimensionally arranged in a matrix in the row and column directions. That is, the pixel array unit 40 has a plurality of pixels 51 that photoelectrically convert incident light and output signals corresponding to the resulting charge. Here, the row direction refers to the arrangement direction of the pixels 51 in the horizontal direction, and the column direction refers to the arrangement direction of the pixels 51 in the vertical direction; the row direction is horizontal in the figure, and the column direction is vertical.
The pixel 51 receives light incident from the outside, particularly infrared light, photoelectrically converts it, and outputs a pixel signal corresponding to the resulting charge. The pixel 51 has a first tap TA, corresponding to TapA described above, that detects photoelectrically converted charge when a predetermined voltage MIX0 (first voltage) is applied, and a second tap TB, corresponding to TapB described above, that detects photoelectrically converted charge when a predetermined voltage MIX1 (second voltage) is applied.
The tap drive unit 41 supplies the predetermined voltage MIX0 to the first tap TA (TapA) of each pixel 51 of the pixel array unit 40 via a predetermined voltage supply line 48, and supplies the predetermined voltage MIX1 to the second tap TB (TapB) via another predetermined voltage supply line 48. Therefore, two voltage supply lines 48 are wired to each pixel column of the pixel array unit 40: one transmitting the voltage MIX0 and one transmitting the voltage MIX1.
In the pixel array unit 40, with respect to the matrix-like pixel arrangement, a pixel drive line 46 is wired along the row direction for each pixel row, and two vertical signal lines 47 are wired along the column direction for each pixel column. For example, the pixel drive line 46 transmits a drive signal for driving the pixels when reading signals from them. Although the pixel drive line 46 is shown as a single wiring in FIG. 7, it is not limited to one. One end of the pixel drive line 46 is connected to the output end of the vertical drive unit 42 corresponding to each row.
The vertical drive unit 42 is composed of a shift register, an address decoder, and the like, and drives the pixels of the pixel array unit 40 all at once, row by row, or in other units. That is, the vertical drive unit 42, together with the system control unit 45 that controls it, constitutes a drive unit that controls the operation of each pixel of the pixel array unit 40.
Signals output from the pixels 51 of a pixel row under the drive control of the vertical drive unit 42 are input to the column processing unit 43 through the vertical signal lines 47. The column processing unit 43 performs predetermined signal processing on the pixel signals output from the pixels 51 through the vertical signal lines 47 and temporarily holds the pixel signals after the signal processing.
Specifically, the column processing unit 43 performs noise removal processing, AD (Analog to Digital) conversion processing, and the like as the signal processing.
The horizontal drive unit 44 is composed of a shift register, an address decoder, and the like, and sequentially selects the unit circuits of the column processing unit 43 corresponding to the pixel columns. Through this selective scanning by the horizontal drive unit 44, the pixel signals processed in each unit circuit of the column processing unit 43 are output in order.
The system control unit 45 is composed of a timing generator and the like that generate various timing signals, and performs drive control of the tap drive unit 41, the vertical drive unit 42, the column processing unit 43, the horizontal drive unit 44, and the like based on the various timing signals generated by the timing generator.
The signal processing unit 49 has at least an arithmetic processing function and performs various signal processing such as arithmetic processing based on the pixel signals output from the column processing unit 43. The data storage unit 50 temporarily stores the data necessary for the signal processing performed by the signal processing unit 49.
<Normal mode processing>
Next, the normal mode processing described with reference to FIG. 4 will be described with reference to FIG. 8. Normal mode processing consists of distance measurement by a single depth sensor 11 and generation of a depth image based on the distance measurement results, so the influence of other light sources is not considered.
In step S11, the control unit 31 controls the optical modulation unit 21 to cause the light emitting diode 22 to emit light at a predetermined modulation frequency and irradiate the irradiation light. By this processing, irradiation light consisting of modulated light modulated at the predetermined modulation frequency is emitted from the light source 32 onto the irradiation region, and the reflected light reflected by objects in the irradiation region becomes incident on the TOF sensor 26.
In step S12, the control unit 31 controls the first tap TA (TapA) of each pixel 51 of the TOF sensor 26 to expose in synchronization with the emission timing of the light emitting diode 22, and to successively detect and output the pixel signals of the same phase (Phase0) and the phase shifted by 180 degrees (Phase180).
That is, by this processing, the signal values q0A and q2A detected at Phase0 and Phase180, which are φ0 and φ2 in equation (1) described with reference to FIG. 4, are obtained successively for each pixel 51.
In step S13, the control unit 31 controls the second tap TB (TapB) of each pixel 51 of the TOF sensor 26 to expose at a phase shifted by 90 degrees from the emission timing of the light emitting diode 22, and to successively detect the pixel signals of the phase shifted by 90 degrees (Phase90) and the phase shifted by 270 degrees (Phase270) and output them to the calculation unit 29 via the synchronization processing unit 28.
That is, by this processing, the signal values q1A and q3A detected at Phase90 and Phase270, which are φ1 and φ3 in equation (1) described with reference to FIG. 4, are obtained successively for each pixel 51.
In step S14, the calculation unit 29 performs the distance measurement calculation for each pixel 51 by evaluating equation (1), and outputs the resulting depth signal to the depth image generation unit 30.
In step S15, the depth image generation unit 30 generates and outputs a depth image based on the depth signal of each pixel 51.
In step S16, the control unit 31 determines whether the end of the processing has been instructed; if not, the processing returns to step S11. That is, until the end of the processing is instructed, the processing of steps S11 to S16 is repeated, and depth images continue to be generated and output.
Then, when the end of the processing is instructed in step S16, the processing ends.
Through the above series of processing, distance measurement is performed by a single depth sensor 11, and a depth image is generated and output based on the distance measurement results. In the normal mode, the signal values for all four phases are obtained in one frame and a depth image is generated, so high-speed processing can be realized.
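The flow of steps S11 to S15 can be summarized by the following sketch; the sensor and light objects and their methods are hypothetical stand-ins for the hardware control described above, not an API defined by the disclosure:

```python
def normal_mode_frame(sensor, light, f_mod):
    """One iteration of the normal mode loop (steps S11 to S15)."""
    light.emit_modulated(f_mod)                         # S11: irradiate
    q0, q2 = sensor.expose_tap_a(phases=(0, 180))       # S12: TapA, Phase0/180
    q1, q3 = sensor.expose_tap_b(phases=(90, 270))      # S13: TapB, Phase90/270
    depth = four_phase_distance(q0, q1, q2, q3, f_mod)  # S14: equation (1)
    return depth                                        # S15: build depth image
```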
<Correction mode processing>
Next, the correction mode processing will be described with reference to the flowchart of FIG. 9. The correction mode processing assumes that distance measurement of the same object is being performed by a plurality of depth sensors 11.
In step S31, the control unit 31 controls the optical modulation unit 21 to cause the light emitting diode 22 to emit light at the predetermined modulation frequency and with the predetermined interval, thereby irradiating the irradiation light.
That is, in the correction mode processing, as described with reference to FIG. 5, the light emitting diode 22 is controlled to repeatedly emit and extinguish with a period TL sufficiently longer than the time Td required for the light emitted at the predetermined modulation frequency to attenuate sufficiently.
By this processing, irradiation light consisting of modulated light modulated at the predetermined modulation frequency is emitted from the light source 32 onto the irradiation region with the period TL, and the reflected light reflected by objects in the irradiation region becomes incident on the TOF sensor 26.
In step S32, the control unit 31 initializes a counter θ for counting the phase shift to 0. The counter θ counts four phases, 0, 90, 180, and 270 degrees, in steps of 90 degrees.
In step S33, the control unit 31 controls the first tap TA (TapA) of each pixel 51 of the TOF sensor 26 to expose at a timing whose phase is shifted by θ degrees from the emission timing of the light emitting diode 22, and to detect pixel signals and output them to the calculation unit 29 via the synchronization processing unit 28.
In step S34, the calculation unit 29 integrates and stores the pixel signals that are the exposure results of TapA, in association with the counter θ.
In step S35, the control unit 31 controls the second tap TB (TapB) of each pixel 51 of the TOF sensor 26 to expose at a timing whose phase is shifted by θ degrees from the timing at which the light emitting diode 22 is extinguished and the light has attenuated sufficiently, and to detect pixel signals and output them to the calculation unit 29 via the synchronization processing unit 28.
In step S36, the calculation unit 29 integrates and stores the pixel signals that are the exposure results of TapB, in association with the counter θ.
In step S37, the control unit 31 determines whether the counter θ is 270; if not, the processing proceeds to step S38.
In step S38, the control unit 31 increments the counter θ by 90, and the processing returns to step S33.
That is, the processing of steps S33 to S38 is repeated until the counter θ is determined to be 270 in step S37, and the pixel signals corresponding to the four phase shifts of 0, 90, 180, and 270 degrees are integrated in sequence.
When the counter θ is determined to be 270 in step S37, the processing proceeds to step S39.
In step S39, the control unit 31 determines whether the processing of steps S32 to S38 has been repeated the predetermined number of times; if not, the processing returns to step S32.
That is, the processing in steps S33 and S34 repeatedly detects and integrates the pixel signals of the reflected light WR corresponding to the sensor's own irradiation light in frames N to N+3 of FIG. 5, corresponding to the counter θ values 0, 90, 180, and 270.
The processing in steps S35 and S36 repeatedly detects and integrates the pixel signals of the reflected light WRo corresponding to other irradiation light in frames N to N+3 of FIG. 5, corresponding to the counter θ values 0, 90, 180, and 270.
Then, by repeating the processing of steps S32 to S39, the pixel signals of the reflected light WR and the reflected light WRo are repeatedly integrated the predetermined number of times in association with the four phase shifts of 0, 90, 180, and 270 degrees.
That is, by repeating the processing of steps S32 to S39, ΣTapA_N, ΣTapA_{N+1}, ΣTapA_{N+2}, ΣTapA_{N+3} and ΣTapB_N, ΣTapB_{N+1}, ΣTapB_{N+2}, ΣTapB_{N+3} in equation (2) are obtained.
If it is determined in step S39 that the processing has been repeated the predetermined number of times, the processing proceeds to step S40.
In step S40, the calculation unit 29 performs the distance measurement calculation for each pixel 51 by evaluating equation (1) using φ0 to φ3 obtained from equation (2) described above, and outputs the resulting depth signal to the depth image generation unit 30.
In step S41, the depth image generation unit 30 generates and outputs a depth image based on the depth signal of each pixel 51.
In step S42, the control unit 31 determines whether the end of the processing has been instructed; if not, the processing returns to step S32. That is, until the end of the processing is instructed, the processing of steps S32 to S42 is repeated, and depth images continue to be generated and output.
Then, when the end of the processing is instructed in step S42, the processing ends.
Through the above series of processing, each subtraction result φ0 to φ3 in equation (2) is, in effect, the integrated average pixel signal detected in each phase from the reflected light containing both the reflected light WR based on the sensor's own light source 2 (necessary reflected light) and the reflected light WRo based on the other light sources 2 (unnecessary reflected light), minus the integrated average pixel signal detected in each phase from the reflected light WRo based only on the other light sources 2. In other words, φ0 to φ3 in equation (2) are the integrated average pixel signals detected in each phase from the reflected light WR based on the sensor's own light source 2, free of the pixel signals detected from the reflected light WRo based on the other light sources 2.
As a result, the calculation of (φ1−φ3)/(φ0−φ2) uses values from which the influence of the reflected light WRo based on the other light sources 2 has been substantially removed, so correction that reduces the influence of collisions caused by the other light sources 2 becomes possible, and an appropriate distance measurement calculation can be realized.
<<3. Second Embodiment>>
<Collision detection>
In the above, an example has been described in which, when a single depth sensor 11 independently generates a depth image of a predetermined region, the depth image is generated by normal mode processing, and when a plurality of depth sensors 11 generate depth images of the same predetermined region, the depth image is generated by correction mode processing.
However, when it cannot be recognized whether a plurality of depth sensors 11 are in use, depth images must be generated by the correction mode processing, and depth images would then continue to be generated under conditions where the frame rate is lower than in the normal mode processing, or where the duty cycle of the irradiation light is reduced.
Therefore, for example, as an operation mode for detecting interference or collisions from other light sources, only TapA of the pixel 51 detects the 4-phase pixel signals synchronized with the emission timing and performs distance measurement, while TapB, in synchronization with the extinguished timing, is set to a plurality of different integration times (Integ Time) and detects the integrated values of the pixel signals.
When no modulated light from another light source is included, TapB detects only disturbing light of a roughly constant level, such as sunlight, so the detected integrated value changes in proportion to the length of the integration time.
On the other hand, when modulated light from another light source is included, TapB detects pixel signals of the reflected light consisting of the modulated light in addition to the constant-level disturbing light such as sunlight, so the detected integrated value changes independently of the length of the integration time.
That is, FIG. 10 shows, from the top, how the integrated values of the pixel signals are detected when TapB is exposed with different integration times Te1 to Te4 in each of the consecutive frames N to N+3.
The upper waveform in each frame is the waveform Wm of the modulated light, and the lower waveform is the waveform Ws of the sunlight.
As shown in FIG. 10, the waveform Wm of the modulated light changes according to the modulation, so even when pixel signals are detected in frames N to N+3 with integration times Te1 to Te4 of different lengths, the detected integrated values do not correlate with the length of the integration time.
On the other hand, since the waveform Ws of the sunlight is roughly constant, when pixel signals are detected in frames N to N+3 with integration times Te1 to Te4 of different lengths, the detected integrated values correlate with the length of the integration time.
Therefore, for example, the standard deviation of integrated values regarded as background light such as sunlight at different integration times may be obtained in advance as a threshold, and the presence or absence of a collision may be determined based on a comparison between that threshold and the standard deviation obtained when the integrated values are detected a predetermined number of times at the different integration times.
For example, when the standard deviation obtained by detecting the integrated values a predetermined number of times at different integration times is smaller than the preset threshold, that is, when the variance of the integrated values is small and they can be regarded as having no variation, it can be determined that there is no influence of modulated light and no collision with another light source.
Conversely, when the standard deviation of the integrated values is larger than the previously obtained threshold, that is, when the variance of the integrated values is large and they can be regarded as varying, it can be determined that there is an influence of modulated light and a collision with another light source.
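As a minimal sketch of this decision rule (the threshold value and names are assumptions, not taken from the present disclosure), the comparison could look as follows, where integrals_per_time[k] holds the repeated TapB integrated values for the k-th integration time:

```python
import statistics

# Assumed calibration value: the standard deviation of TapB integrated values
# measured in advance with background light (e.g. sunlight) only
BACKGROUND_STD_THRESHOLD = 0.05

def collision_detected(integrals_per_time: list[list[float]]) -> bool:
    """Declare a collision when the spread of the repeated integrated values
    at any of the integration times exceeds the background-only threshold."""
    return any(statistics.pstdev(values) >= BACKGROUND_STD_THRESHOLD
               for values in integrals_per_time)
```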
Note that TapA continues the distance measurement processing while collision detection is in progress. In this case, distance measurement processing using both TapA and TapB is not possible, but the distance measurement processing may still be performed, at a reduced frame rate, while collisions are being detected with TapB.
<Collision avoidance>
When a collision with another light source is detected, the length of the period during which the light source is turned off between light emission periods, that is, the blank period, may be changed randomly using a random number, so that the collision with the other light source is avoided and the influence of the collision is reduced.
That is, as shown in the upper part of FIG. 11, when the light emission period TL in the waveform WL of the device's own light source overlaps the light emission period TLo in the waveform WLo of another light source, a collision is detected by the processing described above.
In the upper part of FIG. 11, when the light emission period TL is set after a blank period Tb1 of a predetermined length, the exposure period Te of TapA is set in synchronization, and then the readout period Tr is set.
Then, after the blank period Tb1 again, when the light emission period TL is set, the exposure period Te of TapA is set in synchronization, then the readout period Tr is set, and the same processing is repeated.
In the case shown in the upper part of FIG. 11, a collision is detected, so the blank period Tb1 may be changed to a length based on a random number, for example, to the blank period Tb2 shown in the lower part of FIG. 11.
As a result, the light emission period TLo in the waveform WLo of the other light source no longer overlaps the light emission period TL, so the collision is avoided.
Since the length of the blank period is changed randomly using a random number, there is a possibility that the collision will not be avoided even after the blank period length is changed. In this case, the length of the blank period may be changed randomly and repeatedly until the collision is avoided.
Although the lengths of the blank period Tb1 in the upper part of FIG. 11 and the blank period Tb2 in the lower part of FIG. 11 are changed randomly, the lengths of the exposure period Te and the readout period Tr remain the same.
Furthermore, when detecting collisions, distance measurement is performed with TapA only, which reduces the frame rate. Therefore, the device may normally operate with the normal mode processing, detect collisions each time a predetermined time elapses, and change the length of the blank period randomly when a collision is detected.
The operation mode in which collision detection is performed as described above and the collision is avoided when detected is hereinafter referred to as the collision avoidance mode processing.
<Collision avoidance mode processing>
Next, the collision avoidance mode processing will be described with reference to the flowchart of FIG. 12.
In step S51, the control unit 31 performs the distance measurement processing in pixel units in the normal mode described with reference to the flowchart of FIG. 8, and generates and outputs a depth image based on the depth signal that is the distance measurement result. Note that the normal mode processing of step S51 consists only of the processing of steps S11 to S15 in the flowchart of FIG. 8.
In step S52, the control unit 31 determines whether or not a predetermined time has elapsed since the normal mode processing was performed; until it elapses, the processing of steps S51 and S52 is repeated and depth images continue to be generated by the normal mode processing. When it is determined in step S52 that the predetermined time has elapsed, the processing proceeds to step S53.
In step S53, the control unit 31 executes the collision detection processing and obtains, a predetermined number of times, the integrated values of the TapB pixel signals at the four types of integration times. The collision detection processing will be described later in detail with reference to FIG. 13.
In step S54, the control unit 31 determines whether or not the standard deviations of the predetermined number of integrated values of the TapB pixel signals at the four types of integration times are all less than a predetermined threshold, that is, whether the integrated values have no variation and no modulated light from another light source is included.
When, in step S54, at least one of the standard deviations of the predetermined number of integrated values of the TapB pixel signals at the four types of integration times is not less than the predetermined threshold, so that a collision due to modulated light from some other light source is considered to have occurred, the processing proceeds to step S55.
In step S55, the control unit 31 controls the light modulation unit 21 to change the length of the blank period between light emission periods of the light emitting diode 22 based on a random number. That is, this processing changes the length of the blank period between light emission periods of the light emitting diode 22 in the normal mode processing.
When it is determined in step S54 that the standard deviations of the predetermined number of integrated values of the TapB pixel signals at the four types of integration times are all less than the predetermined threshold and there is no collision due to modulated light from another light source, the processing of step S55 is skipped.
In step S56, the control unit 31 determines whether or not the end of the processing is instructed; if the end is not instructed, the processing returns to step S51 and the subsequent processing is repeated.
Then, when the end of the processing is instructed in step S56, the processing ends.
By the above processing, the presence or absence of a collision due to modulated light from another light source is detected, and when a collision is detected, the length of the blank period between light emission periods of the light emitting diode 22 is changed randomly, so that the collision is avoided.
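A schematic sketch of this control flow (steps S51 to S56) is given below. The sensor hooks, the blank-period range, and the re-check interval are hypothetical placeholders, and the detection reuses the standard-deviation test sketched earlier:

```python
import random
import statistics
import time

BLANK_MIN_S, BLANK_MAX_S = 50e-6, 500e-6  # assumed admissible blank-period range
CHECK_INTERVAL_S = 1.0                    # assumed interval between detections
THRESHOLD = 0.05                          # assumed background-only std threshold

def run_normal_mode_frame(blank_s: float) -> None:
    """Hypothetical stub: one normal-mode ranging frame (steps S11 to S15)."""

def measure_tap_b_integrals() -> list[list[float]]:
    """Hypothetical stub: repeated TapB integrals for the 4 integration times."""
    return [[random.random() for _ in range(8)] for _ in range(4)]

blank_s = 100e-6
last_check = time.monotonic()
while True:
    run_normal_mode_frame(blank_s)                                 # step S51
    if time.monotonic() - last_check < CHECK_INTERVAL_S:           # step S52
        continue
    last_check = time.monotonic()
    integrals = measure_tap_b_integrals()                          # step S53
    if any(statistics.pstdev(v) >= THRESHOLD for v in integrals):  # step S54
        blank_s = random.uniform(BLANK_MIN_S, BLANK_MAX_S)         # step S55
```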
<Collision detection processing>
Next, the collision detection processing will be described with reference to the flowchart of FIG. 13.
In step S71, the control unit 31 initializes the counter N to 0. This counter N is a counter for distinguishing the four types of integration times.
In step S72, the control unit 31 controls the second tap TB (TapB) of each pixel 51 of the TOF sensor 26 to perform exposure within the integration time of the length set in correspondence with the counter N, detect the pixel signals, and output them to the calculation unit 29 via the synchronization processing unit 28.
In step S73, the calculation unit 29 integrates the pixel signals, which are the exposure results of TapB, over the integration time set in association with the counter N, and obtains and stores the integrated value. Accordingly, the processing of steps S72 and S73 is repeated within the integration time.
In step S74, the control unit 31 determines whether or not the counter N is 3; if N is not 3, the processing proceeds to step S75.
In step S75, the control unit 31 increments the counter N by 1, and the processing returns to step S72.
That is, the processing of steps S72 to S75 is repeated until the counter N is determined to be 3 in step S74.
Then, when it is determined in step S74 that the counter N is 3, that is, when the integrated values of the pixel signals for the four types of integration periods have been obtained, the processing proceeds to step S76.
In step S76, the control unit 31 determines whether or not the processing has been repeated a predetermined number of times; if it has not, the processing returns to step S71.
That is, the processing in steps S72 and S73 obtains and stores the integrated values of the pixel signals detected over the four integration times of different lengths distinguished by the counter N.
When it is determined in step S76 that the processing has been repeated the predetermined number of times and the predetermined number of integrated values corresponding to the four types of integration times have been obtained, the processing proceeds to step S77.
In step S77, the calculation unit 29 obtains the standard deviation of each of the four types of integrated values for the integration periods of different lengths, and outputs them to the control unit 31.
Through the above series of processing, the integrated values of the pixel signals at a plurality of integration times of different lengths, and their standard deviations, are obtained with TapB.
When the integrated values of the four types of integration times detected with TapB contain only background light such as sunlight, their variation is small, so the standard deviation is smaller than the predetermined threshold obtained in advance, and it can be considered that no collision due to modulated light from another light source has occurred.
On the other hand, when the integrated values of the four types of integration times detected with TapB include modulated light from another light source, their variation is large, so the standard deviation is larger than the predetermined threshold obtained in advance, and it can be considered that a collision due to modulated light from another light source has occurred.
That is, this series of processing makes it possible to obtain the integrated values of the pixel signals at a plurality of integration times of different lengths, and their standard deviations, for determining the presence or absence of a collision.
As a result, a collision can be determined, and the collision can be avoided by the processing described above.
Although the description is omitted here, the distance measurement processing and the depth image generation can also be repeated with TapA at a reduced frame rate, so distance measurement can be performed in parallel with TapA even during the collision detection processing, and collision detection can be performed while depth images continue to be generated.
<<4. Third Embodiment>>
<Estimation of the modulation frequency of modulated light from another light source>
In the description above, an example was described in which the presence or absence of a collision due to modulated light from another light source is detected and, when a collision is detected, the collision is avoided. Alternatively, when a collision due to modulated light from another light source is detected, the modulation frequency of the other light source's modulated light may be estimated, and distance measurement may be performed using the other light source based on the estimated modulation frequency. In this case, by measuring distances using the other light source, collisions between the irradiation light of the device's own light source and the irradiation light of the other light source are substantially avoided, and the influence of the collisions is reduced.
The following relationship holds between the modulation frequency of the device's own light source and the modulation frequency of another light source that differs from it.
That is, when the Hi and Low signals of the device's own modulation frequency are set to equal periods and exposed with TapA and TapB respectively, if the integration time is set to the period of the difference frequency between the device's own light source and the other light source, the pixel signals of TapA and TapB become equal.
For example, when the modulation frequency of the device's own light source is 100 MHz and the modulation frequency of the other light source is 83.3 MHz, the difference frequency is 16.7 MHz, and the period of the difference frequency is 60 ns (= 1/(100 MHz - 83.3 MHz)).
When this 60 ns period is set as the integration time (Integ Time), the periods during which the other light source's waveform WLo in the figure emits light are the periods H1 to H5, as shown in the upper part of FIG. 14.
When the irradiation light from the other light source, whose modulation frequency is 83.3 MHz, is received with TapA and TapB in synchronization with the emission timing of the other light source so as to correspond to the device's own modulation frequency of 100 MHz, the ratio of the TapA and TapB pixel signals is TapA:TapB = 7:5, 3:9, 3:9, 7:5, and 10:2 in the periods H1 to H5, respectively.
Therefore, over the entire 60 ns integration time (Integ Time), the TapA and TapB pixel signals satisfy TapA:TapB = 30:30 (= (7+3+3+7+10):(5+9+9+5+2)).
Even when the exposure periods of TapA and TapB are shifted by 90 degrees, the ratio of the TapA and TapB pixel signals is TapA:TapB = 10:2, 8:4, 4:8, 2:10, and 6:6 in the periods H1 to H5, respectively.
Therefore, over the entire 60 ns integration time (Integ Time), the TapA and TapB pixel signals again satisfy TapA:TapB = 30:30 (= (10+8+4+2+6):(2+4+8+10+6)).
Further, for example, when the modulation frequency of the device's own light source is 100 MHz and the modulation frequency of the other light source is 90.9 MHz, the difference frequency is 9.1 MHz, and the period of the difference frequency is 110 ns (= 1/(100 MHz - 90.9 MHz)).
When this 110 ns period is set as the integration time (Integ Time), the periods during which the other light source's waveform WLo in the figure emits light are the periods H11 to H20, as shown in the middle part of FIG. 14.
When the irradiation light from the other light source, whose modulation frequency is 90.9 MHz, is received with TapA and TapB in synchronization with the emission timing of the other light source so as to correspond to the device's own modulation frequency of 100 MHz, the ratio of the TapA and TapB pixel signals is TapA:TapB = 7:4, 5:6, 3:8, 1:10, 2:9, 4:7, 6:5, 8:3, 10:1, and 9:2 in the periods H11 to H20, respectively.
Therefore, over the entire 110 ns integration time (Integ Time), the TapA and TapB pixel signals satisfy TapA:TapB = 55:55 (= (7+5+3+1+2+4+6+8+10+9):(4+6+8+10+9+7+5+3+1+2)).
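This equality can be checked numerically. The following is a small simulation sketch under stated assumptions (not from the present disclosure): the other source is modeled as a 50%-duty square wave, the TapA and TapB gates alternate at the device's own 100 MHz, and the light routed to each tap is summed over one beat period:

```python
import numpy as np

f_own, f_other = 100e6, 83.3e6          # own and other modulation frequencies [Hz]
t_beat = 1.0 / (f_own - f_other)        # beat (difference-frequency) period, ~60 ns

t = np.arange(0.0, t_beat, 1e-12)       # 1 ps simulation step
light = np.mod(t * f_other, 1.0) < 0.5  # other source on during its Hi half
gate_a = np.mod(t * f_own, 1.0) < 0.5   # TapA exposed during the own Hi half
tap_a = np.sum(light & gate_a)          # signal routed to TapA
tap_b = np.sum(light & ~gate_a)         # signal routed to TapB
print(tap_a, tap_b)                     # nearly equal over one beat period
```

Because the relative phase of the two square waves sweeps through a full cycle over one beat period, the two sums come out essentially equal, matching the 30:30 and 55:55 ratios above.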
However, since measurement on the ns order is not realistic, the measurement results of a plurality of cycles, for example 1000 cycles, are integrated and used. The measurement results are then on the us order, but the ratio of the TapA and TapB pixel signals is the same even when the integration time is multiplied, for example by 1000, from 60 ns or 110 ns to 60 us or 110 us.
When the modulation frequency of the other light source is 100 MHz, the same as that of the device's own light source, the difference frequency is 0 and its period is infinite. In this case, the ratio of the TapA and TapB pixel signals during the emission periods of the light source's waveform WLo is constant; for example, when TapA:TapB = 7:3 as shown in the lower part of FIG. 14, the ratio is the same over the entire period.
By using the relationships described with reference to the upper and middle parts of FIG. 14, if the length of the integration time at which the integrated values of the TapA and TapB pixel signals become equal is found, the difference frequency with respect to the device's own modulation frequency can be obtained, and the approximate modulation frequency of the other light source can be obtained from the device's own modulation frequency and the difference frequency.
Once the approximate modulation frequency of the other light source is obtained in this way, the modulation frequency of the other light source can be estimated.
That is, with the device's own modulation frequency at 100 MHz, the difference I (= TapA - TapB) of the integrated values obtained by detecting the reflected light of the other light source with TapA and TapB is expressed, according to the difference frequency, as a sinusoidal waveform such as the waveforms c1 to c3 shown in FIG. 15.
That is, when the difference frequency between the device's own light source and the other light source is 100 kHz, the difference I detected with TapA and TapB is expressed as the waveform c1 with a period of 10 us.
When the difference frequency is 90 kHz, the difference I detected with TapA and TapB is expressed as the waveform c2 with a period of 11.1 us.
Further, when the difference frequency is 80 kHz, the difference I of the integrated values detected with TapA and TapB is expressed as the waveform c3 with a period of 12.5 us.
Since the TapA and TapB detection results used here are those of 1000 cycles, the actual order is the ns order, but it is expressed here on the us order.
For example, when the modulation frequency of the device's own light source is 100 MHz with a modulation period of 10 ns, and the modulation frequency of the other light source is 99.9 MHz with a modulation period of 10.01 ns, the difference I detected with TapA and TapB changes at 100 kHz.
When the difference frequency is 100 kHz in this way, the difference I of the pixel signals detected with TapA and TapB changes periodically with a period of 10 us.
Therefore, by sampling the difference I detected with TapA and TapB at integration times that are multiples of 2.5 us, obtained by dividing the 10 us period into, for example, four equal parts, a difference frequency of 100 kHz or less can be identified.
For example, as shown by the circles in FIG. 15, if the samples of the difference I of the integrated values detected with TapA and TapB at the integration times 2.5 us, 5.0 us, 7.5 us, and 10 us, multiples of 2.5 us obtained by dividing the period into four equal parts, are plotted on the waveform c1, the difference frequency is identified as that of the waveform c1, namely 100 kHz.
Similarly, if the differences I of the integrated values detected with TapA and TapB at the integration times 2.5 us, 5.0 us, 7.5 us, and 10 us are values on the waveform c2, the difference frequency is identified as that of the waveform c2, namely 90 kHz.
Further, if the differences I of the integrated values detected with TapA and TapB at the integration times 2.5 us, 5.0 us, 7.5 us, and 10 us are values on the waveform c3, the difference frequency is identified as that of the waveform c3, namely 80 kHz.
Although FIG. 15 describes an example in which the samples are plotted on the difference-frequency waveforms c1 to c3, any difference frequency of 100 kHz or less can be obtained by identifying the corresponding sinusoidal waveform from the differences I of the integrated values detected with TapA and TapB at the integration times 2.5 us, 5.0 us, 7.5 us, and 10 us.
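One way to read this identification step is as a least-squares match of the four sampled differences against candidate difference-frequency sinusoids. The sketch below assumes the three candidate frequencies of FIG. 15 and synthetic sample values; none of the names are taken from the present disclosure:

```python
import numpy as np

t_s = np.array([2.5e-6, 5.0e-6, 7.5e-6, 10.0e-6])  # sampling integration times [s]
candidates = [100e3, 90e3, 80e3]                   # candidate difference frequencies [Hz]

def match_difference_frequency(i_samples: np.ndarray) -> float:
    """Return the candidate whose sinusoid (free amplitude and phase) best
    fits the sampled TapA - TapB differences I."""
    best_f, best_err = candidates[0], np.inf
    for f in candidates:
        basis = np.column_stack([np.sin(2*np.pi*f*t_s), np.cos(2*np.pi*f*t_s)])
        coef, *_ = np.linalg.lstsq(basis, i_samples, rcond=None)
        err = float(np.sum((basis @ coef - i_samples) ** 2))
        if err < best_err:
            best_f, best_err = f, err
    return best_f

# Synthetic check: samples generated from a 90 kHz difference are matched back
i_samples = np.sin(2*np.pi*90e3*t_s + 0.3)
f_diff = match_difference_frequency(i_samples)  # -> 90000.0
f_other = 100e6 - f_diff                        # estimated other-source frequency
```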
The modulation frequency of the other light source is estimated from the modulation frequency of the device's own light source and the difference frequency.
Once the modulation frequency of the other light source is known, the exposure periods of TapA and TapB corresponding to the modulation frequency of the other light source can be set by calibration, and distance measurement using the other light source can be realized.
In the case of general phase modulation, when a carrier frequency shift occurs, the IQ coordinates rotate at the frequency corresponding to the difference. Therefore, if the viewpoint is also rotated at the same frequency, the rotation appears to stop.
In other words, a frequency difference of 100 kHz means that the phase difference Δθ = θ(N+1) - θ(N) on the IQ coordinates shown in FIG. 16 corresponds to 100 kHz (100k rotations per second). The correction of this frequency difference Δθ is realized by a calculation that performs a coordinate transformation rotating in the opposite direction.
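As a minimal sketch of this counter-rotation (the names are assumptions, not from the present disclosure), each (I, Q) measurement taken at time t is multiplied by a unit phasor of opposite sign, which cancels the rotation caused by the 100 kHz offset:

```python
import cmath

F_DIFF = 100e3  # estimated carrier frequency difference [Hz]

def derotate(i: float, q: float, t: float) -> tuple[float, float]:
    """Undo the IQ rotation accumulated by the frequency offset up to time t [s]."""
    z = complex(i, q) * cmath.exp(-2j * cmath.pi * F_DIFF * t)
    return z.real, z.imag
```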
<Use of another light source>
When the modulation frequency of another light source is estimated as described above, the exposure periods of TapA and TapB are controlled so as to correspond to the modulation frequency of the other light source, so that distance measurement using only the other light source, without using the device's own light source, can be realized.
That is, the distance and position of the other light source are identified from the depth image obtained from the distance measurement results using the device's own light source and the depth image obtained from the distance measurement results using the other light source.
Once the distance and position of the other light source are identified, the distance to an object can be measured by receiving the reflected light generated when the irradiation light from the other light source is reflected by the object.
When the other light source is, for example, a rapidly blinking ceiling light installed on a ceiling, the modulation frequency of the other light source is estimated, and the distance and position of the other light source can be identified using the device's own light source and the other light source. Then, as long as the positional relationship between the device's own light source and the other light source consisting of the ceiling light does not change, distance measurement using only the other light source can be continued.
For example, consider the case where a depth sensor 11, objects 71 to 73, and a ceiling light 81 are provided in the same space, as shown in FIG. 17.
The ceiling light 81 emits light with the waveform pattern WLp shown in the upper part of FIG. 17. In the waveform pattern WLp, timings at which light is emitted at a modulation frequency consisting of a rectangular waveform and timings at which the light is off with no rectangular waveform are set alternately, but it is assumed that there is no phase shift in the modulation frequency pattern even across the timings with no rectangular waveform.
In the case of FIG. 17, the depth sensor 11 measures the distances to the objects 71 to 73 by causing the light emitting diode 22, its own light source, to emit light at its own modulation frequency. Here, for example, it is assumed that the distance from the depth sensor 11 to the object 71 is 1 m, the distance to the object 72 is 1.5 m, and the distance to the object 73 is 0.5 m.
Since the objects 71 to 73 are captured in the depth image, the positional relationship between the depth sensor 11 and the objects 71 to 73 is also identified.
Next, the depth sensor 11 estimates the modulation frequency of the ceiling light 81, stops the light emission of the light emitting diode 22, its own light source, adjusts the exposure periods of TapA and TapB to a modulation frequency as close as possible to the estimated modulation frequency of the ceiling light 81, and performs calibration using the distance measurement results obtained when the light emitting diode 22 emitted light at its own modulation frequency, thereby enabling distance measurement using the ceiling light 81. The depth sensor 11 then adjusts the exposure periods of TapA and TapB to the modulation frequency of the ceiling light 81, and, with the light emission of the light emitting diode 22 stopped, measures the distances to the ceiling light 81 via the objects 71 to 73 and generates the corresponding depth image.
Here, it is assumed that the distance from the depth sensor 11 to the ceiling light 81 via the object 71 is 1.5 m, the distance to the ceiling light 81 via the object 72 is 2.0 m, and the distance to the ceiling light 81 via the object 73 is 1.0 m.
The depth sensor 11 obtains the position and distance of the ceiling light 81 from the correspondence between the depth image using its own light source and the depth image using the ceiling light 81 as the light source.
That is, the ceiling light 81 can be assumed to lie on the spherical surface K1 with a radius of 2 m (= 1.5 m × 2 - 1.0 m) centered on the object 71.
Similarly, the ceiling light 81 can be assumed to lie on the spherical surface K2 with a radius of 2.5 m (= 2.0 m × 2 - 1.5 m) centered on the object 72.
Further, the ceiling light 81 can be assumed to lie on the spherical surface K3 with a radius of 1.5 m (= 1.0 m × 2 - 0.5 m) centered on the object 73.
Therefore, the ceiling light 81 lies at the intersection of the spherical surfaces K1 to K3, and the depth sensor 11 identifies the position of the ceiling light 81 by obtaining this intersection. When the light emitting diode 22, its own light source, is used, the depth sensor 11 measures the distances from the light emitting diode 22 to the objects 71 to 73 using 1/2 of the round-trip time of the optical path, whereas when the ceiling light 81 is used, the measurement covers only the one-way time from the ceiling light 81 to the objects 71 to 73. However, since the depth sensor 11 is calibrated based on the distance measurement results obtained with the light emitting diode 22, the distance measurement results obtained with the ceiling light 81 come out as 1/2 of the actual values. Therefore, when the ceiling light 81 is used as the light source, the depth sensor 11 needs to double (×2) the distance measurement results.
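A sketch of this localization step is given below under assumed object coordinates (the positions are illustrative, not values from the present disclosure). The object-to-light radii follow the ×2 correction described above, and the intersection of the spheres K1 to K3 is solved numerically:

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed object positions [m] in the sensor frame (illustrative values only)
objects = np.array([[0.0, 0.0, 1.0],    # object 71 (direct distance 1.0 m)
                    [0.6, 0.0, 1.4],    # object 72 (direct distance ~1.5 m)
                    [-0.3, 0.0, 0.4]])  # object 73 (direct distance 0.5 m)
d_direct = np.linalg.norm(objects, axis=1)  # sensor -> object distances
d_via = np.array([1.5, 2.0, 1.0])           # calibrated readings via each object [m]

# The via readings are halved by the round-trip calibration, so the actual
# light -> object -> sensor path is 2*d_via; removing the object -> sensor
# leg leaves the object-to-light radius (e.g. 2 * 1.5 - 1.0 = 2.0 m).
radii = 2.0 * d_via - d_direct

def residuals(x: np.ndarray) -> np.ndarray:
    return np.linalg.norm(objects - x, axis=1) - radii

# Position that best lies on all three spheres K1 to K3
light_pos = least_squares(residuals, x0=np.array([0.0, 1.0, 2.0])).x
```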
Once the positional relationship between the depth sensor 11 and the ceiling light 81 is obtained, the depth sensor 11 can realize distance measurement using the ceiling light 81 as the light source, and the light emitting diode 22, its own light source, no longer needs to emit light, so the power consumption of the light emitting diode 22 can be reduced.
<Distance measurement processing using another light source>
Next, the distance measurement processing using another light source will be described with reference to the flowchart of FIG. 18.
In step S111, the control unit 31 measures the distances to surrounding objects using only the irradiation light of its own light source by the correction mode processing, and generates a depth image. That is, by this processing, for example, the distances to the objects 71 to 73 in FIG. 17 are measured by the correction mode processing (FIG. 9) using the irradiation light of the device's own light source, and a depth image is generated.
In step S112, the control unit 31 executes the modulation frequency estimation processing and estimates the modulation frequency of the other light source. By this processing, for example, the modulation frequency of the irradiation light of another light source such as the ceiling light 81 described with reference to FIG. 17 is estimated. The modulation frequency estimation processing will be described later in detail with reference to the flowchart of FIG. 19.
In step S113, the control unit 31 measures the distances to the same objects measured with its own light source, using the irradiation light of another light source such as the ceiling light 81 in FIG. 17, by the normal mode processing (FIG. 8) using the other light source, and generates a depth image. At this time, the exposure periods of TapA and TapB are set based on the estimation result of the modulation frequency of the other light source.
In step S114, the calculation unit 29 calculates the position and distance of the other light source from the depth signals obtained by the distance measurement based on the device's own light source and the depth signals obtained by the distance measurement based on the other light source.
In step S115, the control unit 31 measures the distances to the objects based on the other light source by the normal mode processing using the other light source, taking the position and distance of the other light source into consideration, and generates a depth image. At this time, the calculation unit 29 measures the distances to the objects from the irradiation light of the other light source, taking into consideration the distance and position from the depth sensor 11 to the other light source, and generates the depth image. Since the depth sensor 11 does not require the irradiation light of its own light source, its own light source can be turned off, and as a result the power consumption can be reduced.
In step S116, the control unit 31 determines whether or not the end of the distance measurement processing using the other light source is instructed; if the end of the processing is not instructed, the processing returns to step S115.
That is, the same processing is repeated until the end of the processing, that is, the end of the normal mode processing using the other light source, is instructed in step S116.
Then, when the end is instructed in step S116, the processing ends.
Through the above series of processing, the modulation frequency of the other light source is estimated and the position and distance of the other light source are identified, so that distance measurement can be realized and depth images can be generated with only the irradiation light of the other light source, without using the irradiation light of the device's own light source.
As a result, the distance measurement processing and the generation of depth images can be realized while the device's own light source remains off, so the power consumption of the depth sensor 11 can be reduced.
In the description above, an example was described in which the position and distance between the depth sensor 11 and the ceiling light 81, the other light source, are obtained. However, if the positional relationship between the depth sensor 11 and the ceiling light 81 is known in advance, the depth sensor 11 can realize distance measurement using the irradiation light of the ceiling light 81 without being provided with its own light source.
<Modulation frequency estimation processing>
Next, the modulation frequency estimation processing will be described with reference to the flowchart of FIG. 19. Note that this processing presupposes that the difference frequency between the modulation frequency of the other light source and that of the device's own light source is roughly known based on the relationships described with reference to FIG. 14; accordingly, the period of the difference frequency is assumed to have been roughly obtained as described with reference to FIG. 14.
In step S151, the control unit 31 initializes to 0 a counter N that identifies each of the integration times (Integ Time) obtained by dividing the period of the difference frequency into four equal parts.
For example, when the period of the difference frequency is 10 us as described with reference to FIG. 15, dividing it into four equal parts gives 2.5 us, so four integration times of 2.5 us, 5.0 us, 7.5 us, and 10 us are set. The counter N identifies these four integration times: N = 0 indicates the integration time of 2.5 us, N = 1 indicates 5.0 us, N = 2 indicates 7.5 us, and N = 3 indicates 10 us.
In step S152, the control unit 31 controls the light modulation unit 21 to stop the light emission of the light emitting diode 22.
In step S153, the control unit 31 controls the first tap TA (TapA) of each pixel 51 of the TOF sensor 26 to perform exposure at timings synchronized with the light emission timing of the light emitting diode 22, its own light source, detect the pixel signals, and output them to the calculation unit 29 via the synchronization processing unit 28.
In step S154, the calculation unit 29 integrates and stores the pixel signals that are the exposure results of TapA.
In step S155, the control unit 31 controls the second tap TB (TapB) of each pixel 51 of the TOF sensor 26 to perform exposure at the timings when the light emitting diode 22, its own light source, would be off, detect the pixel signals, and output them to the calculation unit 29 via the synchronization processing unit 28.
In step S156, the calculation unit 29 integrates and stores the pixel signals that are the exposure results of TapB.
In step S157, the calculation unit 29 determines whether or not the integration time corresponding to the counter N has elapsed; if it has not, the processing returns to step S153. That is, the pixel signals that are the exposure results of TapA and TapB are integrated sequentially until the integration time corresponding to the counter N elapses.
When it is determined in step S157 that the integration time corresponding to the counter N has elapsed, the processing proceeds to step S158.
In step S158, the calculation unit 29 calculates the difference I between the stored integration result of the pixel signals from the TapA exposures and the stored integration result of the pixel signals from the TapB exposures, and stores it as the sampling result for the integration time corresponding to the counter N.
In step S159, the control unit 31 determines whether or not the counter N is 3, that is, whether or not the differences I for all the integration times have been sampled.
If the counter N is not 3 in step S159, the processing proceeds to step S160.
In step S160, the control unit 31 increments the counter N by 1, and the processing returns to step S153. That is, by repeating the processing of steps S153 to S159, the difference I between the integration result of the pixel signals from the TapA exposures and that from the TapB exposures is calculated sequentially for the integration time associated with the counter N, and this is repeated until the differences I for the four integration times have been sampled.
Then, when it is determined in step S159 that the differences between the integration results of the pixel signals from the TapA exposures and those from the TapB exposures have been calculated and sampled for all four integration times, that is, when the counter N is 3, the processing proceeds to step S160.
In step S160, the calculation unit 29 estimates the modulation frequency of the other light source using the differences I, sampled at the four integration times (Integ Time), between the integration results of the pixel signals from the TapA exposures and those from the TapB exposures, as described with reference to FIG. 15.
That is, the calculation unit 29 estimates, from the sampling results of the difference I, the waveform of the difference frequency between the modulation frequency of its own light source and the modulation frequency of the other light source.
Further, the calculation unit 29 estimates the modulation frequency of the other light source based on the estimated difference frequency and the modulation frequency of its own light source.
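The acquisition part of this flowchart could be sketched as follows; the per-slice exposure hooks are hypothetical stubs (a real implementation would read the taps with the own emitter kept off), and samples would then feed a waveform match such as the one sketched earlier:

```python
import random

def expose_tap_a() -> float:
    """Hypothetical stub: one TapA exposure slice, own emitter kept off."""
    return random.random()

def expose_tap_b() -> float:
    """Hypothetical stub: one TapB exposure slice."""
    return random.random()

integ_times = [2.5e-6, 5.0e-6, 7.5e-6, 10.0e-6]  # counter N = 0..3 (step S151)
slice_s = 0.5e-6                                 # assumed duration of one slice

samples = []
for t_integ in integ_times:                      # loop over N (steps S159/S160)
    acc_a = acc_b = 0.0
    elapsed = 0.0
    while elapsed < t_integ:                     # steps S153 to S157
        acc_a += expose_tap_a()                  # accumulate TapA (S153/S154)
        acc_b += expose_tap_b()                  # accumulate TapB (S155/S156)
        elapsed += slice_s
    samples.append(acc_a - acc_b)                # difference I for this N (S158)
```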
By the above processing, the modulation frequency of the irradiation light of the other light source can be estimated.
As a result, by controlling the integration times of TapA and TapB based on the estimated modulation frequency of the other light source, distance measurement and depth image generation using only the irradiation light of the other light source, without using the irradiation light of the device's own light source, can be realized.
In the above, an example has been described in which the differences I are sampled at integration times (Integ Time) obtained by dividing the period of the difference frequency with respect to the other light source's modulation frequency into four equal parts. However, the period of the difference frequency may be divided into more than four parts to increase the number of samples of the difference I. Increasing the number of samples in this way makes it possible to estimate the modulation frequency of the other light source with higher accuracy.
The effects described in the present specification are merely examples and are not limiting; other effects may be obtained.
<<5. Application example to mobile bodies>>

The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
FIG. 20 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 20, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050. As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.
The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
The vehicle exterior information detection unit 12030 detects information about the outside of the vehicle equipped with the vehicle control system 12000. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform detection processing for objects such as people, vehicles, obstacles, signs, and characters on the road surface, or distance detection processing.
The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light. The imaging unit 12031 can output the electric signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
The in-vehicle information detection unit 12040 detects information about the inside of the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the in-vehicle information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or determine whether the driver is dozing off, based on the detection information input from the driver state detection unit 12041.
The microcomputer 12051 can calculate control target values for the driving force generator, the steering mechanism, or the braking device based on the information about the inside and outside of the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, follow-up driving based on inter-vehicle distance, constant-speed driving, vehicle collision warning, and vehicle lane departure warning.
Further, the microcomputer 12051 can perform cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
Further, the microcomputer 12051 can output control commands to the body system control unit 12020 based on the information about the outside of the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at preventing glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or the outside of the vehicle of information. In the example of FIG. 20, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
FIG. 21 is a diagram showing an example of the installation positions of the imaging unit 12031.
In FIG. 21, the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 12100. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided at the upper part of the windshield in the vehicle interior mainly acquire images ahead of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires images behind the vehicle 12100. The forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
FIG. 21 also shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above is obtained.
At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, in particular the closest three-dimensional object on the travelling path of the vehicle 12100 that is travelling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured in front of the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation.
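As an illustration of this selection logic, here is a minimal sketch of how the temporal change of distance yields a relative speed and how the nearest same-direction object on the ego path is picked. The names (Obstacle, pick_preceding_vehicle), the 0.1 s frame interval, and the on_ego_path flag are hypothetical, not from the patent.

```python
# Minimal sketch of the preceding-vehicle selection described above.
# Not from the patent; all names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float       # current distance from imaging-derived data
    prev_distance_m: float  # distance one frame earlier
    on_ego_path: bool       # lies on the travelling path of the vehicle

def absolute_speed_kmh(o: Obstacle, ego_speed_kmh: float, dt_s: float) -> float:
    # The temporal change of distance gives the relative speed; adding the
    # ego speed gives the obstacle's own speed over ground.
    relative_kmh = (o.distance_m - o.prev_distance_m) / dt_s * 3.6
    return ego_speed_kmh + relative_kmh

def pick_preceding_vehicle(obstacles, ego_speed_kmh, dt_s=0.1,
                           min_speed_kmh=0.0):
    """Nearest obstacle on the ego path moving in roughly the same
    direction at min_speed_kmh or more (e.g. 0 km/h or more)."""
    candidates = [o for o in obstacles
                  if o.on_ego_path
                  and absolute_speed_kmh(o, ego_speed_kmh, dt_s) >= min_speed_kmh]
    return min(candidates, key=lambda o: o.distance_m, default=None)
```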
For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic obstacle avoidance. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is at or above a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
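One crude way to turn distance and closing speed into the collision-risk check described above is a time-to-collision threshold. The sketch below is illustrative only; the 5 s horizon, the 0.6 threshold, and the stub actions are assumptions, not values from the patent.

```python
# Illustrative time-to-collision (TTC) based risk check; thresholds,
# names, and stub actions are hypothetical.
def warn_driver():
    print("collision warning")               # e.g. via speaker 12061 / display 12062

def request_deceleration():
    print("forced deceleration requested")   # e.g. via drive system control unit 12010

def collision_risk(distance_m: float, closing_speed_ms: float) -> float:
    """Risk score in [0, 1] derived from time-to-collision."""
    if closing_speed_ms <= 0.0:              # not closing in on the obstacle
        return 0.0
    ttc_s = distance_m / closing_speed_ms
    return max(0.0, min(1.0, 1.0 - ttc_s / 5.0))  # 5 s horizon (assumed)

def assist(distance_m: float, closing_speed_ms: float, threshold: float = 0.6):
    # Warn and decelerate only when the risk is at or above the set value.
    if collision_risk(distance_m, closing_speed_ms) >= threshold:
        warn_driver()
        request_deceleration()
```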
At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so as to superimpose a rectangular contour line for emphasis on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
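The patent describes the recognition as feature-point extraction followed by pattern matching on the object contour, without giving code. As a rough analogue only, the sketch below uses OpenCV's stock HOG pedestrian detector and draws the emphasizing rectangular contour; this is not the patent's method, and it assumes an 8-bit BGR frame.

```python
# Rough analogue of the pedestrian recognition + highlighting step using
# OpenCV's stock HOG people detector (illustrative; the patent describes
# feature-point extraction and pattern matching, not necessarily HOG).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def highlight_pedestrians(frame):
    # detectMultiScale returns candidate bounding boxes (and weights).
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        # Superimpose the emphasizing rectangular contour line.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame
```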
An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. The technology according to the present disclosure can be applied to, for example, the imaging unit 12031 among the configurations described above. Specifically, the TOF sensor 26 of FIG. 6 can be applied to the imaging unit 12031. By applying the technology according to the present disclosure to the imaging unit 12031, it becomes possible to measure distance with high accuracy even in an environment where a plurality of distance measuring devices exist.
The present disclosure may also have the following configurations.
<1> A distance measuring device including:
a light receiving unit that receives reflected light generated when irradiation light is reflected by an object in a detection region;
a calculation unit that calculates the distance to the object from the phase difference between the irradiation light and the reflected light; and
a control unit that controls the light receiving unit and the calculation unit so as to suppress the influence of collision, in the light receiving unit, between required irradiation light, which is irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is irradiation light not used for calculating the distance to the object.
<2> The distance measuring device according to <1>, wherein the required irradiation light used for calculating the distance to the object is irradiation light for which the information required for calculating the distance to the object is known.
<3> The distance measuring device according to <2>, wherein the information required for calculating the distance to the object is information on the modulation frequency and phase of the irradiation light.
<4> The distance measuring device according to <1>, further including an irradiation unit that irradiates the detection region with the pulsed irradiation light as the required irradiation light, wherein the control unit causes the light receiving unit to receive required reflected light, which is reflected light generated when the required irradiation light emitted by the irradiation unit is reflected by an object in the detection region, and controls the calculation unit to calculate the distance to the object from the phase difference between the required irradiation light emitted by the irradiation unit and the required reflected light.
<5> The distance measuring device according to <4>, wherein the control unit controls the light receiving unit so that some of the pixels receive the required reflected light needed for distance measurement and the pixels other than the some receive the unnecessary irradiation light.
<6> The distance measuring device according to <5>, wherein the control unit operates the irradiation unit by setting a period for emitting the required irradiation light and a period for turning the light off, causes the light receiving unit to expose the some of the pixels in a phase corresponding to the timing at which the irradiation unit emits the required irradiation light and to expose the pixels other than the some in a phase corresponding to the timing at which the irradiation unit is turned off, and controls the calculation unit to calculate the distance to the object from the phase difference between the required irradiation light and the required reflected light, based on the difference between the pixel signal generated by receiving the required reflected light in the phase corresponding to the timing at which the irradiation unit emits the required irradiation light and the pixel signal generated by receiving light in the phase corresponding to the timing at which the irradiation unit is turned off.
<7> The distance measuring device according to <6>, wherein the control unit controls the calculation unit to calculate the distance to the object based on the difference between the integrated value of the pixel signals generated by receiving the required reflected light in the phase corresponding to the timing at which the irradiation unit emits the required irradiation light and the integrated value of the pixel signals generated by receiving light in the phase corresponding to the timing at which the irradiation unit is turned off.
<8> The distance measuring device according to <4>, wherein the control unit controls the light receiving unit so that some of the pixels receive the required reflected light in a phase corresponding to the timing at which the irradiation unit emits the required irradiation light and receive light over a plurality of integration times in a phase corresponding to the timing at which the irradiation unit is turned off, causes the calculation unit to determine the presence or absence of the collision based on the integrated values of the pixel signals generated by receiving light over the plurality of integration times, sets, for the irradiation unit, a period for emitting the required irradiation light and a period for turning the light off, and causes the irradiation unit to change the length of the turned-off period when it is determined that the collision has occurred.
<9> The distance measuring device according to <8>, wherein, when it is determined that the collision has occurred, the control unit controls the irradiation unit to change the length of the turned-off period at random.
<10> The distance measuring device according to <8>, wherein the calculation unit determines the presence or absence of the collision based on a comparison between a predetermined threshold and the standard deviation of the integrated values of the pixel signals generated by receiving light over the plurality of integration times.
<11> The distance measuring device according to <4>, wherein the control unit causes the light receiving unit to alternately expose some of the pixels and the pixels other than the some, each for half the modulation period of the modulation frequency of the required irradiation light in the irradiation unit, causes the calculation unit to estimate the modulation frequency of another light source different from the irradiation unit based on the integrated values, over a plurality of integration times, of the pixel signals detected by the light receiving unit, causes the light receiving unit to alternately expose the some of the pixels and the pixels other than the some, each for half the modulation period of the estimated modulation frequency of the other light source, and controls the calculation unit to calculate the distance to an object from the phase difference between the irradiation light of the other light source and the reflected light of the irradiation light of the other light source.
<12> The distance measuring device according to <11>, wherein the control unit controls the calculation unit to estimate the modulation frequency of the other light source based on the difference, over the plurality of integration times, between the integrated value of the pixel signals detected by the some of the pixels in the light receiving unit and the integrated value of the pixel signals detected by the pixels other than the some in the light receiving unit.
<13> The distance measuring device according to <12>, wherein the control unit controls the calculation unit to estimate, based on the difference, over the plurality of integration times, between the integrated value of the pixel signals detected by the some of the pixels in the light receiving unit and the integrated value of the pixel signals detected by the pixels other than the some in the light receiving unit, the difference frequency between the modulation frequency of the other light source and the modulation frequency of the required irradiation light emitted by the irradiation unit, and to estimate the modulation frequency of the other light source based on the difference frequency.
<14> The distance measuring device according to <11>, wherein the plurality of integration times are multiples of a time obtained by dividing into a plurality of parts the modulation period of the approximate modulation frequency of the other light source.
<15> The distance measuring device according to <11>, further including a depth image generation unit that generates a depth image based on the information on the distance to the object calculated by the calculation unit, wherein the control unit causes the depth image generation unit to generate a first depth image based on the information on the distance to the object calculated from the pixel signals detected when the required reflected light corresponding to the required irradiation light of the irradiation unit is received by the light receiving unit under control based on the modulation frequency of the required irradiation light emitted by the irradiation unit, and a second depth image based on the information on the distance to the object calculated from the pixel signals detected when the reflected light corresponding to the irradiation light of the other light source is received by the light receiving unit under control based on the modulation frequency of the other light source, turns off the irradiation unit, and causes the calculation unit to calculate the position and distance of the other light source based on the first depth image and the second depth image, and to calculate the distance to the object based on the position and distance of the other light source and on the phase difference between the irradiation light emitted as required irradiation light from the other light source and the reflected light generated when that irradiation light is reflected by the object as the required reflected light.
<16> The distance measuring device according to any one of <1> to <15>, wherein the light receiving unit is a TOF (Time of Flight) sensor.
<17> A distance measuring method including:
light receiving processing that receives reflected light generated when irradiation light is reflected by an object in a detection region;
calculation processing that calculates the distance to the object from the phase difference between the irradiation light and the reflected light; and
control processing that controls the light receiving processing and the calculation processing so as to suppress the influence of collision, in the light receiving processing, between required irradiation light, which is irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is irradiation light not used for calculating the distance to the object.
11 depth sensor, 21 light modulation unit, 22 light emitting diode, 23 light projection lens, 24 light receiving lens, 25 filter, 26 TOF sensor, 27 image storage unit, 28 synchronization processing unit, 29 calculation unit, 30 depth image generation unit, 31 control unit, 40 pixel array unit, 41 tap drive unit, 42 vertical drive unit, 47 vertical signal line, 48 voltage supply line, 51 pixel
Claims (17)
- 検出領域内の物体により照射光が反射されることにより生じる反射光を受光する受光部と、
前記照射光と前記反射光との位相差により前記物体までの距離を演算する演算部と、
前記受光部における、前記物体までの距離の演算に用いられる照射光である必要照射光と、前記物体までの距離の演算に用いられない照射光である不要照射光との衝突の影響を抑制するように、前記受光部および前記演算部を制御する制御部と
を含む測距装置。 A light receiving part that receives the reflected light generated by the reflection of the irradiation light by an object in the detection area,
An arithmetic unit that calculates the distance to the object based on the phase difference between the irradiation light and the reflected light.
Suppresses the influence of collision between the required irradiation light, which is the irradiation light used for calculating the distance to the object, and the unnecessary irradiation light, which is the irradiation light not used for calculating the distance to the object, in the light receiving unit. As described above, a distance measuring device including the light receiving unit and the control unit that controls the calculation unit. - 前記物体までの距離の演算に用いられる前記必要照射光は、前記物体までの距離の演算に必要とされる情報が既知の前記照射光である
請求項1に記載の測距装置。 The distance measuring device according to claim 1, wherein the required irradiation light used for calculating the distance to the object is the irradiation light for which the information required for calculating the distance to the object is known. - 前記物体までの距離の演算に必要とされる情報は、前記照射光の変調周波数および位相の情報である
請求項2に記載の測距装置。 The distance measuring device according to claim 2, wherein the information required for calculating the distance to the object is information on the modulation frequency and phase of the irradiation light. - 前記検出領域に対して、パルス状の前記照射光を前記必要照射光として照射する照射部をさらに含み、
前記制御部は、
前記受光部に、前記照射部により照射された前記必要照射光が、前記検出領域内の物体により、反射されることにより生じる反射光である必要反射光を受光させ、
前記演算部に、前記照射部により照射された前記必要照射光と前記必要反射光との位相差により前記物体までの距離を演算させるように制御する
請求項1に記載の測距装置。 An irradiation unit that irradiates the detection region with the pulsed irradiation light as the necessary irradiation light is further included.
The control unit
The light receiving unit is made to receive the required reflected light, which is the reflected light generated when the required irradiation light irradiated by the irradiation unit is reflected by an object in the detection region.
The distance measuring device according to claim 1, wherein the calculation unit is controlled to calculate the distance to the object by the phase difference between the required irradiation light irradiated by the irradiation unit and the required reflected light. - 前記制御部は、前記受光部に、画素の一部で測距に必要な前記必要反射光を受光させ、前記一部以外の画素で前記不要照射光を受光させるように制御する
請求項4に記載の測距装置。 According to claim 4, the control unit controls the light receiving unit to receive the necessary reflected light required for distance measurement with a part of the pixels and to receive the unnecessary irradiation light with the pixels other than the part. The distance measuring device described. - 前記制御部は、
前記照射部に、前記必要照射光を照射させる期間と、消灯させる期間とを設定して動作させ、
前記受光部に、前記画素の一部を、前記照射部により前記必要照射光が照射されるタイミングと対応する位相で前記必要反射光を受光させ、前記一部以外の画素で前記照射部が消灯しているタイミングと対応する位相で受光させ、
前記演算部に、前記照射部が前記必要照射光を照射するタイミングと対応する位相で前記必要反射光を受光することにより生成される画素信号と、前記照射部が消灯しているタイミングと対応する位相で受光することにより生成される画素信号との差分に基づいて、前記必要照射光と前記必要反射光との位相差により前記物体までの距離を演算させるように制御する
請求項5に記載の測距装置。 The control unit
The irradiation unit is operated by setting a period for irradiating the required irradiation light and a period for turning off the light.
A part of the pixel is received by the light receiving unit at a phase corresponding to the timing when the required irradiation light is irradiated by the irradiation unit, and the irradiation unit is turned off by pixels other than the part. Receive light in the phase corresponding to the timing of the light
The pixel signal generated by receiving the required reflected light in the phase corresponding to the timing when the irradiation unit irradiates the required irradiation light to the calculation unit corresponds to the timing when the irradiation unit is turned off. The fifth aspect of claim 5, wherein the distance to the object is controlled to be calculated by the phase difference between the required irradiation light and the required reflected light based on the difference from the pixel signal generated by receiving light in phase. Distance measuring device. - 前記制御部は、
前記演算部に、前記照射部が前記必要照射光を照射するタイミングと対応する位相で前記必要反射光を受光することにより生成される画素信号の積算値と、前記照射部が消灯しているタイミングと対応する位相で受光することにより生成される画素信号の積算値との差分に基づいて、前記物体までの距離を演算させるように制御する
請求項6に記載の測距装置。 The control unit
The integrated value of the pixel signal generated by receiving the required reflected light in the phase corresponding to the timing at which the irradiation unit irradiates the calculation unit with the required irradiation light, and the timing at which the irradiation unit is turned off. The distance measuring device according to claim 6, wherein the distance to the object is controlled to be calculated based on the difference from the integrated value of the pixel signal generated by receiving light in the phase corresponding to the above. - 前記制御部は、
前記受光部に、画素の一部が、前記照射部により前記必要照射光が照射されるタイミングと対応する位相で前記必要反射光を受光し、前記照射部が消灯しているタイミングと対応する位相で、複数の積算時間で受光させるように制御し、
前記演算部に、前記複数の積算時間で受光することにより生成される画素信号の積算値に基づいて、前記衝突の有無を判定させ、
前記照射部に、前記必要照射光を照射させる期間と、消灯させる期間とを設定すると共に、前記衝突が発生していると判定された場合、前記消灯させる期間の長さを変更させるように制御する
請求項4に記載の測距装置。 The control unit
A part of the pixel receives the required reflected light on the light receiving unit at a phase corresponding to the timing when the required irradiation light is irradiated by the irradiation unit, and the phase corresponds to the timing when the irradiation unit is turned off. So, control so that the light is received at multiple integration times,
The calculation unit is made to determine the presence or absence of the collision based on the integrated value of the pixel signals generated by receiving light at the plurality of integrated times.
A period for irradiating the required irradiation light to the irradiation unit and a period for turning off the light are set, and when it is determined that the collision has occurred, the length of the period for turning off the light is changed. The distance measuring device according to claim 4. - 前記制御部は、前記衝突が発生していると判定された場合、前記照射部に、前記消灯させる期間の長さを、ランダムに変更させるように制御する
請求項8に記載の測距装置。 The distance measuring device according to claim 8, wherein the control unit controls the irradiation unit to randomly change the length of the extinguishing period when it is determined that the collision has occurred. - 前記演算部は、前記複数の積算時間で受光されることにより生成される画素信号の積算値の標準偏差と、所定の閾値との比較に基づいて、前記衝突の有無を判定する
請求項8に記載の測距装置。 According to claim 8, the calculation unit determines the presence or absence of the collision based on a comparison between the standard deviation of the integrated value of the pixel signal generated by receiving light at the plurality of integrated times and a predetermined threshold value. The described ranging device. - 前記制御部は、
前記受光部に、画素の一部と、前記一部以外とを、それぞれ前記照射部における前記必要照射光の変調周波数の変調周期の半周期ずつ交互に露光させ、
前記演算部に、前記受光部により検出される画素信号の、複数の積算時間における積算値に基づいて、前記照射部とは異なる他の光源の変調周波数を推定させ、
前記受光部に、画素の一部と、前記一部以外とを、それぞれ推定された前記他の光源の変調周波数の変調周期の半周期ずつ交互に露光させ、
前記演算部に、前記他の光源の照射光と、前記他の光源の照射光の反射光との位相差により物体までの距離を演算させるように制御する
請求項4に記載の測距装置。 The control unit
A part of the pixel and a part other than the part of the pixel are alternately exposed to the light receiving unit for half a cycle of the modulation cycle of the modulation frequency of the required irradiation light in the irradiation unit.
The calculation unit is made to estimate the modulation frequency of another light source different from the irradiation unit based on the integrated values of the pixel signals detected by the light receiving unit at a plurality of integration times.
A part of the pixel and a part other than the part of the pixel are alternately exposed to the light receiving portion for half a cycle of the modulation cycle of the modulation frequency of the other light source estimated.
The distance measuring device according to claim 4, wherein the calculation unit is controlled to calculate the distance to an object by the phase difference between the irradiation light of the other light source and the reflected light of the irradiation light of the other light source. - 前記制御部は、前記演算部に、前記複数の積算時間における、前記受光部における画素の一部により検出される画素信号の積算値と、前記受光部における画素の一部以外により検出される画素信号の積算値との差分に基づいて、前記他の光源の変調周波数を推定させるように制御する
請求項11に記載の測距装置。 The control unit tells the calculation unit that the integrated value of the pixel signal detected by a part of the pixels in the light receiving unit and the pixels detected by other than a part of the pixels in the light receiving unit during the plurality of integration times. The distance measuring device according to claim 11, wherein the modulation frequency of the other light source is controlled to be estimated based on the difference from the integrated value of the signal. - 前記制御部は、前記演算部に、前記複数の積算時間における、前記受光部における画素の一部により検出される画素信号の積算値と、前記受光部における画素の一部以外により検出される画素信号の積算値との差分に基づいて、前記他の光源の変調周波数と、前記照射部により照射される前記必要照射光の変調周波数との差分周波数を推定させ、前記差分周波数に基づいて、前記他の光源の変調周波数を推定させるように制御する
請求項12に記載の測距装置。 The control unit tells the calculation unit that the integrated value of the pixel signal detected by a part of the pixels in the light receiving unit and the pixels detected by other than a part of the pixels in the light receiving unit during the plurality of integration times. The difference frequency between the modulation frequency of the other light source and the modulation frequency of the required irradiation light irradiated by the irradiation unit is estimated based on the difference from the integrated value of the signal, and the difference frequency is estimated based on the difference frequency. The distance measuring device according to claim 12, wherein the modulation frequency of another light source is controlled to be estimated. - 前記複数の積算時間は、前記他の光源の変調周波数の凡その変調周波数の変調周期を複数に分割した時間の倍数である
- The distance measuring device according to claim 11, further including a depth image generation unit that generates a depth image based on the information on the distance to the object calculated by the calculation unit, wherein the control unit:
causes the depth image generation unit to generate a first depth image based on the information on the distance to the object calculated from the pixel signals detected when the required reflected light corresponding to the required irradiation light of the irradiation unit is received by the light receiving unit under control based on the modulation frequency of the required irradiation light emitted by the irradiation unit, and a second depth image based on the information on the distance to the object calculated from the pixel signals detected when the irradiation light of the other light source is received by the light receiving unit under control based on the modulation frequency of the other light source;
turns off the irradiation unit;
causes the calculation unit to calculate the position and distance of the other light source based on the first depth image and the second depth image; and
causes the calculation unit to calculate the distance to the object based on the phase difference between the irradiation light emitted as the required irradiation light from the other light source and the reflected light produced when that irradiation light is reflected by the object as the required reflected light, as well as on the position and distance of the other light source.
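For the final step of the claim above, one plausible geometric reading is that the phase of the other source's light measures the total path from that source to the object and on to the sensor, so that, once the source position is known, the object range follows from intersecting the pixel's viewing ray with the corresponding ellipsoid. The sketch below works this out under that assumption; the coordinates, frequencies, and names are illustrative, not taken from the patent.

```python
# Sketch of the last step above under one geometric reading: the phase of the
# other source's light measures the total path L = |p_src - x| + |x| from the
# source to the object x and on to the sensor (placed at the origin), and the
# object range r along the pixel's unit viewing ray u then follows from
#   |p_src - r*u| + r = L  =>  r = (L^2 - |p_src|^2) / (2 * (L - p_src . u)).
# Positions, frequencies, and names are illustrative assumptions.
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def range_from_other_source(phase_rad, f_mod_hz, p_src, u):
    """Object range along unit ray u from the measured phase of the other
    light source's light, its modulation frequency, and its position."""
    total_path = C * phase_rad / (2 * np.pi * f_mod_hz)  # L in metres
    p = np.asarray(p_src, dtype=float)
    return (total_path**2 - p.dot(p)) / (2.0 * (total_path - p.dot(u)))

# Example: source 2 m to the sensor's left, object 3 m straight ahead.
u = np.array([0.0, 0.0, 1.0])
p_src = np.array([-2.0, 0.0, 0.0])
r_true = 3.0
L = np.linalg.norm(p_src - r_true * u) + r_true   # forward model
phase = 2 * np.pi * 20e6 * L / C                  # 20 MHz modulation
print(range_from_other_source(phase, 20e6, p_src, u))  # ~3.0
```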
- The distance measuring device according to claim 1, wherein the light receiving unit is a TOF (Time of Flight) sensor.
- A distance measuring method including:
a light receiving process of receiving reflected light generated when irradiation light is reflected by an object in a detection region;
a calculation process of calculating the distance to the object from the phase difference between the irradiation light and the reflected light; and
a control process of controlling the light receiving process and the calculation process so as to suppress the influence of collision, in the light receiving process, between required irradiation light, which is irradiation light used for calculating the distance to the object, and unnecessary irradiation light, which is irradiation light not used for calculating the distance to the object.
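For reference, the phase-difference ranging that both the device and the method claims rest on is commonly implemented with four correlation samples per pixel. The sketch below shows that textbook recipe, not code from the patent; the tap sign convention is an assumption and varies between sensors.

```python
# Textbook sketch of the phase-difference ranging underlying the method above,
# using the common four-phase (0, 90, 180, 270 degree) demodulation. This is
# the standard continuous-wave ToF recipe, not code from the patent; the tap
# sign convention varies between sensors.
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(a0, a90, a180, a270, f_mod_hz):
    """Distance from four correlation samples of one pixel."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    # Round trip: phase = 4*pi*f*d/c, hence d = c*phase / (4*pi*f).
    return C * phase / (4 * math.pi * f_mod_hz)

# Example: samples consistent with a 1.5 m target at 20 MHz modulation
# (unambiguous range c / (2 * f) is about 7.5 m).
d_true = 1.5
phi = 4 * math.pi * 20e6 * d_true / C
a0, a90, a180, a270 = (math.cos(phi - k * math.pi / 2) for k in range(4))
print(tof_distance(a0, a90, a180, a270, 20e6))  # ~1.5
```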
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019052129A JP2020153799A (en) | 2019-03-20 | 2019-03-20 | Distance measuring device and distance measuring method |
JP2019-052129 | 2019-03-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020189339A1 (en) | 2020-09-24 |
Family
ID=72520989
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2020/009677 WO2020189339A1 (en) | 2019-03-20 | 2020-03-06 | Distance measuring device and distance measuring method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2020153799A (en) |
WO (1) | WO2020189339A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112653847B (en) * | 2020-12-17 | 2022-08-05 | 杭州艾芯智能科技有限公司 | Automatic exposure method of depth camera, computer device and storage medium |
WO2022201804A1 (en) * | 2021-03-25 | 2022-09-29 | Sony Semiconductor Solutions Corporation | Information processing device, information processing method, and program |
WO2023009100A1 (en) * | 2021-07-26 | 2023-02-02 | Hewlett-Packard Development Company, L.P. | Audio signals output |
- 2019-03-20: JP JP2019052129A patent/JP2020153799A/en, active, Pending
- 2020-03-06: WO PCT/JP2020/009677 patent/WO2020189339A1/en, active, Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006190791A (en) * | 2005-01-05 | 2006-07-20 | Matsushita Electric Works Ltd | Photo detector, control method of photo detector, and space information detector |
JP2016090436A (en) * | 2014-11-06 | 2016-05-23 | Denso Corp. | Time-of-flight optical ranging device |
WO2017013857A1 (en) * | 2015-07-22 | 2017-01-26 | Panasonic Intellectual Property Management Co., Ltd. | Distance measurement device |
WO2017022219A1 (en) * | 2015-08-04 | 2017-02-09 | Panasonic Intellectual Property Management Co., Ltd. | Solid state image pickup device driving method |
JP2017190978A (en) * | 2016-04-12 | 2017-10-19 | Murata Manufacturing Co., Ltd. | Distance sensor, electronic device, and manufacturing method for electronic device |
JP2017198477A (en) * | 2016-04-25 | 2017-11-02 | Stanley Electric Co., Ltd. | Distance image generation device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112799087A (en) * | 2020-12-30 | 2021-05-14 | 艾普柯微电子(江苏)有限公司 | Distance measuring method and distance measuring device |
CN115396607A (en) * | 2022-08-26 | 2022-11-25 | 天津大学 | TOF imaging 2-tap pixel modulation resolving method |
Also Published As
Publication number | Publication date |
---|---|
JP2020153799A (en) | 2020-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020189339A1 (en) | Distance measuring device and distance measuring method | |
US10746874B2 (en) | Ranging module, ranging system, and method of controlling ranging module | |
JP7214363B2 (en) | Ranging processing device, ranging module, ranging processing method, and program | |
CN109804266B (en) | Distance measuring device and distance measuring method | |
WO2021085128A1 (en) | Distance measurement device, measurement method, and distance measurement system | |
WO2020184224A1 (en) | Ranging device and skew correction method | |
US20210293958A1 (en) | Time measurement device and time measurement apparatus | |
WO2021065495A1 (en) | Ranging sensor, signal processing method, and ranging module | |
WO2021065494A1 (en) | Distance measurement sensor, signal processing method, and distance measurement module | |
JP7030607B2 (en) | Distance measurement processing device, distance measurement module, distance measurement processing method, and program | |
WO2020158378A1 (en) | Ranging device, ranging method, and program | |
WO2021085123A1 (en) | Photoreceiver device, ranging device, and photoreceiver circuit | |
WO2021059682A1 (en) | Solid-state imaging element, electronic device, and solid-state imaging element control method | |
WO2022004441A1 (en) | Ranging device and ranging method | |
US20240111033A1 (en) | Photodetection device and photodetection system | |
JP7517349B2 (en) | Signal processing device, signal processing method, and distance measuring device | |
WO2021065500A1 (en) | Distance measurement sensor, signal processing method, and distance measurement module | |
WO2020255855A1 (en) | Ranging device and ranging method | |
WO2023149335A1 (en) | Ranging device, and ranging method | |
US20230228875A1 (en) | Solid-state imaging element, sensing system, and control method of solid-state imaging element | |
WO2023181662A1 (en) | Range-finding device and range-finding method | |
WO2023079944A1 (en) | Control device, control method, and control program | |
WO2021131684A1 (en) | Ranging device, method for controlling ranging device, and electronic apparatus | |
WO2021166344A1 (en) | Sensing system and distance measurement system | |
WO2023281810A1 (en) | Distance measurement device and distance measurement method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20773649; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20773649; Country of ref document: EP; Kind code of ref document: A1 |