WO2020235458A1 - Image processing device, method, and electronic apparatus - Google Patents

Image processing device, method, and electronic apparatus

Info

Publication number
WO2020235458A1
WO2020235458A1 (PCT/JP2020/019368)
Authority
WO
WIPO (PCT)
Prior art keywords
light
signal
ranging
wavelength
depth
Prior art date
Application number
PCT/JP2020/019368
Other languages
English (en)
Japanese (ja)
Inventor
友希 鴇崎
諭志 河田
神尾 和憲
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2020235458A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • This disclosure relates to image processing devices, methods and electronic devices.
  • A ToF sensor irradiates a distance measurement target with ranging light, and measures the distance to each measurement target position (each part of the target) based on the difference between the phase of the irradiated light and the phase of the reflected light.
  • The present disclosure has been made in view of this situation, and aims to provide an image processing device, method, and electronic device capable of reducing the influence of disturbance caused by external light or by differences in reflectance within the measurement target, and of performing distance measurement at each measurement target position more accurately.
  • The image processing apparatus of the present disclosure includes: a difference calculation unit that calculates, based on a received signal of the reflected light of a first ranging light of a first wavelength and a received signal of the reflected light of a second ranging light of a second wavelength, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and an output unit that, based on the difference, outputs a depth signal based on the in-phase component signal and the orthogonal component signal corresponding to either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light.
  • FIG. 1 is an explanatory diagram of the principle of the image processing apparatus of the embodiment.
  • Here, the sunlight spectrum after passing through the atmosphere means, for example, the sunlight spectrum under the Air Mass 1.5 conditions defined by ASTM (American Society for Testing and Materials).
  • The image processing device 10 of the embodiment includes a difference calculation unit 11 and an output unit 12.
  • The difference calculation unit 11 calculates the difference d between the reflection intensity of the first ranging light L1 and the reflection intensity of the second ranging light L2, based on the received signal SL1 of the reflected light of the first ranging light L1 of a first wavelength λ1, at which the spectral radiant intensity of the sunlight spectrum is relatively high, and the received signal SL2 of the reflected light of the second ranging light L2 of a second wavelength λ2 (> λ1), at which the spectral radiant intensity is relatively low.
  • Based on the difference d calculated by the difference calculation unit 11, the output unit 12 outputs the depth signal SDP corresponding to either the in-phase component signal I1 and orthogonal component signal Q1 of the received signal SL1, or the in-phase component signal I2 and orthogonal component signal Q2 of the received signal SL2.
  • FIG. 2 is an explanatory diagram of an example of the sunlight spectrum after passing through the atmosphere.
  • Because the first ranging light L1 lies where the sunlight spectrum is strong, its reflected light cannot be separated from sunlight. Distance measurement using the first ranging light L1 may therefore lose accuracy under sunlight.
  • The second ranging light L2 has a relatively low spectral radiant intensity in the sunlight spectrum; that is, it is a component contained only slightly in sunlight after passing through the atmosphere. The second ranging light L2 can therefore be separated from sunlight and is not easily affected by it.
  • On the other hand, since the wavelength of the second ranging light L2 is relatively long, blurring or scattering in the lens may become a problem. Note that the difference in wavelength itself has no influence on the distance measurement (no influence on the phase difference).
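  • To make the last point concrete, here is the standard indirect-ToF relation (a well-known formula, not one given in this publication): the distance d is recovered from the phase difference Δφ of the modulation envelope as d = c·Δφ / (4π·f_mod), where c is the speed of light and f_mod is the modulation frequency of the ranging light. The optical carrier wavelength (850 nm or 940 nm) does not appear in this relation, which is why switching carriers does not change the phase measurement itself.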
  • For pixels not affected by sunlight, the depth signal SDP corresponding to the in-phase component signal I and the orthogonal component signal Q of the received signal SL1 is output. For the portion affected by sunlight (pixels affected by sunlight), the depth signal SDP corresponding to the in-phase component signal I and the orthogonal component signal Q of the received signal SL2 is output, since the second ranging light L2 can be separated from sunlight and is not easily affected by it.
  • The depth signal SDP can thus be obtained stably over the entire image by adaptively selecting, for each pixel, in scenes where areas exposed to sunlight and areas not exposed to it are mixed in one image (for example, under the shade of trees).
  • Indoors, where there is no influence of sunlight, the signal intensity of the received signal SL1 of the reflected light of the first ranging light L1 is a correct value. Outdoors, where sunlight has an influence, an offset due to sunlight is included, and the signal intensity of the received signal SL1 becomes a large value. In contrast, the signal intensity of the received signal SL2 of the reflected light of the second ranging light L2 is not affected by sunlight either indoors or outdoors, and is a correct value.
  • Let AL1 be the signal strength of the received signal SL1 and AL2 be the signal strength of the received signal SL2. When AL1 is significantly larger than AL2, the influence of sunlight is regarded as large, and the depth signal SDP is calculated using the received signal SL2 of the reflected light of the second ranging light L2. Otherwise, the influence of sunlight is regarded as small, and the depth signal SDP is calculated using the received signal SL1 of the reflected light of the first ranging light L1.
  • In this way, a correct depth signal can be calculated at any position, whether the influence of sunlight there is large or small. As a result, for example, camera control becomes stable, the image quality of photo processing improves, and obstacles can be detected robustly.
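  • A minimal sketch of this per-pixel selection rule follows (an illustration, not the patent's implementation); the arrays sl1/sl2 and a1/a2 holding the received signals and their strengths AL1/AL2, and the factor k deciding "significantly larger", are all assumptions:

```python
import numpy as np

def select_received_signal(sl1, sl2, a1, a2, k=1.5):
    """Use SL2 where sunlight inflates AL1; otherwise use SL1."""
    sunlit = a1 > k * a2               # AL1 much larger than AL2: sunlight offset suspected
    return np.where(sunlit, sl2, sl1)  # per-pixel choice of received signal
```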
  • FIG. 3 is a schematic block diagram of the image processing apparatus of the first embodiment.
  • As shown in FIG. 3, the image processing device 20 of the first embodiment includes an irradiation unit 21 that emits the first ranging light L1 and the second ranging light L2, a light receiving unit 22 that outputs the received signals SL1 and SL2, and a signal processing unit 23 that controls the irradiation unit 21 and generates the depth signal SDP based on the input received signal SL1 or SL2.
  • FIG. 4 is an explanatory diagram of the first aspect of the light receiving unit.
  • In the first aspect, the light receiving unit 22 includes a light receiving lens 22A, a beam splitter (half mirror) 22B, filters 22C-1 and 22C-2, a first TOF (Time of Flight) sensor 22D-1, and a second TOF sensor 22D-2.
  • The light receiving lens 22A collects the received light.
  • The beam splitter 22B splits the light received through the light receiving lens 22A into two systems.
  • The first TOF sensor 22D-1 receives light through the filter 22C-1 and outputs the received signal SL1.
  • The second TOF sensor 22D-2 receives light through the filter 22C-2 and outputs the received signal SL2.
  • The first TOF sensor 22D-1 and the second TOF sensor 22D-2 each include a two-dimensional imaging element in which light receiving elements (pixel cells) are arranged two-dimensionally, and output the received signals SL1 and SL2 in units of pixels (pixel cells).
  • FIG. 5 is an explanatory view of a second aspect of the light receiving unit.
  • In the second aspect, a single TOF sensor 25 is provided. The TOF sensor 25 includes a lens array unit 25A in which a lens LS is arranged corresponding to each of the pixel cells C1 and C2, a filter unit 25B in which filters FL1 corresponding to the pixel cells C1 and filters FL2 corresponding to the pixel cells C2 are arranged alternately, and a light receiving unit 25C in which light receiving cells PC are arranged so as to correspond to the filters FL1 and FL2.
  • FIG. 6 is an explanatory diagram of pixel interpolation.
  • The TOF sensor 25 may further include a signal interpolation unit. For a position occupied by a pixel cell C2, the output that a pixel cell C1 would produce at that position (a virtual pixel cell C1) may be interpolated based on the outputs of the pixel cells C1 located around that position. The same applies to the interpolation of the pixel cell C2.
  • Specifically, the output of the virtual pixel cell C1 corresponding to the arrangement position of a pixel cell C2 may be set to the average value of the outputs of the four pixel cells C1 located adjacent to that light receiving cell PC.
  • When the virtual pixel cell C1 is located at one of the four corners of the TOF sensor 25, the average of the outputs of the two adjacent pixel cells C1 may be used; when it is located on the periphery excluding the four corners, the average of the outputs of the three adjacent pixel cells C1 may be used.
  • Alternatively, the signal processing unit 23 may perform the same processing.
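  • The following is a sketch of this neighbor-averaging interpolation, assuming the mosaic is stored in a single 2-D array raw with a boolean mask is_c1 marking the C1 positions (both names are illustrative, not from this publication):

```python
import numpy as np

def interpolate_virtual_c1(raw, is_c1):
    """Fill each C2 position with the mean of its in-bounds C1 neighbors
    (4 in the interior, 3 on non-corner edges, 2 at the corners)."""
    h, w = raw.shape
    out = raw.copy()
    for y in range(h):
        for x in range(w):
            if is_c1[y, x]:
                continue  # a real C1 sample; nothing to interpolate
            neighbors = [raw[ny, nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and is_c1[ny, nx]]
            out[y, x] = sum(neighbors) / len(neighbors)
    return out
```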
  • FIG. 7 is an explanatory diagram of an example of a functional block of the signal processing unit of the first embodiment.
  • The signal processing unit 23 roughly comprises a first RAW image storage unit 30-1, a second RAW image storage unit 30-2, a first reflection intensity calculation unit 31-1, a second reflection intensity calculation unit 31-2, an intensity signal difference calculation unit 32, a selection signal generation unit 33, a first I_Q signal calculation unit 34-1, a second I_Q signal calculation unit 34-2, a selection unit 35, and a depth conversion unit 36.
  • The intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and the reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 33:
  • P = (C850 − C940) / (C850 + C940)
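  • A per-pixel version of this equation, as a hedged sketch; c850/c940 are float arrays of reflection intensities, and the guard term eps and the threshold default are assumptions:

```python
import numpy as np

def selection_signal(c850, c940, th=0.5, eps=1e-12):
    """Normalized difference P per pixel; True means select the 940 nm I/Q pair."""
    p = (c850 - c940) / (c850 + c940 + eps)  # the equation above, guarded against /0
    return p > th
```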
  • The first I_Q signal calculation unit 34-1 calculates, for each pixel, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850 based on the RAW image data RAW850, and outputs them to the selection unit 35.
  • Similarly, the second I_Q signal calculation unit 34-2 calculates, for each pixel, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940 based on the RAW image data RAW940, and outputs them to the selection unit 35.
  • The selection signal generation unit 33 compares the difference P with a predetermined threshold value th. When P ≤ th, the influence of sunlight is judged to be small, and the selection signal sel for selecting the first in-phase component signal I850 and the first orthogonal component signal Q850 is output to the selection unit 35.
  • When P > th, the influence of sunlight is judged to be large, and the selection signal sel for selecting the second in-phase component signal I940 and the second orthogonal component signal Q940, which are less affected by sunlight, is output to the selection unit 35.
  • The predetermined threshold value th is set to a value sufficiently larger than the difference P that can occur when the influence of sunlight is small, and sufficiently smaller than the difference P that can occur when the influence of sunlight is large.
  • Based on the selection signal sel, the selection unit 35 outputs either the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, or the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, to the depth conversion unit 36.
  • The depth conversion unit 36 calculates and outputs, for each pixel, the depth signal SDP corresponding to the distance to the object OBJ, based on whichever combination of in-phase and orthogonal component signals was selected.
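  • A sketch of this depth conversion under the standard indirect-ToF relation; the modulation frequency f_mod is an assumed parameter, since the publication does not give one:

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light, m/s

def depth_from_iq(i, q, f_mod=20e6):
    """Convert a selected I/Q pair to per-pixel depth in meters."""
    phase = np.arctan2(q, i) % (2.0 * np.pi)        # modulation phase in [0, 2*pi)
    return C_LIGHT * phase / (4.0 * np.pi * f_mod)  # depth = c * phase / (4 * pi * f_mod)
```

  • Note that with this relation the unambiguous range is c / (2·f_mod) (about 7.5 m at the assumed 20 MHz); it depends only on the modulation frequency, not on which carrier wavelength was selected.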
  • FIG. 8 is an overall processing flowchart of the embodiment.
  • When the process is started, the object OBJ is imaged, and the depth measurement process of measuring the depth (the distance to the object OBJ) (step S11) and the visible image acquisition process (step S12) are performed in parallel.
  • Subsequently, an image processing process (step S13) of generating a depth image by applying the measured depth to the visible image is performed, and the process ends.
  • FIG. 9 is a processing flowchart of the depth measurement process of the first embodiment.
  • When the process is started, the light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2. The light receiving unit 22 then generates the RAW image data RAW850 and the RAW image data RAW940 and outputs them to the signal processing unit 23.
  • The signal processing unit 23 acquires the RAW image data RAW850 (step S21) and outputs it to the first reflection intensity calculation unit 31-1 and the first I_Q signal calculation unit 34-1. Similarly, the signal processing unit 23 acquires the RAW image data RAW940 (step S22) and outputs it to the second reflection intensity calculation unit 31-2 and the second I_Q signal calculation unit 34-2.
  • The signal processing unit 23 also stores the acquired RAW image data RAW850 and RAW image data RAW940 in a work memory (not shown).
  • The first reflection intensity calculation unit 31-1 calculates the reflection intensity signal C850 for each pixel based on the acquired RAW image data RAW850 and outputs it to the intensity signal difference calculation unit 32 (step S23).
  • The second reflection intensity calculation unit 31-2 calculates the reflection intensity signal C940 for each pixel based on the acquired RAW image data RAW940 and outputs it to the intensity signal difference calculation unit 32 (step S24).
  • The intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and the reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 33 (step S25).
  • P = (C850 − C940) / (C850 + C940)
  • The first I_Q signal calculation unit 34-1 calculates, for each pixel, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850 based on the RAW image data RAW850, and outputs them to the selection unit 35 (step S26).
  • The second I_Q signal calculation unit 34-2 calculates, for each pixel, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940 based on the RAW image data RAW940, and outputs them to the selection unit 35 (step S27).
  • The selection signal generation unit 33 then compares the difference P with the predetermined threshold value th.
  • When P ≤ th, the influence of sunlight is judged to be small, and the selection signal sel for selecting the first in-phase component signal I850 and the first orthogonal component signal Q850 is output to the selection unit 35.
  • When P > th, the influence of sunlight is judged to be large, and the selection signal sel for selecting the second in-phase component signal I940 and the second orthogonal component signal Q940, which are less affected by sunlight, is output to the selection unit 35.
  • Based on the selection signal sel, the selection unit 35 outputs either the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, or the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, to the depth conversion unit 36 (step S29).
  • The depth conversion unit 36 calculates and outputs, for each pixel, the depth signal SDP corresponding to the distance to the object OBJ, based on whichever combination of in-phase and orthogonal component signals was selected (step S30).
  • As described above, in the first embodiment it is determined for each pixel whether the pixel is affected by sunlight. When a pixel is determined to be little affected by sunlight, the depth signal SDP is calculated based on the first in-phase component signal I850 and the first orthogonal component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, which suffers less scattering. When a pixel is determined to be greatly affected by sunlight, the depth signal SDP is calculated based on the second in-phase component signal I940 and the second orthogonal component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, which is less affected by sunlight. The obtained depth image is therefore highly accurate.
  • In the first embodiment, the depth signal SDP was generated by selecting, based on the difference in reflection intensity, either the first in-phase and orthogonal component signals or the second in-phase and orthogonal component signals before depth conversion. In the second embodiment, depth signals are generated from both the first in-phase component signal I850 and first orthogonal component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, and the second in-phase component signal I940 and second orthogonal component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, and either depth signal is selected as the depth signal SDP based on a region determination feature amount.
  • FIG. 10 is an explanatory diagram of an example of a functional block of the signal processing unit of the second embodiment. Since the overall configuration is the same as that of the first embodiment, it will be described with reference to FIG. 3.
  • The signal processing unit 23 in the second embodiment roughly comprises a first RAW image storage unit 40-1, a second RAW image storage unit 40-2, a first I_Q signal calculation unit 41-1, a second I_Q signal calculation unit 41-2, a first depth conversion unit 42-1, a second depth conversion unit 42-2, a first region determination feature amount calculation unit 43-1, a second region determination feature amount calculation unit 43-2, a comparison unit 44, and a selection unit 45.
  • The first I_Q signal calculation unit 41-1 calculates the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850 and outputs them to the first depth conversion unit 42-1.
  • The second I_Q signal calculation unit 41-2 calculates the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940 and outputs them to the second depth conversion unit 42-2.
  • The first depth conversion unit 42-1 calculates, for each pixel, the first depth signal DP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, and outputs it to the selection unit 45.
  • The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal DP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, and outputs it to the selection unit 45.
  • Based on the first depth image corresponding to the first depth signal DP1, the first region determination feature amount calculation unit 43-1 calculates the edge depiction degree ED1 in the first depth image (edge degree: the degree to which the signal is buried in noise) and the SN ratio σ1 of the flat portion of the first depth image, and outputs them to the comparison unit 44.
  • Similarly, based on the second depth image corresponding to the second depth signal DP2, the second region determination feature amount calculation unit 43-2 calculates the edge depiction degree ED2 in the second depth image (edge degree: the degree to which the signal is buried in noise) and the SN ratio σ2 of the flat portion of the second depth image, and outputs them to the comparison unit 44.
  • The comparison unit 44 determines the reliability of the edge degree and the flat-portion SN ratio for each of the first depth image GDP1 and the second depth image GDP2, and as a result outputs to the selection unit 45 a selection signal sel for selecting the more reliable of the first depth image GDP1 and the second depth image GDP2.
  • The selection unit 45 outputs either the first depth signal DP1 or the second depth signal DP2 as the depth signal DP for each pixel, based on the selection signal sel.
  • FIG. 11 is a processing flowchart of the depth measurement process of the second embodiment.
  • When the process is started, the light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2, generates the RAW image data RAW850 and the RAW image data RAW940, and outputs the RAW image data RAW850 to the first I_Q signal calculation unit 41-1 and the RAW image data RAW940 to the second I_Q signal calculation unit 41-2.
  • The first RAW image storage unit 40-1 of the signal processing unit 23 acquires the RAW image data RAW850 (step S41).
  • The first I_Q signal calculation unit 41-1 calculates, for each pixel, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850 based on the RAW image data RAW850 read from the first RAW image storage unit 40-1, and outputs them to the first depth conversion unit 42-1 (step S42).
  • The first depth conversion unit 42-1 calculates, for each pixel, the first depth signal SDP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, and outputs it to the first region determination feature amount calculation unit 43-1 and the selection unit 45 (step S43).
  • As a result, the first region determination feature amount calculation unit 43-1 calculates the reliability (step S44).
  • FIG. 12 is a processing flowchart of the reliability calculation process.
  • Based on the first depth image corresponding to the first depth signal DP1, the first region determination feature amount calculation unit 43-1 calculates the edge depiction degree in the first depth image (edge extraction degree: the degree to which the signal is buried in noise) (step S51), and calculates the reliability E with respect to the edge extraction degree (step S52).
  • Subsequently, the first region determination feature amount calculation unit 43-1 calculates the variance σ of the SN ratio of the flat portion of the first depth image (step S53), and calculates the reliability F with respect to the variance σ (step S54).
  • Then, the first region determination feature amount calculation unit 43-1 integrates the two reliabilities E and F and outputs the integrated reliability RL1 to the comparison unit 44 (step S55).
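  • A rough sketch of this reliability calculation: the publication names the edge-extraction reliability E, the flat-portion reliability F, and their integration, but gives no formulas, so the measures below are illustrative assumptions only:

```python
import numpy as np

def integrated_reliability(depth, flat_mask):
    """Combine an edge measure E and a flat-region noise measure F."""
    gy, gx = np.gradient(depth.astype(float))
    e = float(np.mean(np.hypot(gx, gy)))     # edge depiction degree (assumed: clearer edges, larger E)
    sigma = float(np.var(depth[flat_mask]))  # variance over a region assumed to be flat
    f = 1.0 / (1.0 + sigma)                  # noisier flat portion, lower F
    return e * f                             # one possible way to integrate E and F
```

  • The comparison unit's decision then reduces to evaluating this for both depth images and selecting the signal with the larger integrated reliability.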
  • In parallel, the second RAW image storage unit 40-2 acquires the RAW image data RAW940 (step S45).
  • The second I_Q signal calculation unit 41-2 calculates, for each pixel, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940 based on the RAW image data RAW940, and outputs them to the second depth conversion unit 42-2 (step S46).
  • The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal SDP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, and outputs it to the second region determination feature amount calculation unit 43-2 and the selection unit 45 (step S47). The second region determination feature amount calculation unit 43-2 then calculates the reliability (step S48).
  • That is, the second region determination feature amount calculation unit 43-2 calculates, from the second depth image corresponding to the second depth signal SDP2, the edge depiction degree in the second depth image (edge extraction degree: the degree to which the signal is buried in noise) (step S51) and the reliability E with respect to the edge extraction degree (step S52). It then calculates the variance σ of the SN ratio of the flat portion of the second depth image (step S53) and the reliability F with respect to the variance σ (step S54). Subsequently, both reliabilities E and F are integrated, and the integrated reliability RL2 is output to the comparison unit 44 (step S55).
  • Based on the integrated reliability RL1 output by the first region determination feature amount calculation unit 43-1 and the integrated reliability RL2 output by the second region determination feature amount calculation unit 43-2, the comparison unit 44 determines which of the two is higher (step S49). As a result of the determination, the comparison unit 44 outputs to the selection unit 45 a selection signal sel for selecting the more reliable of the depth signal SDP1 and the depth signal SDP2.
  • The selection unit 45 outputs either the first depth signal SDP1 or the second depth signal SDP2 as the depth signal SDP for each pixel, based on the selection signal sel (step S50).
  • In the second embodiment, whether or not each pixel is affected by sunlight is determined based on the reliability of the depth images. When a pixel is determined to be little affected by sunlight, the depth signal SDP is calculated based on the first in-phase component signal I850 and the first orthogonal component signal Q850 corresponding to the first ranging light L1 of the first wavelength λ1, which suffers less scattering. When a pixel is determined to be greatly affected by sunlight, a more reliable depth signal SDP is calculated based on the second in-phase component signal I940 and the second orthogonal component signal Q940 corresponding to the second ranging light L2 of the second wavelength λ2, which is less affected by sunlight. Therefore, according to the second embodiment, the obtained depth image has even higher accuracy.
  • FIG. 13 is an explanatory diagram of an example of a functional block of the signal processing unit of the third embodiment.
  • The signal processing unit 23 in the third embodiment roughly comprises a first RAW image storage unit 40-1, a second RAW image storage unit 40-2, a first I_Q signal calculation unit 41-1, a second I_Q signal calculation unit 41-2, a first depth conversion unit 42-1, a second depth conversion unit 42-2, a first region determination feature amount calculation unit 43-1, a second region determination feature amount calculation unit 43-2, a first reflection intensity calculation unit 31-1, a second reflection intensity calculation unit 31-2, an intensity signal difference calculation unit 32, a selection signal generation unit 51, and a selection unit 45.
  • When the process is started, the light receiving unit 22 receives the reflected light of the first ranging light L1 and the reflected light of the second ranging light L2, generates the RAW image data RAW850 and the RAW image data RAW940, and outputs the RAW image data RAW850 to the first I_Q signal calculation unit 41-1 and the RAW image data RAW940 to the second I_Q signal calculation unit 41-2.
  • The first RAW image storage unit 40-1 of the signal processing unit 23 acquires the RAW image data RAW850.
  • The first I_Q signal calculation unit 41-1 calculates, for each pixel, the first in-phase component signal I850 and the corresponding first orthogonal component signal Q850 based on the RAW image data RAW850 read from the first RAW image storage unit 40-1, and outputs them to the first depth conversion unit 42-1.
  • The first depth conversion unit 42-1 calculates, for each pixel, the first depth signal SDP1 corresponding to the distance to the object OBJ based on the combination of the first in-phase component signal I850 and the first orthogonal component signal Q850, and outputs it to the first region determination feature amount calculation unit 43-1 and the selection unit 45.
  • As a result, the first region determination feature amount calculation unit 43-1 calculates the reliability and outputs the integrated reliability RL1 to the selection signal generation unit 51.
  • In parallel, the second RAW image storage unit 40-2 acquires the RAW image data RAW940.
  • The second I_Q signal calculation unit 41-2 calculates, for each pixel, the second in-phase component signal I940 and the corresponding second orthogonal component signal Q940 based on the RAW image data RAW940, and outputs them to the second depth conversion unit 42-2.
  • The second depth conversion unit 42-2 calculates, for each pixel, the second depth signal SDP2 corresponding to the distance to the object OBJ based on the combination of the second in-phase component signal I940 and the second orthogonal component signal Q940, and outputs it to the second region determination feature amount calculation unit 43-2 and the selection unit 45. The second region determination feature amount calculation unit 43-2 then calculates the reliability and outputs the integrated reliability RL2 to the selection signal generation unit 51.
  • The intensity signal difference calculation unit 32 calculates the difference P between the input reflection intensity signal C850 and the reflection intensity signal C940 by the following equation and outputs it to the selection signal generation unit 51:
  • P = (C850 − C940) / (C850 + C940)
  • The selection signal generation unit 51 determines which of the depth signal SDP1 and the depth signal SDP2 should be selected, based on the integrated reliability RL1 output by the first region determination feature amount calculation unit 43-1, the integrated reliability RL2 output by the second region determination feature amount calculation unit 43-2, and the difference P, and outputs the selection signal sel to the selection unit 45.
  • Specifically, when the selection based on the comparison of the integrated reliability RL1 and the integrated reliability RL2 agrees with the selection based on the difference P, that selection result is adopted.
  • When the two disagree and the difference between the integrated reliability RL1 and the integrated reliability RL2 is small, the selection result based on the difference P is adopted. Further, when the difference between the integrated reliability RL1 and the integrated reliability RL2 is small and the difference P is also small, the result is considered not to differ much either way, so one is selected arbitrarily (for example, the depth signal SDP1 is preset to be selected as the depth signal SDP).
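  • A sketch of this arbitration for one pixel or region; th is the threshold of the first embodiment, and the "small difference" bounds rl_eps and p_eps are assumptions, since the publication does not quantify "small":

```python
def choose_depth_signal(rl1, rl2, p, th, rl_eps=0.05, p_eps=0.05):
    """Return 1 to select SDP1 (850 nm) or 2 to select SDP2 (940 nm)."""
    by_reliability = 1 if rl1 >= rl2 else 2  # selection by integrated reliabilities
    by_difference = 1 if p <= th else 2      # selection by difference P (first embodiment rule)
    if by_reliability == by_difference:
        return by_reliability                # the two criteria agree
    if abs(rl1 - rl2) < rl_eps and abs(p) < p_eps:
        return 1                             # neither criterion decisive: preset default SDP1
    if abs(rl1 - rl2) < rl_eps:
        return by_difference                 # reliabilities too close to call: trust P
    return by_reliability                    # otherwise trust the reliability comparison
```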
  • The selection unit 45 outputs either the first depth signal SDP1 or the second depth signal SDP2 as the depth signal SDP for each pixel, based on the selection signal sel.
  • In the embodiments described above, the case was described where the first ranging light L1 (for example, wavelength 850 nm) is a component contained abundantly in sunlight and the second ranging light L2 (for example, wavelength 940 nm) is a component whose spectral radiant intensity in the sunlight spectrum is relatively low, that is, a component not contained much in sunlight.
  • The fourth embodiment instead uses a first ranging light L1 in a wavelength band of high reflectance and a second ranging light L2 in a wavelength band of low reflectance, the bands being specific to the type of object to be measured.
  • That is, the first ranging light L1 and the second ranging light L2 are set so that the reflectance of the ranging object at the first wavelength is significantly higher than its reflectance at the second wavelength.
  • This exploits the fact that the reflection intensity in each wavelength band differs depending on the type of substance being measured (for example, plants, soil, or water).
  • Accordingly, a region where the difference between the reflected signal in the high-reflectance wavelength band and the reflected signal in the low-reflectance wavelength band is large can be regarded as a region where the object to be measured exists.
  • FIG. 14 is an explanatory diagram of the fourth embodiment.
  • FIG. 14 shows the relationship between wavelength and reflection intensity for plants, soil, water, the ground surface, and the water surface as measurement objects.
  • For example, for a plant, take the difference between the reflected signal near 800 nm, where the reflectance of the plant is high (corresponding to the reflectance of the first ranging light L1), and the reflected signal near 500 nm, where the reflectance of the plant is low (corresponding to the reflectance of the second ranging light L2). If the difference is large, it can be determined that a plant exists in the region.
  • In that case, the distance (depth) to the plant can be obtained stably by selecting the reflected signal near 800 nm, where the reflectance of the plant is high.
  • Similarly, for water, take the difference between the reflected signal near 400 nm, where the reflectance of water is high (corresponding to the reflectance of the first ranging light L1), and the reflected signal near 800 nm, where the reflectance of water is low (corresponding to the reflectance of the second ranging light L2). If the difference is large, it can be determined that water exists in the region.
  • In that case, the distance (depth) to the water surface can be obtained stably by selecting the reflected signal near 400 nm, where the reflectance of water is high.
  • Thus, by setting the first wavelength and the second wavelength according to the object to be measured, it is possible to accurately determine the presence or absence of the measurement object and, if it exists, the distance (depth) to it.
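  • The presence test above amounts to a normalized band difference (for plants compared between roughly 800 nm and the visible bands it is NDVI-like). A sketch, with band choices following FIG. 14 and the threshold as an assumption:

```python
import numpy as np

def target_present(r_high, r_low, threshold=0.3, eps=1e-12):
    """True where the high-reflectance band dominates, i.e. where the target
    material (plant, water, etc.) is judged to exist."""
    index = (r_high - r_low) / (r_high + r_low + eps)
    return index > threshold
```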
  • In the above description, the case was described where the ranging light L comprises the first ranging light L1 (for example, wavelength 850 nm), a component contained abundantly in sunlight, and the second ranging light L2 (for example, wavelength 940 nm), a component not contained much in sunlight. However, the wavelengths are not limited to these, and any combination of wavelengths that can eliminate the influence of sunlight can be applied as appropriate.
  • The present technology can also adopt the following configurations.
  • (1) An image processing device comprising: a difference calculation unit that calculates, based on a received signal of the reflected light of a first ranging light of a first wavelength and a received signal of the reflected light of a second ranging light of a second wavelength, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and an output unit that, based on the difference, outputs a depth signal corresponding to the in-phase component signal and the orthogonal component signal of either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light.
  • (2) The image processing device according to (1), wherein the output unit includes: a selection unit that, based on the difference, selects and outputs either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light; a calculation unit that calculates an in-phase component signal and an orthogonal component signal from the received signal selected by the selection unit; and a conversion unit that performs depth conversion on the calculated in-phase component signal and orthogonal component signal and outputs the depth signal.
  • (3) The image processing device according to (1), wherein the output unit includes: a calculation unit that calculates an in-phase component signal and an orthogonal component signal from each of the received signal of the reflected light of the first ranging light and the received signal of the reflected light of the second ranging light; a conversion unit that performs depth conversion on the in-phase and orthogonal component signals calculated from the received signal of the reflected light of the first ranging light to output a first depth signal, and on the in-phase and orthogonal component signals calculated from the received signal of the reflected light of the second ranging light to output a second depth signal; and a selection unit that, based on the difference, selects and outputs either the first depth signal or the second depth signal as the depth signal.
  • (4) The image processing device according to any one of (1) to (3), wherein the first ranging light has a relatively high spectral radiant intensity in the sunlight spectrum, the second ranging light has a relatively low spectral radiant intensity, and the second wavelength is longer than the first wavelength.
  • (5) The image processing device according to (4), wherein the first wavelength has a center wavelength of 850 nm and the second wavelength has a center wavelength of 940 nm.
  • (6) The image processing device according to any one of (1) to (4), wherein the first wavelength and the second wavelength are set so that the reflectance of the ranging object at the first wavelength is significantly higher than its reflectance at the second wavelength.
  • (7) A method performed by an image processing device, comprising: a process of calculating, based on a received signal of the reflected light of a first ranging light of a first wavelength and a received signal of the reflected light of a second ranging light of a second wavelength, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and a process of outputting, based on the difference, a depth signal corresponding to the in-phase component signal and the orthogonal component signal of either the received signal of the reflected light of the first ranging light or the received signal of the reflected light of the second ranging light.
  • (8) Electronic equipment comprising: an irradiation unit that irradiates a first ranging light of a first wavelength and a second ranging light of a second wavelength; an image pickup unit that receives the reflected light of the first ranging light and the reflected light of the second ranging light and outputs a received signal of each; a difference calculation unit that calculates, based on the received signals, the difference between the reflection intensity of the first ranging light and the reflection intensity of the second ranging light at the same ranging position; and an output unit that, based on the difference, outputs a depth signal corresponding to the in-phase component signal and the orthogonal component signal of either received signal.
  • Reference signs: 10 image processing device; 11 difference calculation unit; 12 output unit; 20 image processing device; 21 irradiation unit; 22 light receiving unit; 22A light receiving lens; 22B beam splitter; 22C-1, 22C-2 filters; 22D-1 first TOF sensor; 22D-2 second TOF sensor; 23 signal processing unit; 25 TOF sensor; 25A lens array unit; 25B filter unit; 25C light receiving unit; 30-1 first RAW image storage unit; 30-2 second RAW image storage unit; 31-1 first reflection intensity calculation unit; 31-2 second reflection intensity calculation unit; 32 intensity signal difference calculation unit; 33 selection signal generation unit; 34-1, 41-1 first I_Q signal calculation unit; 34-2, 41-2 second I_Q signal calculation unit; 35 selection unit; 36 depth conversion unit; 40-1 first RAW image storage unit; 40-2 second RAW image storage unit; 42-1 first depth conversion unit; 42-2 second depth conversion unit; 43-1 first region determination feature amount calculation unit; 43-2 second region determination feature amount calculation unit; 44 comparison unit; 45 selection unit; 51 selection signal generation unit; C1, C2 pixel cells; C850, C940 reflection intensity signals

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of Optical Distance (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention relates to an image processing device (10) comprising: a difference calculation unit (11) which, based on a light reception signal relating to reflected light of a first ranging light (L1) having a first wavelength (λ1) and a light reception signal relating to reflected light of a second ranging light (L2) having a second wavelength (λ2), calculates the difference (d) between the reflection intensity of the first ranging light (L1) and the reflection intensity of the second ranging light (L2) at the same ranging position; and an output unit (12) which, based on the difference (d), outputs a depth signal (SDP) based on the in-phase component signals (I1, I2) and the orthogonal component signals (Q1, Q2) corresponding to the light reception signal (SL1) relating to the reflected light of the first ranging light (L1) or to the light reception signal (SL2) relating to the reflected light of the second ranging light (L2) at each ranging position.
PCT/JP2020/019368 2019-05-22 2020-05-14 Image processing device, method, and electronic apparatus WO2020235458A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-096216 2019-05-22
JP2019096216 2019-05-22

Publications (1)

Publication Number Publication Date
WO2020235458A1 true WO2020235458A1 (fr) 2020-11-26

Family

ID=73458897

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/019368 WO2020235458A1 (fr) Image processing device, method, and electronic apparatus

Country Status (1)

Country Link
WO (1) WO2020235458A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023119797A1 (fr) * 2021-12-23 2023-06-29 株式会社Jvcケンウッド Imaging device and imaging method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003014430A (ja) * 2001-07-03 2003-01-15 Minolta Co Ltd Three-dimensional measurement method and three-dimensional measurement apparatus
JP2008032427A (ja) * 2006-07-26 2008-02-14 Fujifilm Corp Distance image creation method, distance image sensor, and imaging apparatus
JP2008175538A (ja) * 2007-01-16 2008-07-31 Fujifilm Corp Imaging apparatus, method, and program
US20110032508A1 (en) * 2009-08-06 2011-02-10 Irvine Sensors Corporation Phase sensing and scanning time of flight LADAR using atmospheric absorption bands
US20110158481A1 (en) * 2009-12-30 2011-06-30 Hon Hai Precision Industry Co., Ltd. Distance measuring system
US20170234977A1 (en) * 2016-02-17 2017-08-17 Electronics And Telecommunications Research Institute Lidar system and multiple detection signal processing method thereof
WO2018104464A1 (fr) * 2016-12-07 2018-06-14 Sony Semiconductor Solutions Corporation Apparatus and method
WO2019078074A1 (fr) * 2017-10-20 2019-04-25 Sony Semiconductor Solutions Corporation Depth image acquisition apparatus, control method, and depth image acquisition system


Similar Documents

Publication Publication Date Title
JP5448617B2 (ja) Distance estimation device, distance estimation method, program, integrated circuit, and camera
JP7086001B2 (ja) Adaptive lidar receiver
JP6863342B2 (ja) Optical distance measuring device
KR102561099B1 (ko) Time-of-flight (ToF) imaging apparatus and method for reducing depth distortion caused by multiple reflections
US7511801B1 (en) Method and system for automatic gain control of sensors in time-of-flight systems
US10670719B2 (en) Light detection system having multiple lens-receiver units
US9258548B2 (en) Apparatus and method for generating depth image
KR102112298B1 (ko) Method and apparatus for generating a color image and a depth image
KR101145132B1 (ko) Three-dimensional imaging pulsed laser radar system and automatic focusing method for the system
KR20230003089A (ko) Lidar system with fog detection and adaptive response
JP2020020612A (ja) Distance measuring device, distance measuring method, program, and moving body
EP3798670B1 (fr) Distance measuring device and distance measuring method using the same
WO2020235458A1 (fr) Image processing device, method, and electronic apparatus
CN111045030B (zh) Depth measurement device and method
US8805075B2 (en) Method and apparatus for identifying a vibrometry spectrum in imaging applications
CN111562588A (zh) Method, apparatus and computer program for detecting the presence of atmospheric particles
JP6135871B2 (ja) Light source position detection device, light source tracking device, control method, and program
JP2024153778A (ja) Signal processing device
CN115248440A (zh) TOF depth camera based on dot-matrix light projection
US10845469B2 (en) Laser scanning devices and methods for extended range depth mapping
KR20150133086A (ko) Depth image acquisition method and image acquisition device
US20230243974A1 (en) Method And Device For The Dynamic Extension of a Time-of-Flight Camera System
JP2004325202A (ja) Laser radar device
CN111445507B (zh) Data processing method for non-line-of-sight imaging
JP7262064B2 (ja) Ranging imaging system, ranging imaging method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20809749

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20809749

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP