WO2021059699A1 - Distance measurement device, distance measurement device control method, and electronic device - Google Patents
Distance measurement device, distance measurement device control method, and electronic device
- Publication number
- WO2021059699A1 (PCT application PCT/JP2020/027983)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- distance
- unit
- measuring device
- depth
- distance measuring
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/32—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
- G01S17/36—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
- G01B11/026—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- G01C3/06—Use of electric means to obtain final indication
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/491—Details of non-pulse systems
- G01S7/4912—Receivers
- G01S7/4913—Circuits for detection, sampling, integration or read-out
- G01S7/4914—Circuits for detection, sampling, integration or read-out of detector arrays, e.g. charge-transfer gates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/491—Details of non-pulse systems
- G01S7/493—Extracting wanted echo signals
Definitions
- the present disclosure relates to a distance measuring device, a control method of the distance measuring device, and an electronic device.
- As a distance measuring device (ranging device) that acquires distance information (distance image information) to a subject, there is a device (sensor) that uses the ToF (Time of Flight) method.
- The ToF method measures the distance to a subject (measurement object) by irradiating the subject with light from a light source and detecting the flight time of the light until the irradiated light is reflected by the subject and returns to the light detection unit.
- As one of the ToF methods, there is an indirect ToF method in which pulsed light of a predetermined cycle emitted from a light source is reflected by a subject, the phase at which the reflected light is received by the light detection unit is detected, and the distance to the subject is measured by deriving the light flight time from the phase difference between the light emission cycle and the light reception cycle (see, for example, Patent Document 1).
- In an indirect ToF type distance measuring device, the maximum distance that can be measured is determined according to the emission frequency (emission cycle) of the laser light emitted from the light source. When that maximum distance is exceeded, folding distortion called aliasing occurs. For an object at a distance where aliasing occurs (an alias distance), the actual measured distance becomes shorter than the original correct distance (true distance). As a result, the distance measurement result of an object at an alias distance is output as an erroneous distance measurement result, and a subsequent system that performs various controls based on that result performs erroneous control.
- An object of the present disclosure is to provide a distance measuring device capable of invalidating the distance measurement result of an object at an alias distance and outputting only the distance measurement result of an object whose correct distance (true distance) is measured, a control method of the distance measuring device, and an electronic device having the distance measuring device.
- The distance measuring device of the present disclosure for achieving the above object includes: a light detection unit that receives light from a subject; a depth calculation unit that calculates depth information of the subject based on the output of the light detection unit; and an artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, validates segments in which the number of pixels of each segment exceeds a predetermined threshold value, and invalidates segments at or below the predetermined threshold value.
- The control method of the distance measuring device of the present disclosure for achieving the above object is a method in which, in a distance measuring device including a light detection unit that receives light from a subject and a depth calculation unit that calculates depth information of the subject based on the output of the light detection unit, the image is divided into segments based on the depth information, segments in which the number of pixels of each segment exceeds a predetermined threshold value are validated, and segments at or below the predetermined threshold value are invalidated.
- The electronic device of the present disclosure for achieving the above object includes the distance measuring device having the above configuration.
- FIG. 1 is a conceptual diagram of a ToF type ranging system.
- FIG. 2 is a block diagram showing an example of the configuration of a ToF type distance measuring device to which the technique of the present disclosure is applied.
- FIG. 3 is a block diagram showing an example of the configuration of the photodetector in the distance measuring device.
- FIG. 4 is a circuit diagram showing an example of a pixel circuit configuration in the photodetector.
- FIG. 5 is a timing waveform diagram for explaining the calculation of the distance in the ToF type distance measuring device.
- FIG. 6 is a block diagram showing an example of the configuration of the distance image calculation unit of the distance measuring unit in the distance measuring device.
- FIG. 7 is a diagram showing the relationship between the true depth and the distance measurement depth when the laser drive frequency is 100 MHz.
- FIG. 8A is a diagram showing the size of the subject appearing in the image when the true depth is 1.0 m and the distance measurement depth is 1.0 m,
- FIG. 8B is a diagram showing the size of the subject appearing in the image when the true depth is 2.5 m and the distance measurement depth is 1.0 m.
- FIG. 9 is a flowchart showing an example of a process for solving an aliasing problem executed by the distance measuring device according to the embodiment of the present disclosure.
- FIG. 10 is a block diagram showing a modified example of the embodiment of the present disclosure.
- FIG. 11A is an external view of the smartphone according to the specific example 1 of the electronic device of the present disclosure as viewed from the front side
- FIG. 11B is an external view as viewed from the back side.
- FIG. 12A is an external view of the digital still camera according to the specific example 2 of the electronic device of the present disclosure as viewed from the front side,
- FIG. 12B is an external view as viewed from the back side.
- Hereinafter, embodiments for carrying out the technique of the present disclosure (hereinafter referred to as "embodiments") will be described in detail with reference to the drawings. The technique of the present disclosure is not limited to the embodiments, and the various numerical values and the like in the embodiments are examples. In the following description, the same reference numerals are used for the same elements or elements having the same function, and duplicate description is omitted. The explanation will be given in the following order.
- 1. Description of the distance measuring device and electronic device of the present disclosure, in general
- 2. ToF type ranging system
- 3. Distance measuring device to which the technology of the present disclosure is applied
- 3-1. System configuration
- 3-2. Configuration example of the photodetector
- 3-3. Pixel circuit configuration example
- 3-4. Configuration example of the distance image calculation unit
- 3-5. Aliasing problem
- 4. Embodiment of the present disclosure
- 4-1. Processing example for solving the aliasing problem
- 4-2. Threshold setting for the number of pixels in a segment
- 5. Modification example
- 6. Application example
- 7. Electronic device of the present disclosure
- 7-1. Specific example 1 (example of smartphone)
- 7-2. Specific example 2 (example of digital still camera)
- 8. Configurations that can be taken by the present disclosure
<Description of the distance measuring device and electronic device of the present disclosure, in general>
- In the distance measuring device and electronic device of the present disclosure, the artifact removal unit can be configured to label each segment and to create a component object for each label.
- The artifact removal unit can be configured to assign a different label to each segment by a process of attaching the same label to neighboring pixels whose depth information is continuous.
- The component object can be configured to hold the number of pixels of the segment and the depth average value.
- In the distance measuring device and electronic device of the present disclosure including the above preferred configurations, the artifact removal unit may be configured to change the predetermined threshold value according to the distance of the segment from the distance measuring device. Further, the artifact removal unit can be configured to set a larger threshold value for segments at relatively short distances from the distance measuring device and a smaller threshold value for segments at relatively long distances.
<ToF type ranging system>
- FIG. 1 is a conceptual diagram of a ToF type ranging system.
- In the distance measuring device 1 according to this example, the ToF method is adopted as the measuring method for measuring the distance to the subject 10, which is the measurement target.
- The ToF method is a method of measuring the time until the light emitted toward the subject 10 is reflected by the subject 10 and returns.
- To realize distance measurement by the ToF method, the distance measuring device 1 includes a light source 20 that emits the light irradiated toward the subject 10 (for example, laser light having a peak wavelength in the infrared wavelength region), and a light detection unit 30 that detects the reflected light that is reflected by the subject 10 and returns.
<Distance measuring device to which the technology of the present disclosure is applied>
[System configuration]
- FIG. 2 is a block diagram showing an example of the configuration of a ToF type distance measuring device to which the technique of the present disclosure is applied.
- The ranging device 1 according to the present application example (that is, the ranging device 1 of the present disclosure) includes, in addition to the light source 20 and the light detection unit 30, an AE (Automatic Exposure) control unit 40 that performs exposure control based on the signal values output by the light detection unit 30, and a distance measuring unit 50. The ToF type distance measuring device 1 according to this application example detects distance information for each pixel of the light detection unit 30, and can acquire a highly accurate distance image (Depth Map) in units of imaging frames.
- the distance measuring device 1 is an indirect ToF type distance measuring device (so-called indirect ToF type distance image sensor).
- The indirect ToF type ranging device 1 detects the phase at which the pulsed light of a predetermined cycle emitted from the light source 20, after being reflected by the measurement object (subject), is received by the light detection unit 30, and measures the distance to the measurement object by deriving the light flight time from the phase difference between the light emission cycle and the light reception cycle.
- the light source 20 irradiates light toward the object to be measured by repeating the on / off operation at a predetermined cycle under the control of the AE control unit 40.
- As the irradiation light of the light source 20, for example, near-infrared light in the vicinity of 850 nm is often used.
- the light detection unit 30 receives the light emitted from the light source 20 after being reflected by the object to be measured and returns, and detects the distance information for each pixel.
- the light detection unit 30 outputs the RAW image data of the current frame including the distance information detected for each pixel and the light emission / exposure setting information, and supplies the RAW image data to the AE control unit 40 and the distance measuring unit 50.
- the AE control unit 40 has a configuration including a next frame light emission / exposure condition calculation unit 41 and a next frame light emission / exposure control unit 42.
- the next frame emission / exposure condition calculation unit 41 calculates the emission / exposure condition of the next frame based on the RAW image data of the current frame supplied from the light detection unit 30 and the emission / exposure setting information.
- the light emission / exposure conditions of the next frame are the light emission time and light emission intensity of the light source 20 when acquiring the distance image of the next frame, and the exposure time of the photodetector 30.
- The next frame light emission / exposure control unit 42 controls the light emission time and light emission intensity of the light source 20 and the exposure time of the light detection unit 30 for the next frame, based on the light emission / exposure conditions of the next frame calculated by the next frame light emission / exposure condition calculation unit 41.
- the distance measuring unit 50 has a configuration including a distance image calculation unit 51 that calculates a distance image.
- The distance image calculation unit 51 calculates a distance image by performing computation using the RAW image data of the current frame, which includes the distance information detected for each pixel of the light detection unit 30, and outputs distance image information including the depth, which is the depth information of the subject, and the reliability value, which is the light receiving information of the light detection unit 30, to the outside of the distance measuring device 1.
- the distance image is, for example, an image in which a distance value (depth / depth value) based on the distance information detected for each pixel is reflected in each pixel.
- FIG. 3 is a block diagram showing an example of the configuration of the photodetector 30.
- the photodetector 30 has a laminated structure including a sensor chip 31 and a circuit chip 32 laminated to the sensor chip 31.
- the sensor chip 31 and the circuit chip 32 are electrically connected through a connecting portion (not shown) such as a via (VIA) or a Cu—Cu junction.
- FIG. 3 illustrates a state in which the wiring of the sensor chip 31 and the wiring of the circuit chip 32 are electrically connected via the above-mentioned connection portion.
- a pixel array unit 33 is formed on the sensor chip 31.
- the pixel array unit 33 includes a plurality of pixels 34 arranged in a matrix (array shape) in a two-dimensional grid pattern on the sensor chip 31.
- each of the plurality of pixels 34 receives incident light (for example, near-infrared light), performs photoelectric conversion, and outputs an analog pixel signal.
- Two vertical signal lines VSL1 and VSL2 are wired in the pixel array unit 33 for each pixel column. Assuming that the number of pixel columns of the pixel array unit 33 is M (M is an integer), a total of (2 × M) vertical signal lines VSL are wired in the pixel array unit 33.
- Each of the plurality of pixels 34 has a first tap A and a second tap B (details thereof will be described later).
- The vertical signal line VSL1 transmits an analog pixel signal AINP1 based on the charge of the first tap A of the pixels 34 of the corresponding pixel column.
- The vertical signal line VSL2 transmits an analog pixel signal AINP2 based on the charge of the second tap B of the pixels 34 of the corresponding pixel column.
- The analog pixel signals AINP1 and AINP2 will be described later.
- a row selection unit 35, a column signal processing unit 36, an output circuit unit 37, and a timing control unit 38 are arranged on the circuit chip 32.
- The row selection unit 35 drives each pixel 34 of the pixel array unit 33 in units of pixel rows and causes the pixels to output the pixel signals AINP1 and AINP2. Under the drive of the row selection unit 35, the analog pixel signals AINP1 and AINP2 output from the pixels 34 of the selected row are supplied to the column signal processing unit 36 through the two vertical signal lines VSL1 and VSL2.
- The column signal processing unit 36 has a configuration including, for example, a plurality of analog-to-digital converters (ADCs) 39 provided for each pixel column, corresponding to the pixel columns of the pixel array unit 33.
- The analog-to-digital converter 39 performs analog-to-digital conversion processing on the analog pixel signals AINP1 and AINP2 supplied through the vertical signal lines VSL1 and VSL2, and outputs the results to the output circuit unit 37.
- The output circuit unit 37 executes CDS (Correlated Double Sampling) processing and the like on the digitized pixel signals AINP1 and AINP2 output from the column signal processing unit 36, and outputs the results to the outside of the circuit chip 32.
- The timing control unit 38 generates various timing signals, clock signals, control signals, and the like, and controls the driving of the row selection unit 35, the column signal processing unit 36, the output circuit unit 37, and the like based on these signals.
- FIG. 4 is a circuit diagram showing an example of the circuit configuration of the pixel 34 in the photodetector 30.
- the pixel 34 has, for example, a photodiode 341 as a photoelectric conversion unit.
- The pixel 34 has a configuration including an overflow transistor 342, two transfer transistors 343 and 344, two reset transistors 345 and 346, two floating diffusion layers 347 and 348, two amplification transistors 349 and 350, and two selection transistors 351 and 352.
- the two floating diffusion layers 347 and 348 correspond to the first and second taps A and B (hereinafter, may be simply referred to as "tap A and B") shown in FIG. 3 described above.
- the photodiode 341 photoelectrically converts the received light to generate an electric charge.
- The photodiode 341 may have, for example, a back-illuminated pixel structure that captures light incident from the back surface side of the substrate.
- The pixel structure is not limited to the back-illuminated pixel structure; a front-illuminated pixel structure that captures light incident from the front surface side of the substrate can also be used.
- The overflow transistor 342 is connected between the cathode electrode of the photodiode 341 and the power supply line of the power supply voltage VDD, and has a function of resetting the photodiode 341. Specifically, the overflow transistor 342 becomes conductive in response to the overflow gate signal OFG supplied from the row selection unit 35, thereby sequentially discharging the charge of the photodiode 341 to the power supply line of the power supply voltage VDD.
- The two transfer transistors 343 and 344 are connected between the cathode electrode of the photodiode 341 and the two floating diffusion layers 347 and 348 (taps A and B), respectively. The transfer transistors 343 and 344 become conductive in response to the transfer signal TRG supplied from the row selection unit 35, thereby sequentially transferring the charge generated by the photodiode 341 to the floating diffusion layers 347 and 348, respectively.
- The floating diffusion layers 347 and 348 corresponding to the first and second taps A and B accumulate the charge transferred from the photodiode 341, convert it into voltage signals having voltage values corresponding to the amount of charge, and generate the analog pixel signals AINP1 and AINP2.
- The two reset transistors 345 and 346 are connected between the two floating diffusion layers 347 and 348 and the power supply line of the power supply voltage VDD, respectively. The reset transistors 345 and 346 become conductive in response to the reset signal RST supplied from the row selection unit 35, thereby extracting the charge from each of the floating diffusion layers 347 and 348 and initializing the charge amount.
- The two amplification transistors 349 and 350 are connected between the power supply line of the power supply voltage VDD and the two selection transistors 351 and 352, respectively, and each amplify the voltage signal converted from charge to voltage by the floating diffusion layer 347 or 348.
- The two selection transistors 351 and 352 are connected between the two amplification transistors 349 and 350 and the vertical signal lines VSL1 and VSL2, respectively. The selection transistors 351 and 352 become conductive in response to the selection signal SEL supplied from the row selection unit 35, thereby outputting the voltage signals amplified by the amplification transistors 349 and 350 to the two vertical signal lines VSL1 and VSL2 as the analog pixel signals AINP1 and AINP2.
- The two vertical signal lines VSL1 and VSL2 are connected, for each pixel column, to the input end of one analog-to-digital converter 39 in the column signal processing unit 36, and transmit the analog pixel signals AINP1 and AINP2 output from the pixels 34 to the analog-to-digital converter 39.
- The circuit configuration of the pixel 34 is not limited to the circuit configuration illustrated in FIG. 4 as long as it can generate the analog pixel signals AINP1 and AINP2 by photoelectric conversion.
- FIG. 5 is a timing waveform diagram for explaining the calculation of the distance in the ToF type distance measuring device 1.
- the light source 20 and the light detection unit 30 in the ToF type distance measuring device 1 operate at the timing shown in the timing waveform diagram of FIG.
- Under the control of the AE control unit 40, the light source 20 irradiates the measurement object with light for a predetermined period, for example, only during the period of the pulse emission time Tp.
- the irradiation light emitted from the light source 20 is reflected by the object to be measured and returned. This reflected light (active light) is received by the photodiode 341.
- The time from when the irradiation of the measurement object with the irradiation light is started until the photodiode 341 receives the reflected light, that is, the light flight time, corresponds to the distance from the distance measuring device 1 to the measurement object.
- The photodiode 341 receives the reflected light from the measurement object only during the period of the pulse emission time Tp from the time when the irradiation of the irradiation light is started. The light received by the photodiode 341 at this time includes, in addition to the reflected light (active light) that is reflected by the measurement object and returns, ambient light that is reflected and scattered by other objects, the atmosphere, and the like.
- The charge photoelectrically converted by the photodiode 341 is transferred to the tap A (floating diffusion layer 347) and accumulated. Then, a signal n0 having a voltage value corresponding to the amount of charge accumulated in the floating diffusion layer 347 is acquired from the tap A.
- Likewise, the charge photoelectrically converted by the photodiode 341 is transferred to the tap B (floating diffusion layer 348) and accumulated. Then, a signal n1 having a voltage value corresponding to the amount of charge accumulated in the floating diffusion layer 348 is acquired from the tap B.
- Tap A and tap B are driven with their accumulation timings 180 degrees out of phase (that is, with completely opposite phases), so that the signal n0 and the signal n1 are each acquired. Such driving is then repeated a plurality of times, and the signal n0 and the signal n1 are accumulated and integrated to acquire the accumulated signal N0 and the accumulated signal N1, respectively.
- The accumulated signal N0 and the accumulated signal N1 also include an ambient light component reflected and scattered by objects, the atmosphere, and the like. Therefore, in order to remove the influence of the ambient light component and leave only the reflected light (active light) component, a signal n2 based on the ambient light is likewise accumulated and integrated, and an accumulated signal N2 for the ambient light component is acquired.
- The distance D from the distance measuring device 1 to the measurement object can be calculated by arithmetic processing based on the accumulated signals N0, N1, and N2 described above. Here, D represents the distance from the distance measuring device 1 to the measurement object, c represents the speed of light, and Tp represents the pulse emission time.
- The distance image calculation unit 51 shown in FIG. 2 calculates the distance D from the distance measuring device 1 to the measurement object by arithmetic processing based on the above equations (1) and (2), using the accumulated signal N0 and the accumulated signal N1, which include the ambient light component, and the accumulated signal N2 for the ambient light component, which are output from the light detection unit 30, and outputs the result as distance image information.
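- The equations (1) and (2) referred to above are not reproduced in this text. As a non-authoritative sketch of what such a computation typically looks like, the snippet below assumes the widely used two-tap pulsed indirect-ToF relation, in which the round-trip delay is recovered from the ratio of the ambient-corrected accumulated tap signals; the function name and the exact formula are assumptions, not taken from the patent.

```python
C = 299_792_458.0  # speed of light [m/s]

def depth_from_taps(n0: float, n1: float, n2: float, t_p: float) -> float:
    """Estimate the distance D [m] from the accumulated tap signals.

    n0:  accumulated signal N0 of tap A (accumulation in phase with the pulse)
    n1:  accumulated signal N1 of tap B (accumulation 180 degrees out of phase)
    n2:  accumulated signal N2 for the ambient-light component
    t_p: pulse emission time Tp [s]
    """
    a = n0 - n2  # active-light (reflected) portion of tap A
    b = n1 - n2  # active-light (reflected) portion of tap B
    if a + b <= 0.0:
        raise ValueError("no active light detected")
    delay = t_p * b / (a + b)  # round-trip flight time of the pulse
    return 0.5 * C * delay     # halve it: the light travels out and back

# Example: with Tp = 10 ns, all active charge in tap A means zero delay,
# while an even split corresponds to D = c * Tp / 4, roughly 0.75 m.
print(depth_from_taps(100.0, 20.0, 20.0, 10e-9))  # 0.0
print(depth_from_taps(60.0, 60.0, 20.0, 10e-9))   # ~0.75
```

- Subtracting N2 from both taps before taking the ratio is what removes the ambient-light component, as described above.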
- As the distance image information, for example, image information colored with a shade corresponding to the distance D can be exemplified.
- In the above description, the calculated distance D is output as distance image information, but the calculated distance D may also be output as-is as distance information.
- FIG. 6 shows an example of the configuration of the distance image calculation unit 51 of the distance measuring unit 50 in the distance measuring device 1.
- the distance image calculation unit 51 according to this example has a configuration including a depth calculation unit 511, a calibration unit 512, a noise reduction unit 513, and an artifact removal unit 514.
- The depth calculation unit 511 uses the RAW image data given by the light detection unit 30 to calculate the depth information and the reliability value information from the phase difference with which the light emitted from the light source 20 and reflected by the subject reaches the light detection unit 30.
- the "depth” is the depth information (distance information) of the subject
- the "reliability value” is the light receiving information of the photodetector 30, and the light emitted from the light source 20 is the subject. It is the amount (degree) of reflected light that is reflected and returned to the photodetector 30.
- The calibration unit 512 performs calibration processing such as matching the phases of the light emitted from the light source 20 and the light incident on the light detection unit 30, as well as waveform correction and temperature correction.
- the noise reduction unit 513 is composed of, for example, a low-pass filter, and performs a process of reducing noise.
- The artifact removal unit 514 has various filter functions and performs processing to exclude, from the depth and reliability value information passed through the noise reduction unit 513, results whose distance measurement is incorrect or whose light receiving reliability value in the light detection unit 30 is low.
- In FIG. 6, a configuration in which the calibration unit 512, the noise reduction unit 513, and the artifact removal unit 514 are arranged in that order is illustrated, but the order of arrangement is arbitrary and may be changed.
- the maximum distance that can be measured is determined according to the emission frequency (emission cycle) of the laser light emitted from the light source 20.
- FIG. 7 shows the relationship between the true depth and the distance measurement depth when the laser drive frequency (emission frequency) of the light source 20 is 100 MHz.
- the maximum distance that can be measured is 1.5 m.
- In the indirect ToF method, the distance is measured based on the phase difference, and the phase returns to the same value after one full cycle. Therefore, when the distance to the subject exceeds the maximum measurable distance, aliasing (a folded-back distance) occurs. The aliasing problem is particularly likely to occur with highly reflective subjects having a large specular reflection component, such as metal and glass.
- For an object beyond this maximum distance, the actual measured distance (distance measurement depth) becomes shorter than the original correct distance (true depth).
- For example, the actual measured distance (distance measurement depth) of a subject at a true distance of 2.5 m is 1.0 m.
- the distance measurement result of the object at the alias distance is output from the distance measurement device 1 as an erroneous distance measurement result.
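- The folding behavior can be summarized numerically as follows. This is an illustrative sketch assuming the standard relation d_max = c / (2f) for the maximum unambiguous range of a continuous-wave indirect ToF sensor; it is not code from the patent.

```python
C = 299_792_458.0  # speed of light [m/s]

def max_unambiguous_range(emission_frequency_hz: float) -> float:
    """Maximum distance measurable without aliasing: d_max = c / (2 * f)."""
    return C / (2.0 * emission_frequency_hz)

def aliased_depth(true_depth_m: float, emission_frequency_hz: float) -> float:
    """Depth actually reported once the true depth exceeds d_max."""
    return true_depth_m % max_unambiguous_range(emission_frequency_hz)

# With the 100 MHz drive frequency of FIG. 7: d_max is about 1.5 m, and a
# subject at a true depth of 2.5 m is reported at about 1.0 m.
print(max_unambiguous_range(100e6))  # ~1.499 m
print(aliased_depth(2.5, 100e6))     # ~1.0 m
```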
- The size of the subject appearing in the image when the true depth is 1.0 m and the distance measurement depth is 1.0 m is shown in FIG. 8A, and the size of the subject appearing in the image when the true depth is 2.5 m and the distance measurement depth is 1.0 m is shown in FIG. 8B.
- FIG. 8B is an example in the case of a subject at an alias distance.
- In both cases, the measured distance (distance measurement depth) is the same, 1.0 m.
- Although the measured distance is the same 1.0 m, the size of the subject differs. That is, the size of the subject appearing in the image is smaller for the true depth of 2.5 m (FIG. 8B) than for the true depth of 1.0 m (FIG. 8A). This means that the size of the subject appearing in the image differs depending on the distance of the subject from the distance measuring device 1.
- First, the processor constituting the artifact removal unit 514 performs a process of dividing the image into segments (parts / objects) based on the depth information calculated by the depth calculation unit 511 of FIG. 6 (step S11 of the flowchart of FIG. 9).
- In this segmentation process, taking the cases of FIGS. 8A and 8B as an example, the image is divided into segments such as the large background portion and the round portion in the center.
- the processor performs a labeling process for assigning labels 1 to n (n is an integer of 2 or more) to each segment (step S12).
- Specifically, the image is scanned line by line, and the same label is attached to neighboring pixels whose depth information is continuous, so that a different label is assigned to each segment (part / object).
- the processor creates a component object for each label 1 to n (step S13).
- the "component object” is the number of pixels of the segment and the depth average value (distance average value).
- The threshold value for the number of pixels of a segment is determined in consideration of the distance of each segment. More specifically, the threshold value is set large for short distances, where the distance of the segment from the distance measuring device 1 is relatively short, and set small for relatively long distances.
- Whether a segment is treated as being at a short distance or a long distance is determined with a predetermined distance (for example, 1 m) as a reference.
- The threshold value for the number of pixels of a segment is not limited to the two patterns of short distance / long distance, and may be further subdivided into three or more patterns. Details of setting the threshold value for the number of pixels of a segment will be described later.
- Next, the processor determines whether or not the number of pixels exceeds the threshold value, starting with the segment of label 1 (step S16). When the processor determines that the number of pixels exceeds the threshold value (YES in S16), it validates the segment (step S17); when it determines that the number of pixels is equal to or less than the threshold value (NO in S16), it invalidates the segment (step S18).
- The processor then determines whether or not the validation / invalidation processing of the segments has been completed for all the labels 1 to n (step S19). If the processing has been completed (YES in S19), the series of processes for solving the aliasing problem described above ends. If the processing has not been completed for all the labels 1 to n (NO in S19), the processor returns to step S14, increments the label counter i, and thereafter repeats the processes of steps S15 to S19 until it determines that the validation / invalidation processing of the segments has been completed for all the labels 1 to n.
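- A minimal sketch of the segment validation flow of steps S11 to S19 is given below. It assumes a simple raster-scan flood fill in which neighboring pixels are merged when their depths differ by less than a continuity tolerance; the tolerance and the two per-distance thresholds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def remove_aliasing_artifacts(depth: np.ndarray,
                              continuity_tol: float = 0.05,
                              near_far_boundary_m: float = 1.0,
                              near_threshold_px: int = 500,
                              far_threshold_px: int = 100) -> np.ndarray:
    """Invalidate (set to NaN) segments whose pixel count is at or below the
    threshold chosen for their average distance."""
    h, w = depth.shape
    labels = np.zeros((h, w), dtype=np.int32)  # 0 = not yet labeled
    next_label = 1
    out = depth.copy()

    for y in range(h):
        for x in range(w):
            if labels[y, x] or np.isnan(depth[y, x]):
                continue
            # Steps S11/S12: grow one segment; the same label is attached to
            # neighboring pixels whose depth information is continuous.
            stack, pixels = [(y, x)], []
            labels[y, x] = next_label
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not labels[ny, nx]
                            and not np.isnan(depth[ny, nx])
                            and abs(depth[ny, nx] - depth[cy, cx]) < continuity_tol):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            # Step S13: the component object holds the pixel count and the
            # depth average value of the segment.
            count = len(pixels)
            mean_depth = float(np.mean([depth[p] for p in pixels]))
            # Steps S16-S18: a larger threshold for near segments, a smaller
            # one for far segments; invalidate segments at or below it.
            threshold = (near_threshold_px if mean_depth < near_far_boundary_m
                         else far_threshold_px)
            if count <= threshold:
                for p in pixels:
                    out[p] = np.nan
            next_label += 1
    return out
```

- In this sketch, invalidation is expressed by writing NaN into the output depth map; an actual implementation could equally flag the corresponding pixels in the reliability information.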
- As described above, in the distance measuring device 1 according to the present embodiment, the depth calculation unit 511 calculates the depth information of the subject based on the RAW image data given by the light detection unit 30, segmentation processing is performed based on the depth information, and segments whose number of pixels is equal to or less than the threshold value are invalidated.
- As a result, the aliasing problem, which is peculiar to indirect ToF type distance measuring devices, can be solved. That is, the distance measurement result of an object at an alias distance is not output from the distance measuring device 1 as an erroneous distance measurement result, and only the distance measurement results of objects for which the correct distance (true distance) is measured can be output.
- For setting the threshold value, the size of the subject (object) to be measured is defined, the number of pixels of the segment imaged at each distance is calculated from the focal length and pixel pitch of the light detection unit 30, and the calculated number of pixels can be used as the threshold value.
- For the subject to be measured, an object that is particularly prone to the aliasing problem is selected, such as metal or glass, which has a large specular reflection component and a high reflectance.
- The aliasing problem does not occur over the entire surface of such an object, but tends to occur in the vicinity of the region that directly faces the laser beam. Therefore, even for a large object (subject), the size of the region where the aliasing problem occurs may be used when setting the threshold value.
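- As a rough sketch of this threshold derivation, assuming a simple pinhole projection model (an object of size s at distance d images to about s·f/d on the sensor); the focal length, pixel pitch, and object size below are illustrative, not values from the patent.

```python
def segment_pixel_threshold(object_size_m: float,
                            distance_m: float,
                            focal_length_m: float,
                            pixel_pitch_m: float) -> int:
    """Expected pixel count of a square object of the given size at a distance,
    usable as the segment pixel-count threshold for that distance."""
    side_px = (object_size_m * focal_length_m / distance_m) / pixel_pitch_m
    return max(1, int(side_px * side_px))

# E.g., a 5 cm region prone to aliasing, f = 2 mm, 5 um pixel pitch:
print(segment_pixel_threshold(0.05, 0.5, 2e-3, 5e-6))  # near (0.5 m): 1600 px
print(segment_pixel_threshold(0.05, 2.0, 2e-3, 5e-6))  # far (2.0 m): 100 px
```

- Because the imaged pixel count falls off roughly as the square of the distance, this derivation naturally yields the larger near-distance threshold and the smaller far-distance threshold described above.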
- Although the technique of the present disclosure has been described above based on a preferred embodiment, the technique of the present disclosure is not limited to that embodiment.
- the configuration and structure of the distance measuring device described in each of the above embodiments are examples, and can be changed as appropriate.
- In the above embodiment, the threshold value for the number of pixels of a segment is determined based on the depth average value, but the present disclosure is not limited to this, and the following configuration can also be adopted.
- In the modified example shown in FIG. 10, an RGB camera 61 is used, and a threshold value is set in advance for each subject. The object recognition unit 62 recognizes the subject based on the imaging output of the RGB camera 61, and the parameter control unit 63 determines the preset threshold value based on the recognition result.
- The subject recognition may also be performed not on the basis of the output of the RGB camera 61 but on the basis of the final output of the distance measuring device 1, that is, the distance image information including the depth and reliability value information, or on the basis of both the output of the RGB camera 61 and the final output of the distance measuring device 1.
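- A minimal sketch of this modified example is shown below, with hypothetical subject-class names and threshold values; the recognition result itself is assumed to come from the object recognition unit 62.

```python
# Thresholds preset per recognized subject class (illustrative values).
PRESET_THRESHOLDS = {
    "metal_object": 800,  # large specular component: demand a large segment
    "glass": 600,
    "default": 100,
}

def select_threshold(recognized_class: str) -> int:
    """Parameter control: pick the preset threshold for the recognized subject."""
    return PRESET_THRESHOLDS.get(recognized_class, PRESET_THRESHOLDS["default"])
```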
- The ranging device of the present disclosure can be used as a distance measuring device mounted on any kind of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, a robot, a construction machine, or an agricultural machine (tractor). Further, in the above embodiment, the case where the distance measuring device of the present disclosure is used as a means for acquiring a distance image (depth map) has been described as an example, but it can be applied not only to acquiring distance images but also to autofocus, which automatically focuses a camera.
- the distance measuring device of the present disclosure described above can be used as a distance measuring device mounted on various electronic devices.
- Examples of electronic devices equipped with a distance measuring device include mobile devices such as smartphones, digital cameras, and tablets. However, the electronic device is not limited to mobile devices.
- Here, smartphones and digital still cameras are illustrated as specific examples of electronic devices using the distance measuring device of the present disclosure.
- In the smartphone and the digital still camera, the distance image information (depth information) of the mounted distance measuring device is used as lens driving information for autofocus.
- The specific examples illustrated here are merely examples, and the electronic device of the present disclosure is not limited to these specific examples.
- FIG. 11A shows an external view of the smartphone according to the specific example 1 of the electronic device of the present disclosure as seen from the front side
- FIG. 11B shows an external view as seen from the back side.
- The smartphone 100 according to this specific example includes a display unit 120 on the front side of a housing 110. Further, the smartphone 100 includes an image pickup unit 130, for example, on the upper part of the back surface of the housing 110.
- the distance measuring device 1 can be mounted on, for example, a smartphone 100 which is an example of a mobile device having the above configuration.
- the light source 20 and the light detection unit 30 of the distance measuring device 1 can be arranged in the vicinity of the imaging unit 130, for example, as shown in FIG. 11B.
- the arrangement example of the light source 20 and the light detection unit 30 shown in FIG. 11B is an example, and is not limited to this arrangement example.
- The smartphone 100 according to the specific example 1 is manufactured by mounting the distance measuring device 1 according to the embodiment of the present disclosure. Since the smartphone 100 according to the specific example 1 is equipped with the above distance measuring device 1 and can therefore obtain accurate distance measurement results while solving the aliasing problem, a properly focused captured image can be obtained based on those results.
- FIG. 12A is an external view of the interchangeable-lens single-lens reflex digital still camera according to the specific example 2 of the electronic device of the present disclosure as viewed from the front side, and FIG. 12B is an external view as viewed from the back side.
- The interchangeable-lens single-lens reflex digital still camera 200 has, for example, an interchangeable photographing lens unit (interchangeable lens) 212 on the front right side of a camera body 211, and a grip portion 213 to be gripped by the photographer on the front left side.
- a monitor 214 is provided at substantially the center of the back surface of the camera body 211.
- a viewfinder (eyepiece window) 215 is provided above the monitor 214. By looking into the viewfinder 215, the photographer can visually recognize the optical image of the subject guided by the photographing lens unit 212 and determine the composition.
- the ranging device 1 can be mounted on, for example, a digital still camera 200 which is an example of a mobile device having the above configuration.
- the light source 20 and the light detection unit 30 of the distance measuring device 1 can be arranged in the vicinity of the photographing lens unit 212, for example, as shown in FIG. 12A.
- the arrangement example of the light source 20 and the light detection unit 30 shown in FIG. 12A is an example, and is not limited to this arrangement example.
- The digital still camera 200 according to the specific example 2 is manufactured by mounting the distance measuring device 1 according to the embodiment of the present disclosure. Since the digital still camera 200 according to the specific example 2 is equipped with the above distance measuring device 1 and can therefore obtain accurate distance measurement results while solving the aliasing problem, an in-focus captured image can be obtained based on those results.
- the present disclosure may also have the following configuration.
- A. Distance measuring device
- [A-1] A distance measuring device including: a light detection unit that receives light from a subject; a depth calculation unit that calculates depth information of the subject based on the output of the light detection unit; and an artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, validates segments in which the number of pixels of each segment exceeds a predetermined threshold value, and invalidates segments at or below the predetermined threshold value.
- [A-2] The distance measuring device according to the above [A-1], wherein the artifact removal unit labels each segment and creates a component object for each label.
- [A-3] The artifact removal unit assigns a different label to each segment by a process of attaching the same label to neighboring pixels whose depth information is continuous.
- [A-4] The component object consists of the number of pixels of the segment and the depth average value.
- [A-5] The artifact removal unit changes the predetermined threshold value according to the distance of the segment from the distance measuring device.
- [A-6] The artifact removal unit sets a larger threshold value for segments at relatively short distances from the distance measuring device and a smaller threshold value for segments at relatively long distances.
- B. Electronic device
- [B-1] An electronic device having a distance measuring device including: a light detection unit that receives light from a subject; a depth calculation unit that calculates depth information of the subject based on the output of the light detection unit; and an artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, validates segments in which the number of pixels of each segment exceeds a predetermined threshold value, and invalidates segments at or below the predetermined threshold value.
- [B-2] The electronic device according to the above [B-1], wherein the artifact removal unit labels each segment and creates a component object for each label.
- [B-3] The artifact removal unit assigns a different label to each segment by a process of attaching the same label to neighboring pixels whose depth information is continuous.
- [B-4] The component object consists of the number of pixels of the segment and the depth average value.
- [B-5] The artifact removal unit changes the predetermined threshold value according to the distance of the segment from the distance measuring device.
- [B-6] The artifact removal unit sets a larger threshold value for segments at relatively short distances from the distance measuring device and a smaller threshold value for segments at relatively long distances.
- 1 ... Distance measuring device, 10 ... Subject (measurement object), 20 ... Light source, 30 ... Light detection unit, 40 ... AE control unit, 41 ... Next frame light emission / exposure condition calculation unit, 42 ... Next frame light emission / exposure control unit, 50 ... Distance measuring unit, 51 ... Distance image calculation unit, 61 ... RGB camera, 62 ... Object recognition unit, 63 ... Parameter control unit
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Measurement Of Optical Distance (AREA)
Abstract
This distance measurement device is provided with an optical detection unit which receives light from a subject, a depth calculation unit which calculates depth information of the subject on the basis of output of the optical detection unit, and an artifact removal unit which divides an image into segments on the basis of the depth information, activates segments in which the number of pixels in the segment exceeds a prescribed threshold value, and deactivates segments in which the number of pixels is less than or equal to the threshold value.
Description
本開示は、測距装置及び測距装置の制御方法、並びに、電子機器に関する。
The present disclosure relates to a distance measuring device, a control method of the distance measuring device, and an electronic device.
被写体までの距離情報(距離画像情報)を取得する測距装置(距離測定装置)として、ToF(Time of Flight:光飛行時間)方式を利用した装置(センサ)がある。ToF方式は、被写体(測定対象物)に対して光源から光を照射し、その照射光が被写体で反射されて光検出部に戻ってまでの光の飛行時間を検出することにより、被写体までの距離を測定する方式である。
As a distance measuring device (distance measuring device) that acquires distance information (distance image information) to the subject, there is a device (sensor) that uses the ToF (Time of Flight) method. The ToF method irradiates a subject (measurement object) with light from a light source, and detects the flight time of the light until the irradiated light is reflected by the subject and returned to the light detection unit to reach the subject. This is a method for measuring distance.
ToF方式の一つとして、光源から発した所定の周期のパルス光が被写体で反射し、その反射光を光検出部が受光した際の周期を検出し、発光の周期と受光の周期との位相差から光飛行時間を計測することにより、被写体までの距離を測定する間接(indirect)ToF方式がある(例えば、特許文献1参照)。
As one of the ToF methods, pulsed light of a predetermined cycle emitted from a light source is reflected by a subject, and the cycle when the reflected light is received by the light detection unit is detected, and the position of the light emission cycle and the light reception cycle. There is an indirect ToF method that measures the distance to the subject by measuring the light flight time from the phase difference (see, for example, Patent Document 1).
間接ToF方式の測距装置では、光源から出射されるレーザ光の発光周波数(発光周期)に応じて測距可能な最大距離が決まるが、その最大距離を超えると、エイリアシングと呼称される折り返しひずみ(折り返した距離)が発生してしまう。そして、エイリアシングが発生する距離(エイリアス距離)にある物体については、実際の測定距離が、本来の正しい距離(真の距離)よりも近くなる。これにより、エイリアス距離にある物体の測距結果については、誤った測距結果として出力され、当該測距結果に基づいて各種の制御を行う後段のシステムでは、誤った制御が行われることになる。
In the indirect ToF type distance measuring device, the maximum distance that can be measured is determined according to the emission frequency (emission cycle) of the laser light emitted from the light source, but if the maximum distance is exceeded, aliasing distortion called aliasing is performed. (Aliased distance) will occur. Then, for an object at a distance where aliasing occurs (alias distance), the actual measurement distance becomes closer than the original correct distance (true distance). As a result, the distance measurement result of the object at the alias distance is output as an erroneous distance measurement result, and in the subsequent system that performs various controls based on the distance measurement result, the erroneous control is performed. ..
本開示は、エイリアス距離にある物体の測距結果を無効にし、正しい距離(真の距離)が測定されている物体の測距結果のみを出力することができる測距装置測距装置及び測距装置の制御方法、並びに、当該測距装置を有する電子機器を提供することを目的とする。
The present disclosure is a distance measuring device and a distance measuring device capable of invalidating the distance measuring result of an object at an alias distance and outputting only the distance measuring result of an object whose correct distance (true distance) is measured. It is an object of the present invention to provide a control method of a device and an electronic device having the distance measuring device.
上記の目的を達成するための本開示の測距装置は、
被写体からの光を受光する光検出部、
光検出部の出力に基づいて、被写体の深度情報を計算する深度計算部、及び、
深度計算部が算出した深度情報を基に画像をセグメントに分け、各セグメントのピクセル数が所定の閾値を超えるセグメントを有効とし、所定の閾値以下のセグメントを無効とするアーティファクト除去部、
を備える。 The ranging device of the present disclosure for achieving the above object is
Photodetector that receives light from the subject,
A depth calculation unit that calculates the depth information of the subject based on the output of the light detection unit, and
An artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, enables segments in which the number of pixels in each segment exceeds a predetermined threshold value, and invalidates segments below a predetermined threshold value.
To be equipped.
被写体からの光を受光する光検出部、
光検出部の出力に基づいて、被写体の深度情報を計算する深度計算部、及び、
深度計算部が算出した深度情報を基に画像をセグメントに分け、各セグメントのピクセル数が所定の閾値を超えるセグメントを有効とし、所定の閾値以下のセグメントを無効とするアーティファクト除去部、
を備える。 The ranging device of the present disclosure for achieving the above object is
Photodetector that receives light from the subject,
A depth calculation unit that calculates the depth information of the subject based on the output of the light detection unit, and
An artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, enables segments in which the number of pixels in each segment exceeds a predetermined threshold value, and invalidates segments below a predetermined threshold value.
To be equipped.
上記の目的を達成するための本開示の測距装置の制御方法は、
被写体からの光を受光する光検出部、及び、
光検出部の出力に基づいて、被写体の深度情報を計算する深度計算部、
を備える測距装置において、
深度情報を基に画像をセグメントに分け、各セグメントのピクセル数が所定の閾値を超えるセグメントを有効とし、所定の閾値以下のセグメントを無効とする。 The control method of the ranging device of the present disclosure for achieving the above object is described.
A photodetector that receives light from the subject, and
Depth calculation unit, which calculates the depth information of the subject based on the output of the light detection unit,
In a distance measuring device equipped with
The image is divided into segments based on the depth information, the segments in which the number of pixels of each segment exceeds a predetermined threshold value are valid, and the segments below the predetermined threshold value are invalidated.
被写体からの光を受光する光検出部、及び、
光検出部の出力に基づいて、被写体の深度情報を計算する深度計算部、
を備える測距装置において、
深度情報を基に画像をセグメントに分け、各セグメントのピクセル数が所定の閾値を超えるセグメントを有効とし、所定の閾値以下のセグメントを無効とする。 The control method of the ranging device of the present disclosure for achieving the above object is described.
A photodetector that receives light from the subject, and
Depth calculation unit, which calculates the depth information of the subject based on the output of the light detection unit,
In a distance measuring device equipped with
The image is divided into segments based on the depth information, the segments in which the number of pixels of each segment exceeds a predetermined threshold value are valid, and the segments below the predetermined threshold value are invalidated.
また、上記の目的を達成するための本開示の電子機器は、上記の構成の測距装置を有する。
Further, the electronic device of the present disclosure for achieving the above object has a distance measuring device having the above configuration.
以下、本開示の技術を実施するための形態(以下、「実施形態」と記述する)について図面を用いて詳細に説明する。本開示の技術は実施形態に限定されるものではなく、実施形態における種々の数値などは例示である。以下の説明において、同一要素又は同一機能を有する要素には同一符号を用いることとし、重複する説明は省略する。尚、説明は以下の順序で行う。
1.本開示の測距装置及び電子機器、全般に関する説明
2.ToF方式の測距システム
3.本開示の技術が適用される測距装置
3-1.システム構成
3-2.光検出部の構成例
3-3.画素の回路構成例
3-4.距離画像計算部の構成例
3-5.エイリアシング課題について
4.本開示の実施形態
4-1.エイリアシング課題を解決するための処理例
4-2.セグメントのピクセル数の閾値設定
5.変形例
6.応用例
7.本開示の電子機器
7-1.具体例1(スマートフォンの例)
7-2.具体例2(デジタルスチルカメラの例)
8.本開示がとることができる構成 Hereinafter, embodiments for carrying out the technique of the present disclosure (hereinafter, referred to as “embodiments”) will be described in detail with reference to the drawings. The technique of the present disclosure is not limited to the embodiment, and various numerical values and the like in the embodiment are examples. In the following description, the same reference numerals will be used for the same elements or elements having the same function, and duplicate description will be omitted. The explanation will be given in the following order.
1. 1. Description of the ranging device and electronic device of the present disclosure, andgeneral description 2. ToF type ranging system 3. Distance measuring device to which the technology of the present disclosure is applied 3-1. System configuration 3-2. Configuration example of photodetector 3-3. Pixel circuit configuration example 3-4. Configuration example of the distance image calculation unit 3-5. Aliasing issues 4. Embodiments of the present disclosure 4-1. Processing example for solving the aliasing problem 4-2. Threshold setting for the number of pixels in a segment 5. Modification example 6. Application example 7. Electronic device of the present disclosure 7-1. Specific example 1 (example of smartphone)
7-2. Specific example 2 (example of digital still camera)
8. Configuration that can be taken by this disclosure
1.本開示の測距装置及び電子機器、全般に関する説明
2.ToF方式の測距システム
3.本開示の技術が適用される測距装置
3-1.システム構成
3-2.光検出部の構成例
3-3.画素の回路構成例
3-4.距離画像計算部の構成例
3-5.エイリアシング課題について
4.本開示の実施形態
4-1.エイリアシング課題を解決するための処理例
4-2.セグメントのピクセル数の閾値設定
5.変形例
6.応用例
7.本開示の電子機器
7-1.具体例1(スマートフォンの例)
7-2.具体例2(デジタルスチルカメラの例)
8.本開示がとることができる構成 Hereinafter, embodiments for carrying out the technique of the present disclosure (hereinafter, referred to as “embodiments”) will be described in detail with reference to the drawings. The technique of the present disclosure is not limited to the embodiment, and various numerical values and the like in the embodiment are examples. In the following description, the same reference numerals will be used for the same elements or elements having the same function, and duplicate description will be omitted. The explanation will be given in the following order.
1. 1. Description of the ranging device and electronic device of the present disclosure, and
7-2. Specific example 2 (example of digital still camera)
8. Configuration that can be taken by this disclosure
<Description of the distance measuring device and electronic device of the present disclosure, and general description>
In the distance measuring device and electronic device of the present disclosure, the artifact removal unit may be configured to label each segment and to create a component object for each label. Further, the artifact removal unit may be configured to assign a different label to each segment by a process of assigning the same label to pixels whose depth information is continuous with that of neighboring pixels. Further, the component object may be configured to consist of the number of pixels of the segment and the depth average value.
In the distance measuring device and electronic device of the present disclosure including the above-described preferable configurations, the artifact removal unit may be configured to change the predetermined threshold value according to the distance of the segment from the distance measuring device. Further, the artifact removal unit may be configured to set a larger threshold value for segments at a relatively short distance from the distance measuring device and a smaller threshold value for segments at a relatively long distance.
<ToF type distance measuring system>
FIG. 1 is a conceptual diagram of a ToF type distance measuring system. In the distance measuring device 1 according to this example, the ToF method is adopted as the measuring method for measuring the distance to the subject 10, which is the object of measurement. The ToF method measures the time from when light is emitted toward the subject 10 until that light, reflected by the subject 10, returns. In order to realize distance measurement by the ToF method, the distance measuring device 1 includes a light source 20 that emits light toward the subject 10 (for example, laser light having a peak wavelength in the infrared wavelength region) and a photodetector 30 that detects the reflected light returning from the subject 10.
<Distance measuring device to which the technology of the present disclosure is applied>
[System configuration]
FIG. 2 is a block diagram showing an example of the configuration of a ToF type distance measuring device to which the technique of the present disclosure is applied. In addition to the light source 20 and the photodetector 30, the distance measuring device 1 according to this application example (that is, the distance measuring device 1 of the present disclosure) includes an AE (Automatic Exposure) control unit 40 that performs exposure control based on the signal values output by the photodetector 30, and a distance measuring unit 50. The ToF type distance measuring device 1 according to this application example detects distance information for each pixel of the photodetector 30 and can acquire a highly accurate distance image (depth map) in units of imaging frames.
The distance measuring device 1 according to this application example is an indirect ToF type distance measuring device (a so-called indirect ToF type distance image sensor). In the indirect ToF type distance measuring device 1, pulsed light of a predetermined period emitted from the light source 20 is reflected by the object of measurement (subject), and the photodetector 30 detects the period of the received reflected light. The distance to the object of measurement is then obtained by measuring the time of flight of the light from the phase difference between the emission period and the reception period.
Under the control of the AE control unit 40, the light source 20 irradiates the object of measurement with light by repeating an on/off operation at a predetermined period. As the irradiation light of the light source 20, near-infrared light around 850 nm is often used, for example. The photodetector 30 receives the light emitted from the light source 20 that has been reflected by the object of measurement and returned, and detects distance information for each pixel. The photodetector 30 outputs the RAW image data of the current frame, including the distance information detected for each pixel, together with the emission/exposure setting information, and supplies them to the AE control unit 40 and the distance measuring unit 50.
The AE control unit 40 includes a next-frame emission/exposure condition calculation unit 41 and a next-frame emission/exposure control unit 42. The next-frame emission/exposure condition calculation unit 41 calculates the emission/exposure conditions of the next frame based on the RAW image data of the current frame supplied from the photodetector 30 and the emission/exposure setting information. The emission/exposure conditions of the next frame are the emission time and emission intensity of the light source 20 when acquiring the distance image of the next frame, and the exposure time of the photodetector 30. The next-frame emission/exposure control unit 42 controls the emission time and emission intensity of the light source 20 and the exposure time of the photodetector 30 for the next frame, based on the emission/exposure conditions calculated by the next-frame emission/exposure condition calculation unit 41.
The distance measuring unit 50 includes a distance image calculation unit 51 that calculates a distance image. The distance image calculation unit 51 calculates a distance image by performing computations using the RAW image data of the current frame, which includes the distance information detected for each pixel of the photodetector 30, and outputs the result outside the distance measuring device 1 as distance image information including the depth, which is the depth information of the subject, and the confidence value, which is the light reception information of the photodetector 30. Here, the distance image is, for example, an image in which a distance value (depth value) based on the distance information detected for each pixel is reflected in each pixel.
[Configuration example of the photodetector]
Here, a specific configuration example of the photodetector 30 will be described with reference to FIG. 3. FIG. 3 is a block diagram showing an example of the configuration of the photodetector 30.
The photodetector 30 has a laminated structure including a sensor chip 31 and a circuit chip 32 laminated on the sensor chip 31. In this laminated structure, the sensor chip 31 and the circuit chip 32 are electrically connected through connecting portions (not shown) such as vias (VIA) or Cu-Cu junctions. FIG. 3 illustrates a state in which the wiring of the sensor chip 31 and the wiring of the circuit chip 32 are electrically connected via these connecting portions.
A pixel array unit 33 is formed on the sensor chip 31. The pixel array unit 33 includes a plurality of pixels 34 arranged in a matrix (array) in a two-dimensional grid pattern on the sensor chip 31. Each of the plurality of pixels 34 in the pixel array unit 33 receives incident light (for example, near-infrared light), performs photoelectric conversion, and outputs an analog pixel signal. Two vertical signal lines VSL1 and VSL2 are wired in the pixel array unit 33 for each pixel column. If the number of pixel columns of the pixel array unit 33 is M (M is an integer), a total of (2 × M) vertical signal lines VSL are wired in the pixel array unit 33.
Each of the plurality of pixels 34 has a first tap A and a second tap B (the details of which will be described later). Of the two vertical signal lines VSL1 and VSL2, the vertical signal line VSL1 carries an analog pixel signal AINP1 based on the charge of the first tap A of the pixel 34 of the corresponding pixel column. Similarly, an analog pixel signal AINP2 based on the charge of the second tap B of the pixel 34 of the corresponding pixel column is output to the vertical signal line VSL2. The analog pixel signals AINP1 and AINP2 will be described later.
A row selection unit 35, a column signal processing unit 36, an output circuit unit 37, and a timing control unit 38 are arranged on the circuit chip 32. The row selection unit 35 drives the pixels 34 of the pixel array unit 33 in units of pixel rows and causes them to output the pixel signals AINP1 and AINP2. Driven by the row selection unit 35, the analog pixel signals AINP1 and AINP2 output from the pixels 34 of the selected row are supplied to the column signal processing unit 36 through the two vertical signal lines VSL1 and VSL2.
The column signal processing unit 36 includes, for example, a plurality of analog-to-digital converters (ADCs) 39, one provided per pixel column, corresponding to the pixel columns of the pixel array unit 33. Each analog-to-digital converter 39 performs analog-to-digital conversion on the analog pixel signals AINP1 and AINP2 supplied through the vertical signal lines VSL1 and VSL2, and outputs the results to the output circuit unit 37. The output circuit unit 37 executes CDS (Correlated Double Sampling) processing and the like on the digitized pixel signals AINP1 and AINP2 output from the column signal processing unit 36, and outputs the result outside the circuit chip 32.
The timing control unit 38 generates various timing signals, clock signals, control signals, and the like, and controls the driving of the row selection unit 35, the column signal processing unit 36, the output circuit unit 37, and so on based on these signals.
[Pixel circuit configuration example]
FIG. 4 is a circuit diagram showing an example of the circuit configuration of the pixel 34 in the photodetector 30.
The pixel 34 according to this example has, for example, a photodiode 341 as a photoelectric conversion unit. In addition to the photodiode 341, the pixel 34 includes an overflow transistor 342, two transfer transistors 343 and 344, two reset transistors 345 and 346, two floating diffusion layers 347 and 348, two amplification transistors 349 and 350, and two selection transistors 351 and 352. The two floating diffusion layers 347 and 348 correspond to the first and second taps A and B shown in FIG. 3 described above (hereinafter sometimes simply referred to as "taps A and B").
The photodiode 341 photoelectrically converts the received light to generate charge. The photodiode 341 can have, for example, a back-illuminated pixel structure that captures light irradiated from the back surface side of the substrate. However, the pixel structure is not limited to the back-illuminated type; a front-illuminated pixel structure that captures light irradiated from the front surface side of the substrate can also be used.
The overflow transistor 342 is connected between the cathode electrode of the photodiode 341 and the power supply line of the power supply voltage VDD, and has the function of resetting the photodiode 341. Specifically, the overflow transistor 342 becomes conductive in response to the overflow gate signal OFG supplied from the row selection unit 35, thereby sequentially discharging the charge of the photodiode 341 to the power supply line of the power supply voltage VDD.
The two transfer transistors 343 and 344 are connected between the cathode electrode of the photodiode 341 and the two floating diffusion layers 347 and 348 (taps A and B), respectively. The transfer transistors 343 and 344 become conductive in response to the transfer signal TRG supplied from the row selection unit 35, thereby sequentially transferring the charge generated by the photodiode 341 to the floating diffusion layers 347 and 348, respectively.
The floating diffusion layers 347 and 348, corresponding to the first and second taps A and B, accumulate the charge transferred from the photodiode 341, convert it into a voltage signal whose value corresponds to the amount of charge, and thereby generate the analog pixel signals AINP1 and AINP2.
The two reset transistors 345 and 346 are connected between the two floating diffusion layers 347 and 348, respectively, and the power supply line of the power supply voltage VDD. The reset transistors 345 and 346 become conductive in response to the reset signal RST supplied from the row selection unit 35, thereby extracting the charge from each of the floating diffusion layers 347 and 348 and initializing the charge amount.
The two amplification transistors 349 and 350 are connected between the power supply line of the power supply voltage VDD and the two selection transistors 351 and 352, respectively, and each amplifies the voltage signal obtained by charge-to-voltage conversion in the floating diffusion layer 347 or 348.
The two selection transistors 351 and 352 are connected between the two amplification transistors 349 and 350, respectively, and the vertical signal lines VSL1 and VSL2, respectively. The selection transistors 351 and 352 become conductive in response to the selection signal SEL supplied from the row selection unit 35, thereby outputting the voltage signals amplified by the amplification transistors 349 and 350 to the two vertical signal lines VSL1 and VSL2 as the analog pixel signals AINP1 and AINP2.
The two vertical signal lines VSL1 and VSL2 are connected, for each pixel column, to the input end of one analog-to-digital converter 39 in the column signal processing unit 36, and transmit the analog pixel signals AINP1 and AINP2 output from the pixels 34 of each pixel column to the analog-to-digital converter 39.
The circuit configuration of the pixel 34 is not limited to the circuit configuration illustrated in FIG. 4, as long as it is a circuit configuration capable of generating the analog pixel signals AINP1 and AINP2 by photoelectric conversion.
Here, the calculation of distance by the ToF method will be described with reference to FIG. 5. FIG. 5 is a timing waveform diagram for explaining the calculation of distance in the ToF type distance measuring device 1. The light source 20 and the photodetector 30 in the ToF type distance measuring device 1 operate at the timings shown in the timing waveform diagram of FIG. 5.
Under the control of the AE control unit 40, the light source 20 irradiates the object of measurement with light for a predetermined period, for example, the pulse emission time Tp. The irradiation light emitted from the light source 20 is reflected by the object of measurement and returns. This reflected light (active light) is received by the photodiode 341. The time from the start of irradiation of the object until the photodiode 341 receives the reflected light, that is, the time of flight of the light, corresponds to the distance from the distance measuring device 1 to the object of measurement.
As shown in FIG. 5, the photodiode 341 receives the reflected light from the object of measurement for the period of the pulse emission time Tp from the time the irradiation of the object begins. The light received by the photodiode 341 at this time includes not only the reflected light (active light) that is irradiated onto the object of measurement and returns after being reflected by it, but also ambient light that has been reflected and scattered by other objects, the atmosphere, and so on.
During one light-receiving operation, the charge photoelectrically converted by the photodiode 341 is transferred to tap A (floating diffusion layer 347) and accumulated. A signal n0 whose voltage value corresponds to the amount of charge accumulated in the floating diffusion layer 347 is then acquired from tap A. When the accumulation timing of tap A ends, the charge photoelectrically converted by the photodiode 341 is transferred to tap B (floating diffusion layer 348) and accumulated. A signal n1 whose voltage value corresponds to the amount of charge accumulated in the floating diffusion layer 348 is then acquired from tap B.
In this way, tap A and tap B are driven with accumulation timings 180 degrees out of phase (that is, with completely opposite phases), whereby the signal n0 and the signal n1 are acquired. Such driving is repeated a plurality of times, and the signals n0 and n1 are accumulated and integrated, whereby the accumulated signal N0 and the accumulated signal N1 are acquired.
For example, for one pixel 34, light is received twice in each phase, and signals at 0 degrees, 90 degrees, 180 degrees, and 270 degrees are each accumulated four times in tap A and tap B. The distance D from the distance measuring device 1 to the object of measurement is calculated based on the accumulated signals N0 and N1 acquired in this way.
The accumulated signals N0 and N1 contain not only the component of the reflected light (active light) that returns after being reflected by the object of measurement, but also the component of the ambient light reflected and scattered by other objects, the atmosphere, and so on. Therefore, in the operation described above, in order to remove the influence of this ambient light component and leave only the reflected light (active light) component, a signal n2 based on the ambient light is also accumulated and integrated, and an accumulated signal N2 for the ambient light component is acquired.
Using the accumulated signals N0 and N1 containing the ambient light component and the accumulated signal N2 for the ambient light component acquired in this way, the distance D from the distance measuring device 1 to the object of measurement can be calculated by arithmetic processing based on the following equations (1) and (2).
In equations (1) and (2), D represents the distance from the distance measuring device 1 to the object of measurement, c represents the speed of light, and Tp represents the pulse emission time.
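The equations themselves appear as images in the original publication and are not reproduced in the text. A plausible reconstruction, assuming the common two-tap pulsed indirect ToF formulation in which the ambient accumulation N2 is subtracted from each tap before taking the ratio, is the following sketch (not a verbatim copy of the patent's formulas):

\Delta t = T_p \cdot \frac{N_1 - N_2}{(N_0 - N_2) + (N_1 - N_2)} \quad \cdots (1)

D = \frac{c \cdot \Delta t}{2} \quad \cdots (2)

Under this reading, the ratio in equation (1) recovers the fraction of the pulse width by which the echo is delayed, and the factor of 1/2 in equation (2) accounts for the round trip of the light to the object and back.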
The distance image calculation unit 51 shown in FIG. 2 calculates the distance D from the distance measuring device 1 to the object of measurement by arithmetic processing based on equations (1) and (2) above, using the accumulated signals N0 and N1 containing the ambient light component and the accumulated signal N2 for the ambient light component output from the photodetector 30, and outputs the result as distance image information. As the distance image information, for example, image information colored with a shade corresponding to the distance D can be used. Although the calculated distance D is output here as distance image information, the calculated distance D may instead be output as distance information as it is.
[Configuration example of the distance image calculation unit]
FIG. 6 shows an example of the configuration of the distance image calculation unit 51 of the distance measuring unit 50 in the distance measuring device 1. The distance image calculation unit 51 according to this example includes a depth calculation unit 511, a calibration unit 512, a noise reduction unit 513, and an artifact removal unit 514.
In the distance image calculation unit 51 configured as described above, the depth calculation unit 511 uses the RAW image data given by the photodetector 30 to calculate depth information and confidence value information from the phase difference with which the light emitted from the light source 20 and reflected by the subject arrives at the photodetector 30. Here, the "depth" is the depth information (distance information) of the subject, and the "confidence value" is the light reception information of the photodetector 30, that is, the amount (degree) of light emitted from the light source 20 that is reflected by the subject and returns to the photodetector 30.
The calibration unit 512, for example, aligns the phase of the light emitted from the light source 20 with that of the light incident on the photodetector 30, and performs calibration processing such as waveform correction and temperature correction. The noise reduction unit 513 consists of, for example, a low-pass filter and performs processing to reduce noise. The artifact removal unit 514 has various filter functions and performs processing to exclude, from the depth and confidence value information that has passed through the noise reduction unit 513, results whose distance measurement is erroneous or for which the light reception confidence value of the photodetector 30 is low.
Although the calibration unit 512, the noise reduction unit 513, and the artifact removal unit 514 are illustrated here as being arranged in that order, the order of arrangement is arbitrary and may be changed.
[Aliasing issues]
In the indirect ToF type distance measuring device described above, the maximum measurable distance is determined by the emission frequency (emission period) of the laser light emitted from the light source 20. As an example, FIG. 7 shows the relationship between the true depth and the measured depth when the laser drive frequency (emission frequency) of the light source 20 is 100 MHz. In this example, the maximum measurable distance is 1.5 m.
Since the indirect ToF method measures distance based on phase difference, and the phase returns to the same value after one full cycle, aliasing (a folded-back distance) occurs when the distance to the subject exceeds the maximum measurable distance. The aliasing problem is particularly likely to occur with highly reflective subjects having a large specular reflection component, such as metal and glass.
For an object at an alias distance (a distance at which aliasing occurs), the actual measured distance (measured depth) becomes shorter than the original correct distance (true depth). For example, the measured distance (measured depth) of a subject at a distance of 2.5 m becomes 1.0 m. The distance measurement result for an object at an alias distance is thus output from the distance measuring device 1 as an erroneous result.
As a result, a downstream system that performs various kinds of control based on the distance measurement results of the distance measuring device 1 (distance image information including depth and confidence values) will perform erroneous control based on the erroneous measurement results. As an example, when the distance measurement results of the distance measuring device 1 are applied to autofocus, which automatically brings the camera into focus, accurate focus control will not be performed.
Here, consider the size of the subject as it appears in the image. FIG. 8A shows the size of the subject in the image when the true depth is 1.0 m and the measured depth is 1.0 m, and FIG. 8B shows the size of the subject in the image when the true depth is 2.5 m and the measured depth is 1.0 m. FIG. 8B is an example of a subject at an alias distance.
In both FIG. 8A and FIG. 8B, the measured distance (measured depth) is the same, 1.0 m. As is clear from the comparison of FIG. 8A and FIG. 8B, the subjects are measured as being at the same distance of 1.0 m, but they appear with different sizes. That is, at a true depth of 2.5 m (FIG. 8B), the subject appears smaller in the image than at a true depth of 1.0 m (FIG. 8A). This means that the size of a subject in the image differs depending on its distance from the distance measuring device 1.
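This size difference can be quantified with a simple pinhole-camera model. The symbols below, focal length f, pixel pitch p, and physical subject width W, are assumptions introduced here for illustration rather than quantities defined in the text:

N(d) \approx \left( \frac{f\,W}{p\,d} \right)^{2}, \qquad \frac{N(2.5\,\mathrm{m})}{N(1.0\,\mathrm{m})} = \left( \frac{1.0}{2.5} \right)^{2} = 0.16

That is, an aliased subject at a true depth of 2.5 m occupies only about 16% of the pixels that a genuine subject at 1.0 m would, and this is the property that the embodiment described below exploits.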
<Embodiments of the present disclosure>
The embodiments of the present disclosure exploit the property described above, namely that the size of a subject in the image changes according to its distance from the distance measuring device 1, to solve the aliasing problem peculiar to indirect ToF type distance measuring devices. In the present embodiment, the processing that solves this aliasing problem is executed as one of the filter functions in the artifact removal unit 514 in the distance image calculation unit 51 of the distance measuring unit 50.
[Processing example for solving the aliasing problem]
An example of the processing for solving the aliasing problem, which is an example of the processing of the control method of the distance measuring device of the present disclosure and is executed in the artifact removal unit 514, will be described below with reference to the flowchart of FIG. 9.
In the following, it is assumed that the functions of the artifact removal unit 514 are realized by a processor, and that the series of processes for solving the aliasing problem is executed under the control of the processor constituting the artifact removal unit 514.
The processor constituting the artifact removal unit 514 (hereinafter simply referred to as the "processor") performs segmentation processing that divides the image into segments (parts/objects) based on the depth information calculated by the depth calculation unit 511 of FIG. 6 (step S11). In this segmentation processing, taking the cases of FIGS. 8A and 8B as an example, the large background portion and the round portion in the center are separated into their own segments.
Next, the processor performs labeling processing that assigns labels 1 to n (n is an integer of 2 or more) to the segments (step S12). In this labeling processing, the image is scanned row by row, and the same label is assigned to pixels whose depth information is continuous with that of neighboring pixels, so that a different label is assigned to each segment (part/object).
Next, the processor creates a component object for each of the labels 1 to n (step S13). Here, a "component object" consists of the number of pixels of the segment and its depth average value (distance average value).
Next, the processor increments the label counter i (step S14), and then determines the threshold value for the number of pixels of the segment from the depth average value for label 1 (i = 1) (step S15). In the processing of step S15, the threshold value for the number of pixels is determined for each segment in consideration of its distance; more specifically, a larger threshold value is set for segments at a relatively short distance from the distance measuring device 1, and a smaller threshold value is set for segments at a relatively long distance.
For short distance / long distance, a predetermined distance (for example, 1 m) can be defined as the reference (threshold). However, the threshold values for the number of pixels of a segment are not limited to the two patterns of short distance / long distance, and may be further subdivided into three or more patterns. The details of setting the threshold value for the number of pixels of a segment will be described later.
Subsequently, the processor determines whether or not the number of pixels of the segment of label 1 exceeds the threshold value (step S16). In this determination processing, if the processor determines that the number of pixels exceeds the threshold value (YES in S16), it validates the segment (step S17), and if it determines that the number of pixels is equal to or less than the threshold value (NO in S16), it invalidates the segment (step S18).
After the processing of step S17 or step S18 is completed, the processor determines whether or not the validation/invalidation processing of the segments has been completed for all labels 1 to n (step S19); if it has (YES in S19), the series of processes for solving the aliasing problem described above ends. If the processing has not been completed for all labels 1 to n (NO in S19), the processor returns to step S14, increments the label counter i, and thereafter repeats the processing of steps S15 to S19 until it determines that the validation/invalidation processing of the segments has been completed for all labels 1 to n.
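As a concrete illustration of steps S11 through S19, the following is a minimal sketch in Python. It assumes the depth map is given as a 2D array with a boolean validity mask, uses 4-neighbor connectivity with a simple depth-continuity tolerance, and its function names, tolerance, and threshold values are hypothetical stand-ins for what the flowchart describes, not the patent's actual implementation.

import numpy as np
from collections import deque

def label_segments(depth, valid, tol=0.05):
    """Steps S11/S12: assign the same label to 4-neighbor pixels whose
    depth is continuous (differs by less than tol meters)."""
    labels = np.zeros(depth.shape, dtype=int)
    next_label = 0
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if not valid[y, x] or labels[y, x] != 0:
                continue
            next_label += 1
            labels[y, x] = next_label
            queue = deque([(y, x)])
            while queue:  # flood-fill one connected segment
                cy, cx = queue.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and valid[ny, nx]
                            and labels[ny, nx] == 0
                            and abs(depth[ny, nx] - depth[cy, cx]) < tol):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels, next_label

def pixel_threshold(mean_depth, near_thresh=400, far_thresh=50, boundary=1.0):
    """Step S15: larger threshold for near segments, smaller for far ones.
    The numeric values here are placeholders, not values from the patent."""
    return near_thresh if mean_depth < boundary else far_thresh

def remove_aliased_segments(depth, valid):
    """Steps S13 to S19: build component objects (pixel count, depth mean)
    and invalidate segments whose pixel count is at or below the threshold."""
    labels, n = label_segments(depth, valid)
    for i in range(1, n + 1):
        mask = labels == i
        count = int(mask.sum())                 # component object: pixel count
        mean_depth = float(depth[mask].mean())  # component object: depth mean
        if count <= pixel_threshold(mean_depth):
            valid[mask] = False                 # step S18: invalidate segment
    return valid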
As described above, in the distance measuring device 1 according to the present embodiment, the depth calculation unit 511 calculates the depth information of the subject based on the RAW image data given by the photodetector 30, segmentation processing is performed based on that depth information, and segments whose number of pixels is smaller than the threshold value are invalidated. This processing solves the aliasing problem that is peculiar to indirect ToF type distance measuring devices. That is, the distance measurement result of an object at an alias distance is not output from the distance measuring device 1 as an erroneous result; only the distance measurement results of objects whose correct distance (true distance) has been measured are output.
[Threshold setting for the number of pixels in a segment]
Here, the setting of the threshold value for the number of pixels of a segment, which is the criterion for validating or invalidating a segment, will be described.
The size of the subject (object) to be measured can be defined in advance, the number of pixels of the segment imaged at each distance can be calculated from the focal length and pixel pitch of the photodetector 30, and this calculated number of pixels can be used as the threshold value (a worked sketch of this calculation appears after the list below).
- The subject to be measured should be chosen from objects in which the aliasing problem is particularly likely to occur, for example, highly reflective objects with a large specular reflection component, such as metal and glass.
- The aliasing problem does not occur over the entire surface of a susceptible object; it tends to occur near the region that squarely faces the laser light. Therefore, even for a large object (subject), the size of the region in which the aliasing problem occurs may be used to set the threshold instead.
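A minimal sketch of this calculation, assuming a pinhole-camera model; the specific numbers (focal length, pixel pitch, target size) are illustrative assumptions, not values from the text:

def segment_pixel_threshold(distance_m, focal_mm=4.0, pitch_um=10.0, target_mm=100.0):
    """Expected pixel count of a square target of side target_mm imaged
    at distance_m, usable as the per-distance threshold. All parameter
    defaults are illustrative assumptions."""
    side_px = (focal_mm * target_mm) / (pitch_um * 1e-3 * distance_m * 1e3)
    return side_px ** 2

# The count falls with the square of the distance, so nearer segments
# naturally get a larger threshold and farther segments a smaller one:
print(round(segment_pixel_threshold(1.0)))  # ~1600 pixels at 1.0 m
print(round(segment_pixel_threshold(2.5)))  # ~256 pixels at 2.5 m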
<Modification example>
The technique of the present disclosure has been described above based on preferred embodiments, but the technique of the present disclosure is not limited to those embodiments. The configurations and structures of the distance measuring devices described in the above embodiments are examples and can be changed as appropriate. For example, in the above embodiment, the threshold value for the number of pixels of a segment is determined based on the depth average value, but the present technique is not limited to this, and the following configuration can also be adopted.
For example, as shown in FIG. 10, an RGB camera 61 is used, and a threshold value is set in advance for each type of subject. Then, based on the imaging output of the RGB camera 61, an object recognition unit 62 performs subject recognition, and based on the recognition result, a parameter control unit 63 selects the preset threshold value. However, the subject recognition may be performed not on the basis of the output of the RGB camera 61 but on the basis of the final output of the distance measuring device 1, that is, the distance image information including the depth and confidence value information, or on the basis of both the output of the RGB camera 61 and the final output of the distance measuring device 1.
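A minimal sketch of this modification, assuming the object recognition step yields a class label per segment; the class names and threshold values below are hypothetical placeholders, not taken from the text:

# Preset per-subject thresholds, as chosen by the parameter control unit 63;
# the classes and values are illustrative assumptions.
PRESET_THRESHOLDS = {
    "metal": 800,   # highly specular subjects get stricter thresholds
    "glass": 800,
    "person": 200,
    "default": 100,
}

def select_threshold(recognized_class: str) -> int:
    """Pick the preset pixel-count threshold for a recognized subject."""
    return PRESET_THRESHOLDS.get(recognized_class, PRESET_THRESHOLDS["default"])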
<Application example>
The distance measuring device of the present disclosure can be used as a distance measuring device mounted on any type of moving body, such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, robot, construction machine, or agricultural machine (tractor). Further, although the above embodiments have described the distance measuring device of the present disclosure as a means of acquiring a distance image (depth map), it can be applied not only as a means of acquiring a distance image but also to autofocus, which automatically brings a camera into focus.
<Electronic device of the present disclosure>
The distance measuring device of the present disclosure described above can be used as a distance measuring device mounted on various electronic devices. Examples of electronic devices equipped with a distance measuring device include mobile devices such as smartphones, digital cameras, and tablets. However, the device is not limited to mobile devices.
A smartphone and a digital still camera are illustrated below as specific examples of electronic devices using the distance measuring device of the present disclosure. The distance image information (depth information) of the distance measuring device mounted on the smartphone or digital still camera is used as lens drive information for autofocus. However, the specific examples illustrated here are merely examples, and the disclosure is not limited to them.
[Specific example 1: Example of a smartphone]
For the smartphone according to specific example 1 of the electronic device of the present disclosure, FIG. 11A shows an external view seen from the front side, and FIG. 11B shows an external view seen from the back side. The smartphone 100 according to this specific example includes a display unit 120 on the front side of a housing 110. The smartphone 100 also includes an imaging unit 130, for example, in the upper portion of the back surface side of the housing 110.
The distance measuring device 1 according to the embodiment of the present disclosure described above can be mounted on and used in, for example, the smartphone 100, which is an example of a mobile device having the above configuration. In this case, the light source 20 and the photodetector 30 of the distance measuring device 1 can be arranged in the vicinity of the imaging unit 130, as shown in FIG. 11B, for example. However, the arrangement of the light source 20 and the photodetector 30 shown in FIG. 11B is an example, and the arrangement is not limited to it.
As described above, the smartphone 100 according to specific example 1 is produced by mounting the distance measuring device 1 according to the embodiment of the present disclosure. By mounting the above distance measuring device 1, the smartphone 100 according to specific example 1 can obtain accurate distance measurement results while solving the aliasing problem, and can therefore obtain a properly focused captured image based on those results.
[Specific example 2: Example of a digital still camera]
For the interchangeable-lens single-lens reflex type digital still camera according to specific example 2 of the electronic device of the present disclosure, FIG. 12A is an external view seen from the front side, and FIG. 12B is an external view seen from the back side.
The interchangeable-lens single-lens reflex type digital still camera 200 has, for example, an interchangeable photographing lens unit (interchangeable lens) 212 on the front right side of a camera body 211, and a grip portion 213 for the photographer to hold on the front left side. A monitor 214 is provided at approximately the center of the back surface of the camera body 211. A viewfinder (eyepiece window) 215 is provided above the monitor 214. By looking into the viewfinder 215, the photographer can visually recognize the optical image of the subject guided by the photographing lens unit 212 and determine the composition.
The distance measuring device 1 according to the embodiment of the present disclosure described above can be mounted on and used in, for example, the digital still camera 200, which is an example of a mobile device having the above configuration. In this case, the light source 20 and the photodetector 30 of the distance measuring device 1 can be arranged in the vicinity of the photographing lens unit 212, as shown in FIG. 12A, for example. However, the arrangement of the light source 20 and the photodetector 30 shown in FIG. 12A is an example, and the arrangement is not limited to it.
As described above, the digital still camera 200 according to specific example 2 is produced by mounting the distance measuring device 1 according to the embodiment of the present disclosure. By mounting the above distance measuring device 1, the digital still camera 200 according to specific example 2 can obtain accurate distance measurement results while solving the aliasing problem, and can therefore obtain a properly focused captured image based on those results.
<Configurations that the present disclosure can take>
The present disclosure may also take the following configurations.
≪A. Distance measuring device≫
[A-1] A distance measuring device comprising:
a light detection unit that receives light from a subject;
a depth calculation unit that calculates depth information of the subject based on an output of the light detection unit; and
an artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, validates segments in which the number of pixels exceeds a predetermined threshold, and invalidates segments in which the number of pixels is at or below the predetermined threshold.
[A-2] The distance measuring device according to [A-1] above, wherein the artifact removal unit labels each segment and creates a component object for each label.
[A-3] The distance measuring device according to [A-2] above, wherein the artifact removal unit assigns a different label to each segment by attaching the same label to neighboring pixels whose depth information is continuous.
[A-4] The distance measuring device according to [A-2] or [A-3] above, wherein the component object is the number of pixels and the average depth value of the segment.
[A-5] The distance measuring device according to any one of [A-1] to [A-4] above, wherein the artifact removal unit changes the predetermined threshold according to the distance of the segment from the distance measuring device.
[A-6] The distance measuring device according to [A-5] above, wherein the artifact removal unit sets a larger threshold for segments at a relatively short distance from the distance measuring device and a smaller threshold for segments at a relatively long distance.
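The artifact removal described in [A-1] to [A-6] amounts to connected-component labeling on the depth map followed by size filtering with a distance-dependent cutoff. The following is a minimal Python sketch of that idea under stated assumptions: the depth map is a dense 2-D array in meters with 0 marking pixels without valid data, and the function name `remove_artifacts`, the single near/far threshold split, and all numeric defaults are illustrative choices, not values from this disclosure.

```python
from collections import deque

import numpy as np


def remove_artifacts(depth,
                     depth_continuity_tol=0.05,
                     near_far_boundary=2.0,
                     near_threshold=200,
                     far_threshold=50):
    """Return a copy of `depth` with small segments invalidated (set to 0).

    depth                : 2-D float array of distances in meters, 0 = no data.
    depth_continuity_tol : max difference (m) for neighboring pixels to count
                           as depth-continuous and share a label ([A-3]).
    near_far_boundary    : segments with mean depth below this value (m) use
                           near_threshold; others use far_threshold ([A-5]).
    """
    h, w = depth.shape
    labels = np.full((h, w), -1, dtype=np.int32)  # -1 = not yet labeled
    filtered = depth.copy()
    next_label = 0

    for sy in range(h):
        for sx in range(w):
            if depth[sy, sx] <= 0 or labels[sy, sx] != -1:
                continue
            # Flood-fill one segment: a 4-neighbor joins when its depth is
            # continuous with the pixel it was reached from ([A-2], [A-3]).
            queue = deque([(sy, sx)])
            labels[sy, sx] = next_label
            pixels = []
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] == -1
                            and depth[ny, nx] > 0
                            and abs(depth[ny, nx] - depth[y, x]) <= depth_continuity_tol):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            # Component object for this label: pixel count and mean depth ([A-4]).
            count = len(pixels)
            mean_depth = float(np.mean([depth[p] for p in pixels]))
            # Distance-dependent size cutoff ([A-5], [A-6]).
            cutoff = near_threshold if mean_depth < near_far_boundary else far_threshold
            if count <= cutoff:  # at or below the threshold: invalidate ([A-1])
                for p in pixels:
                    filtered[p] = 0.0
            next_label += 1
    return filtered
```

The near/far split in [A-6] reflects projection geometry: a real object close to the sensor occupies many pixels, so a small nearby segment is likely an artifact such as a cluster of flying pixels, whereas a distant object may legitimately occupy only a few pixels and should survive a smaller cutoff.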
≪B. Electronic equipment≫
[B-1] An electronic device having a distance measuring device comprising:
a light detection unit that receives light from a subject;
a depth calculation unit that calculates depth information of the subject based on an output of the light detection unit; and
an artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, validates segments in which the number of pixels exceeds a predetermined threshold, and invalidates segments in which the number of pixels is at or below the predetermined threshold.
[B-2] The electronic device according to [B-1] above, wherein the artifact removal unit labels each segment and creates a component object for each label.
[B-3] The electronic device according to [B-2] above, wherein the artifact removal unit assigns a different label to each segment by attaching the same label to neighboring pixels whose depth information is continuous.
[B-4] The electronic device according to [B-2] or [B-3] above, wherein the component object is the number of pixels and the average depth value of the segment.
[B-5] The electronic device according to any one of [B-1] to [B-4] above, wherein the artifact removal unit changes the predetermined threshold according to the distance of the segment from the distance measuring device.
[B-6] The electronic device according to [B-5] above, wherein the artifact removal unit sets a larger threshold for segments at a relatively short distance from the distance measuring device and a smaller threshold for segments at a relatively long distance.
1 ... distance measuring device, 10 ... subject (measurement object), 20 ... light source, 30 ... light detection unit, 40 ... AE control unit, 41 ... next-frame light emission/exposure condition calculation unit, 42 ... next-frame light emission/exposure control unit, 50 ... distance measuring unit, 51 ... distance image calculation unit, 61 ... RGB camera, 62 ... object recognition unit, 63 ... parameter control unit
Claims (8)
- A distance measuring device comprising:
a light detection unit that receives light from a subject;
a depth calculation unit that calculates depth information of the subject based on an output of the light detection unit; and
an artifact removal unit that divides an image into segments based on the depth information calculated by the depth calculation unit, validates segments in which the number of pixels exceeds a predetermined threshold, and invalidates segments in which the number of pixels is at or below the predetermined threshold.
- The distance measuring device according to claim 1, wherein the artifact removal unit labels each segment and creates a component object for each label.
- The distance measuring device according to claim 2, wherein the artifact removal unit assigns a different label to each segment by attaching the same label to neighboring pixels whose depth information is continuous.
- The distance measuring device according to claim 2, wherein the component object is the number of pixels and the average depth value of the segment.
- The distance measuring device according to claim 1, wherein the artifact removal unit changes the predetermined threshold according to the distance of the segment from the distance measuring device.
- The distance measuring device according to claim 5, wherein the artifact removal unit sets a larger threshold for segments at a relatively short distance from the distance measuring device and a smaller threshold for segments at a relatively long distance.
- A method of controlling a distance measuring device including a light detection unit that receives light from a subject and a depth calculation unit that calculates depth information of the subject based on an output of the light detection unit, the method comprising: dividing an image into segments based on the depth information; validating segments in which the number of pixels exceeds a predetermined threshold; and invalidating segments in which the number of pixels is at or below the predetermined threshold.
- An electronic device having a distance measuring device comprising:
a light detection unit that receives light from a subject;
a depth calculation unit that calculates depth information of the subject based on an output of the light detection unit; and
an artifact removal unit that divides an image into segments based on the depth information, validates segments in which the number of pixels exceeds a predetermined threshold, and invalidates segments in which the number of pixels is at or below the predetermined threshold.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/753,870 (US20220373682A1) | 2019-09-25 | 2020-07-20 | Distance measurement device, method of controlling distance measurement device, and electronic apparatus |

Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2019-173696 | 2019-09-25 | | |
| JP2019173696A (JP2021050988A) | 2019-09-25 | 2019-09-25 | Range finder and electronic equipment |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2021059699A1 (en) | 2021-04-01 |

Family
ID=75157671

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2020/027983 (WO2021059699A1) | Distance measurement device, distance measurement device control method, and electronic device | 2019-09-25 | 2020-07-20 |

Country Status (3)

| Country | Link |
|---|---|
| US (1) | US20220373682A1 (en) |
| JP (1) | JP2021050988A (en) |
| WO (1) | WO2021059699A1 (en) |

Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPWO2022202325A1 | 2021-03-25 | 2022-09-29 | | |
Citations (8)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10285582A * | 1997-04-04 | 1998-10-23 | Fuji Heavy Ind Ltd | Vehicle outside monitoring device |
| JP2001273494A * | 2000-03-27 | 2001-10-05 | Honda Motor Co Ltd | Object recognizing device |
| JP2008064628A * | 2006-09-07 | 2008-03-21 | Fuji Heavy Ind Ltd | Object detector and detecting method |
| JP2011128756A * | 2009-12-16 | 2011-06-30 | Fuji Heavy Ind Ltd | Object detection device |
| US20140049767A1 * | 2012-08-15 | 2014-02-20 | Microsoft Corporation | Methods and systems for geometric phase unwrapping in time of flight systems |
| JP2014056494A * | 2012-09-13 | 2014-03-27 | Omron Corp | Image processor, object detection method, and object detection program |
| JP2017219385A * | 2016-06-06 | 2017-12-14 | Denso IT Laboratory, Inc. | Object detector, object detection system, object detection method, and program |
| US20180084240A1 * | 2016-09-16 | 2018-03-22 | Qualcomm Incorporated | Systems and methods for improved depth sensing |
Family Cites Families (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPWO2010021090A1 * | 2008-08-20 | 2012-01-26 | Panasonic Corporation | Distance estimation device, distance estimation method, program, integrated circuit, and camera |
| JP5743390B2 * | 2009-09-15 | 2015-07-01 | Honda Motor Co., Ltd. | Ranging device and ranging method |
| US9779276B2 * | 2014-10-10 | 2017-10-03 | Hand Held Products, Inc. | Depth sensor based auto-focus system for an indicia scanner |
| JP6814053B2 * | 2017-01-19 | 2021-01-13 | Hitachi-LG Data Storage, Inc. | Object position detector |

Application timeline:
- 2019-09-25: JP application JP2019173696A, published as JP2021050988A (status: active, Pending)
- 2020-07-20: PCT application PCT/JP2020/027983, published as WO2021059699A1 (status: active, Application Filing)
- 2020-07-20: US application US17/753,870, published as US20220373682A1 (status: active, Pending)
Also Published As

| Publication number | Publication date |
|---|---|
| US20220373682A1 (en) | 2022-11-24 |
| JP2021050988A (en) | 2021-04-01 |
Similar Documents

| Publication | Title |
|---|---|
| US10412349B2 (en) | Image sensor including phase detection pixel |
| JP5979500B2 (en) | Stereo imaging device |
| US10182190B2 (en) | Light detecting apparatus, image capturing apparatus and image sensor |
| CN111758047B (en) | Single chip RGB-D camera |
| WO2017150246A1 (en) | Imaging device and solid-state imaging element used in same |
| US8614755B2 (en) | Optical device and signal processor |
| US10277827B2 (en) | Imaging apparatus and electronic apparatus |
| SE514859C2 (en) | Method and apparatus for examining objects on a substrate by taking pictures of the substrate and analyzing them |
| US20210041539A1 (en) | Method and apparatus for determining malfunction, and sensor system |
| JP2007081806A (en) | Image sensing system |
| WO2020170969A1 (en) | Ranging device and ranging device controlling method, and electronic device |
| CN112513676B (en) | Depth acquisition device, depth acquisition method, and recording medium |
| US9910258B2 (en) | Method for simultaneous capture of image data at multiple depths of a sample |
| JP2004126574A (en) | Focusing of electronic imaging apparatus |
| CN114424522B (en) | Image processing device, electronic apparatus, image processing method, and program |
| JP7281775B2 (en) | Depth acquisition device, depth acquisition method and program |
| KR20150135431A (en) | High-speed image capture method and high-speed image capture device |
| WO2021059698A1 (en) | Ranging device, method for controlling ranging device, and electronic apparatus |
| US20180158208A1 (en) | Methods and apparatus for single-chip multispectral object detection |
| WO2021059699A1 (en) | Distance measurement device, distance measurement device control method, and electronic device |
| US20160063307A1 (en) | Image acquisition device and control method therefor |
| JP7237450B2 (en) | Image processing device, image processing method, program, storage medium, and imaging device |
| CN113597567A (en) | Distance image acquisition method and distance detection device |
| WO2021084891A1 (en) | Movement amount estimation device, movement amount estimation method, movement amount estimation program, and movement amount estimation system |
| JP2001312690A (en) | Decision of exposure effective to image forming device |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20870093; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20870093; Country of ref document: EP; Kind code of ref document: A1 |