WO2022168500A1 - Distance measuring device, control method therefor, and distance measuring system - Google Patents
Distance measuring device, control method therefor, and distance measuring system
- Publication number
- WO2022168500A1 (PCT/JP2021/048507; JP2021048507W)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
- G01C3/02—Details
- G01C3/06—Use of electric means to obtain final indication
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
Definitions
- The present technology relates to a distance measuring device, a control method therefor, and a distance measuring system.
- In recent years, distance measuring devices that measure distance by the ToF (Time-of-Flight) method (hereinafter also referred to as depth cameras) have been attracting attention.
- Some depth cameras use SPADs (Single Photon Avalanche Diodes) in their light-receiving pixels. In a SPAD, avalanche amplification occurs when a single photon enters a PN junction region with a high electric field while a voltage greater than the breakdown voltage is applied. By detecting the timing at which current instantaneously flows due to avalanche amplification, the arrival timing of the light can be detected with high accuracy, and the distance can be measured (see, for example, Patent Document 1).
- However, the accuracy may deteriorate depending on the pixel positions at which the sparse depth information is sampled.
- This technology has been developed in view of this situation, and enables high-resolution depth images to be generated with high accuracy from sparse depth information.
- A distance measuring device according to one aspect of the present technology includes: a light receiving unit having a plurality of pixels that receive reflected light of irradiation light reflected by an object; a high-resolution processing unit that generates a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and a position determination unit that determines, based on edge information of the high-resolution depth image, active pixels that perform a light-receiving operation in the light receiving unit.
- A control method according to one aspect of the present technology is a method for a distance measuring device including a light receiving unit having a plurality of pixels that receive reflected light of irradiation light reflected by an object, in which a high-resolution depth image is generated from a sparse depth image acquired by the light receiving unit, and active pixels that perform a light-receiving operation in the light receiving unit are determined based on edge information of the high-resolution depth image.
- A distance measurement system according to one aspect of the present technology includes an illumination device that emits irradiation light and a distance measuring device that receives reflected light of the irradiation light reflected by an object, the distance measuring device including: a light receiving unit having a plurality of pixels that receive the reflected light; a high-resolution processing unit that generates a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and a position determination unit that determines, based on edge information of the high-resolution depth image, active pixels that perform a light-receiving operation in the light receiving unit.
- In one aspect of the present technology, a high-resolution depth image is generated from a sparse depth image acquired by a light receiving unit having a plurality of pixels that receive reflected light reflected by an object, and active pixels that perform a light-receiving operation in the light receiving unit are determined based on edge information of the high-resolution depth image.
- the ranging device and ranging system may be independent devices or may be modules incorporated into other devices.
- FIG. 1 is a block diagram showing a configuration example of an embodiment of a distance measuring system of the present disclosure.
- FIG. 2 is a diagram explaining the configuration of the illumination device.
- FIG. 3 is a diagram showing an example of an irradiation pattern emitted by the illumination device.
- FIG. 4 is a block diagram showing a more detailed configuration example of the distance measuring device of FIG. 1.
- FIG. 5 is a diagram explaining an example of the arrangement of active pixels.
- FIG. 6 is a diagram showing examples of a color image and a high-resolution depth image.
- FIG. 7 is a diagram explaining the processing of the edge information detection unit and the adaptive sampling unit.
- FIG. 8 is a flowchart explaining distance measurement processing by the distance measurement system of FIG. 1.
- FIG. 9 is a detailed flowchart of high-resolution depth image generation processing executed as step S4 in FIG. 8.
- FIG. 10 is a diagram illustrating processing for determining whether a sampling position is on an edge.
- FIG. 11 is a diagram explaining a method of determining the moving direction and moving amount.
- FIG. 1 is a block diagram showing a configuration example of an embodiment of a ranging system of the present disclosure.
- the ranging system 1 in FIG. 1 is a system that measures and outputs the distance to an object using, for example, the ToF (Time-of-Flight) method.
- the distance measurement system 1 performs distance measurement by the direct ToF method among the ToF methods.
- the direct ToF method is a method in which the distance to an object is calculated by directly measuring the flight time from the timing when the irradiation light is emitted to the timing when the reflected light is received.
- The distance measurement system 1 includes an illumination device 11 and a distance measuring device 12, and measures the distance to a predetermined object 13 as a subject. More specifically, when a distance measurement instruction is supplied from a higher-level host device, the distance measurement system 1 repeats the emission of irradiation light and the reception of the reflected light a predetermined number of times (for example, several times to several hundred times). Based on these repeated emissions and receptions, the distance measurement system 1 generates a histogram of the flight time of the irradiation light, and calculates the distance to the object 13 from the flight time corresponding to the peak of the histogram.
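The peak-of-histogram calculation described above can be sketched as follows. This is a minimal illustrative example, not the patent's implementation; the 1 ns TDC bin width, histogram size, and count values are assumptions.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
BIN_WIDTH_S = 1e-9  # assumed TDC bin width: 1 ns per histogram bin

def distance_from_histogram(counts):
    """Return the distance (m) implied by the peak bin of a ToF histogram."""
    peak_bin = int(np.argmax(counts))
    round_trip = peak_bin * BIN_WIDTH_S   # flight time out and back
    return C * round_trip / 2.0           # halve for the one-way distance

# Repeated shots accumulate counts; here bin 20 holds the reflected-light peak
hist = np.zeros(64)
hist[20] = 80    # signal peak from the object
hist[5] = 10     # ambient-light noise
d = distance_from_histogram(hist)   # a 20 ns round trip, roughly 3 m
```

Accumulating many shots makes the true reflection peak stand out above ambient photon noise, which is why the emission/reception cycle is repeated.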
- the lighting device 11 irradiates a predetermined object 13 with irradiation light based on the light emission control signal and the light emission trigger supplied from the distance measuring device 12 .
- irradiation light for example, infrared light (IR light) having a wavelength in the range of approximately 850 nm to 940 nm is used.
- the illumination device 11 includes a light emission controller 31 , a light emitter 32 and a diffractive optical element (DOE) 33 .
- The distance measuring device 12 determines the light emission conditions and, based on the determined conditions, outputs the light emission control signal and the light emission trigger to the lighting device 11, causing the irradiation light to be emitted.
- the light emission conditions determined here include, for example, various types of information such as the irradiation method, irradiation area, and irradiation pattern.
- the distance measuring device 12 receives light reflected by the object 13 to calculate the distance to the object 13, and outputs the result as a depth image to a higher-level host device.
- the distance measuring device 12 includes a control section 51 , a light receiving section 52 , a signal processing section 53 and an input/output section 54 .
- This ranging system 1 is used with an RGB camera (not shown) that captures a subject including the object 13 and the like.
- the distance measurement system 1 sets the same range as the imaging range of the RGB camera, which is an external camera, as the distance measurement range, and generates the distance information of the subject captured by the RGB camera.
- Since the resolution of the light-receiving unit 52 of the distance measuring device 12 is lower than the resolution of the color image generated by the RGB camera, the signal processing unit 53 of the distance measuring device 12 generates and outputs a high-resolution depth image, which is a depth image converted to the same resolution as the color image.
- The light emission control unit 31 of the illumination device 11 includes, for example, a microprocessor, an LSI, a laser driver, and the like, and controls the light emitting unit 32 and the diffractive optical element 33 based on a light emission control signal supplied from the control unit 51 of the distance measuring device 12. Further, the light emission control unit 31 causes the irradiation light to be emitted according to a light emission trigger supplied from the control unit 51 of the distance measuring device 12.
- the light emission trigger is, for example, a pulse waveform composed of two values of "High (1)" and "Low (0)", and "High" represents the timing of emitting the irradiation light.
- The light emitting unit 32 is composed of, for example, a VCSEL array in which a plurality of VCSELs (Vertical Cavity Surface Emitting Lasers) serving as light sources are arranged in a plane, and each VCSEL turns light emission on and off according to the light emission trigger.
- The light-emitting unit of the VCSELs (the size of the light source) and the positions of the VCSELs to emit light (light emission positions) can be varied under the control of the light emission control unit 31.
- The diffractive optical element 33 replicates, in a direction perpendicular to the optical axis direction, the light emission pattern of a predetermined area emitted from the light emitting unit 32 and passed through a projection lens (not shown), thereby expanding the irradiation area.
- A variable focus lens, a liquid crystal element, or the like can be used to switch between spot irradiation and surface irradiation, or to switch the emission pattern (irradiation area) to a specific pattern.
- a photonic crystal surface emitting laser may be used to change the emission pattern with which the object 13 is irradiated to a specific pattern.
- For photonic crystal surface emitting lasers, see, for example, "Feature: Next-generation laser light sources! Photonic crystal lasers that have come this far: Advances in large-area coherent photonic crystal lasers", De Zoysa Menaka, Masahiro Yoshida, Yoshinori Tanaka, Susumu Noda, OPTRONICS (2017) No. 5.
- FIG. 3 shows an example of an irradiation pattern with which the lighting device 11 irradiates the object 13 based on the light emission conditions supplied from the control unit 51.
- As irradiation methods, the illumination device 11 can select between surface irradiation, which irradiates a predetermined irradiation area with uniform emission intensity within a predetermined luminance range, and spot irradiation, which uses a plurality of spots (circles) arranged at predetermined intervals as the irradiation area. Surface irradiation enables measurement (light reception) at high resolution, but the emitted light is diffused, resulting in low emission intensity and a short measurable range. Spot irradiation, on the other hand, has high emission intensity, making it possible to obtain noise-robust (highly reliable) depth values, but the resolution is low.
- the lighting device 11 can irradiate a limited irradiation area and change the emission intensity. By emitting light in the required area and emission intensity, it is possible to reduce power consumption and avoid saturation of the light-receiving part at short distances. Reducing the emission intensity also contributes to eye-safety.
- The illumination device 11 can also, instead of illuminating the irradiation area uniformly, illuminate a specific area (e.g., the central area) at high density and other areas (e.g., the peripheral area) at low density.
- the irradiation pattern can be switched to a specific pattern for irradiation.
- the control unit 51 of the distance measuring device 12 is composed of, for example, an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), a microprocessor, and the like.
- When the control unit 51 acquires a distance measurement instruction from a higher-level host device via the input/output unit 54, it determines the light emission conditions and supplies a light emission control signal and a light emission trigger corresponding to the determined conditions to the light emission control unit 31 of the illumination device 11.
- the control unit 51 also supplies the generated light emission trigger to the signal processing unit 53, and determines which pixel of the light receiving unit 52 is to be the active pixel in accordance with the determined light emission condition. Active pixels are pixels that detect the incidence of photons. Pixels that do not detect incoming photons are referred to as inactive pixels.
- the light receiving unit 52 has a pixel array in which pixels for detecting incident photons are two-dimensionally arranged in a matrix.
- Each pixel of the light receiving section 52 has a SPAD (Single Photon Avalanche Diode) as a photoelectric conversion element.
- a SPAD instantaneously detects a single photon by multiplying carriers generated by photoelectric conversion in a high electric field PN junction region (multiplication region).
- Each active pixel of the light receiving section 52 outputs a detection signal indicating that the photon is detected to the signal processing section 53 when detecting the incident photon.
- Based on the emission of the irradiation light and the reception of the reflected light, which are repeated a predetermined number of times (for example, several times to several hundred times), the signal processing unit 53 generates a histogram of the time (count value) from the emission of the irradiation light until the reflected light is received. The signal processing unit 53 then detects the peak of the generated histogram to determine the time until the light emitted from the illumination device 11 is reflected by the object 13 and returns, and obtains the distance to the object 13 based on the determined time and the speed of light.
- the signal processing unit 53 is composed of, for example, an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), a logic circuit, and the like.
- the signal processing unit 53 performs high resolution processing to generate a high resolution depth image having the same resolution as the color image from the low resolution depth image generated based on the light receiving result of the light receiving unit 52 .
- the generated high-resolution depth image is output to the subsequent device via the input/output unit 54 .
- the input/output unit 54 supplies the control unit 51 with a distance measurement instruction supplied from a higher-level host device. Further, the input/output unit 54 outputs the high-resolution depth image supplied from the signal processing unit 53 to a higher host device.
- The distance measuring device 12 has two operation modes: a distance measurement mode and a luminance observation mode.
- In the distance measurement mode, some of the plurality of pixels of the light receiving unit 52 are set as active pixels and the remaining pixels are set as inactive pixels, and a high-resolution depth image generated from the low-resolution depth image produced by the active pixels is output.
- the luminance observation mode is a mode in which all pixels of the light-receiving unit 52 are set as active pixels, and a luminance image is generated by counting the number of photons input in a certain period as a luminance value (pixel value).
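The luminance observation mode amounts to per-pixel photon counting over a fixed window. The following is a hedged sketch of that idea; the event representation and image size are illustrative assumptions, not the device's actual readout path.

```python
import numpy as np

def luminance_image(photon_events, shape):
    """Count photon detections per pixel over the observation window."""
    img = np.zeros(shape, dtype=np.uint32)
    for row, col in photon_events:
        img[row, col] += 1      # the count becomes the pixel's luminance value
    return img

# Hypothetical detections within one observation window
events = [(0, 0), (0, 0), (1, 2)]
img = luminance_image(events, (2, 3))
```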
- FIG. 4 is a block diagram showing a more detailed configuration example of the distance measuring device 12 when the operation mode is the distance measuring mode. Note that the diffractive optical element 33 is not shown in FIG.
- the distance measuring device 12 has a control section 51 , a light receiving section 52 , a signal processing section 53 , an input/output section 54 , a pixel driving section 55 and a multiplexer 56 .
- the control unit 51 has a position determining unit 61 and a sampling pattern table 62.
- The signal processing unit 53 includes time measurement units 71-1 to 71-N, histogram generation units 72-1 to 72-N, peak detection units 73-1 to 73-N, a distance calculation unit 74, an edge information detection unit 75, and an adaptive sampling unit 76.
- The signal processing unit 53 thus includes N (N > 1) each of the time measurement units 71, histogram generation units 72, and peak detection units 73, and is configured to generate N histograms. If N is equal to the total number of pixels of the light receiving unit 52, a histogram can be generated for each pixel.
- The position determination unit 61 determines the light emission positions of the VCSEL array serving as the light emitting unit 32 of the illumination device 11 and the light receiving positions of the pixel array of the light receiving unit 52. That is, the position determination unit 61 determines light emission conditions such as surface irradiation or spot irradiation, light emission area, and irradiation pattern, generates a light emission control signal indicating which VCSELs in the VCSEL array are to emit light based on the determined conditions, and supplies it to the light emission control unit 31 of the illumination device 11. In addition, the position determination unit 61 determines which pixels in the pixel array should be active pixels according to the determined light emission conditions, based on the sampling pattern table 62 stored in an internal memory. The sampling pattern table 62 stores position information indicating the pixel position of each pixel in the pixel array of the light receiving unit 52.
- FIG. 5 shows an example of determining active pixels based on light emission conditions.
- the position determining unit 61 determines pixel positions of active pixels and histogram generation units according to the size and position of the light source (VCSEL) emitted by the light emitting unit 32 . For example, under certain light emission conditions, it is assumed that spotlights emitted from the illumination device 11 are incident on the pixel array of the light receiving section 52 as in areas 111A and 111B. In this case, the position determining unit 61 sets each pixel of the 2 ⁇ 3 pixel regions 101A and 101B corresponding to the regions 111A and 111B as active pixels, and generates one histogram for each of the pixel regions 101A and 101B. Decide on units.
- Similarly, when the spot light is assumed to be incident on other areas, the position determination unit 61 sets each pixel of the 2 × 3 pixel areas 102A and 102B as active pixels, and determines each of the pixel areas 102A and 102B as one histogram generation unit.
- In other cases, the position determination unit 61 sets each pixel of the 3 × 4 pixel areas 103A and 103B as active pixels, and determines each of the pixel areas 103A and 103B as one histogram generation unit. Further, when the spot light is assumed to be incident on areas 114A and 114B, the position determination unit 61 sets each pixel of the 3 × 4 pixel areas 104A and 104B as active pixels, and determines each of the pixel areas 104A and 104B as one histogram generation unit.
- a plurality of active pixels that are used as one histogram generation unit are hereinafter referred to as macropixels.
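The expansion of a spot's pixel region into a macropixel can be sketched as below. The top-left corner and the 2 × 3 block size mirror the example of Fig. 5; the coordinates themselves are hypothetical.

```python
def macropixel(top, left, rows=2, cols=3):
    """Active-pixel coordinates forming one macropixel (one histogram unit)."""
    return {(top + r, left + c) for r in range(rows) for c in range(cols)}

# Assumed top-left corner of the pixel region hit by one spot
mp = macropixel(4, 8)   # six active pixels share one histogram
```

Grouping the pixels of one spot into a single histogram unit pools their photon detections, which improves the signal-to-noise of the resulting histogram peak.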
- the position determining unit 61 supplies the active pixel control information specifying the determined active pixels to the pixel driving unit 55 .
- the position determining unit 61 also supplies the multiplexer 56 with histogram generation control information that specifies the unit of histogram generation.
- the position determination unit 61 is supplied with sampling position information from the signal processing unit 53 .
- the sampling position information supplied from the signal processing section 53 is information indicating the optimum sampling position determined by the signal processing section 53 based on the high-resolution depth image.
- the position determination unit 61 determines whether it is necessary to change the active pixel based on the sampling position information supplied from the signal processing unit 53 and the sampling pattern table 62. If it is determined that the active pixel needs to be changed, the position determining unit 61 changes the active pixel to an inactive pixel and determines the other inactive pixels to be new active pixels. Active pixel control information based on the changed active pixels is supplied to the pixel driving section 55 , and histogram generation control information is supplied to the multiplexer 56 .
- the pixel driving section 55 controls active pixels and non-active pixels based on the active pixel control information supplied from the position determining section 61 . In other words, the pixel driving section 55 controls ON/OFF of the light receiving operation of each pixel of the light receiving section 52 .
- When a photon is detected in an active pixel, a detection signal indicating the detection is output as a pixel signal to the signal processing unit 53 via the multiplexer 56. The light emission trigger is also supplied from the control unit 51 to the signal processing unit 53.
- The time measurement unit 71-i generates a count value corresponding to the time from when the light emitting unit 32 emits the irradiation light until the reflected light is received, and supplies the generated count value to the corresponding histogram generation unit 72-i.
- The time measurement unit 71-i is also called a TDC (Time to Digital Converter).
- the histogram generation unit 72i creates a histogram of count values based on the count values supplied from the time measurement unit 71i .
- the generated histogram data is supplied to the corresponding peak detector 73i .
- the peak detector 73i detects the peak of the histogram based on the histogram data supplied from the histogram generator 72i .
- the peak detector 73 i supplies the count value corresponding to the detected histogram peak to the distance calculator 74 .
- The distance calculation unit 74 calculates the time of flight of the irradiation light based on the count values corresponding to the peaks of the histograms supplied in units of macropixels from the peak detection units 73-1 to 73-N.
- The distance calculation unit 74 calculates the distance to the subject from the calculated flight time and generates a depth image in which the calculated distance is stored as a pixel value. The resolution of the depth image generated here is lower than the resolution of the color image generated by the RGB camera; even if N equals the total number of pixels of the light receiving unit 52, it is a sparse depth image whose resolution is lower than that of the color image.
- the distance calculation unit 74 further performs high-resolution processing to increase the resolution of the generated sparse depth image to the same resolution as the color image generated by the RGB camera. That is, the distance calculator 74 includes a high resolution processor that generates a high resolution depth image from a sparse depth image. The generated high-resolution depth image is output to the outside via the input/output unit 54 and supplied to the edge information detection unit 75 . High-resolution processing can be realized by applying a known technique using DNN (Deep Neural Network), for example.
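The document realizes the high-resolution processing with a DNN; as a minimal, non-learned stand-in, the sketch below fills a dense grid from sparse depth samples by nearest-neighbor assignment. All shapes and sample values are illustrative assumptions.

```python
import numpy as np

def upsample_nearest(sparse_samples, out_shape):
    """sparse_samples: {(row, col): depth} given at output-resolution coordinates."""
    coords = np.array(list(sparse_samples.keys()), dtype=float)
    depths = np.array(list(sparse_samples.values()))
    rr, cc = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    grid = np.stack([rr.ravel(), cc.ravel()], axis=1).astype(float)
    # each output pixel takes the depth of its nearest sparse sample
    d2 = ((grid[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    return depths[d2.argmin(axis=1)].reshape(out_shape)

dense = upsample_nearest({(0, 0): 1.0, (3, 3): 2.0}, (4, 4))
```

A learned model replaces this nearest-neighbor rule with a mapping that preserves object boundaries, which is precisely where naive interpolation (and hence the sampling positions near edges) causes accuracy loss.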
- the edge information detection unit 75 detects edge information indicating the boundary of the object based on the high resolution depth image supplied from the distance calculation unit 74 and supplies the detection result to the adaptive sampling unit 76 .
- Technologies for detecting edge information from depth images include, for example, "Holistically-Nested Edge Detection", Saining Xie, Zhuowen Tu (https://arxiv.org/pdf/1504.06375.pdf), which uses a DNN.
- The adaptive sampling unit 76 determines, for all sampling positions, whether or not the position currently set as a macropixel (hereinafter also referred to as the sampling position) is on an edge of the object. The positions set as macropixels are grasped by acquiring the active pixel control information generated by the position determination unit 61.
- For a sampling position determined to be on an edge of the object, the adaptive sampling unit 76 calculates the moving direction and moving amount for moving the sampling position away from the edge.
- The adaptive sampling unit 76 replaces the position information of sampling positions determined to be on an edge of the object with the position information of new sampling positions moved by the calculated moving direction and moving amount, and supplies the sampling position information for all sampling positions to the position determination unit 61 of the control unit 51.
- Alternatively, the adaptive sampling unit 76 may supply only the sampling position information of the new sampling positions that need to be changed to the position determination unit 61.
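The edge-avoidance step above can be sketched as follows. This is a hedged simplification: where the document computes a moving direction and amount (described later in the text), this sketch simply snaps an on-edge sampling position to the nearest non-edge pixel.

```python
import numpy as np

def adjust_sampling(pos, edge_map):
    """Move a sampling position off an edge, to the nearest non-edge pixel."""
    if not edge_map[pos]:
        return pos                           # already off the edge: keep it
    candidates = np.argwhere(~edge_map)      # all non-edge positions
    d2 = ((candidates - np.array(pos)) ** 2).sum(axis=1)
    return tuple(candidates[d2.argmin()])    # nearest one becomes the new position

edges = np.zeros((5, 5), dtype=bool)
edges[2, :] = True                           # a horizontal object boundary on row 2
new_pos = adjust_sampling((2, 2), edges)     # shifted to an adjacent row
```

Moving macropixels off depth discontinuities keeps each histogram from mixing returns at two different distances, which is what degrades the depth value at edge-straddling sampling positions.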
- The processing of the edge information detection unit 75 and the adaptive sampling unit 76 will be described with reference to FIGS. 6 and 7.
- FIG. 6 shows an example of a color image generated by an RGB camera and a high-resolution depth image that has been increased to the same resolution as the color image.
- A high-resolution depth image is an image in which the distance to an object is represented by gray values of a predetermined number of bits (for example, 10 bits). In the high-resolution depth image of FIG. 6, the distance to each object is indicated by its gray value.
- the edge information detection unit 75 detects the boundary of the gray value corresponding to the distance as edge information.
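The n-bit gray-value encoding of depth mentioned above can be sketched as a simple quantization. The 10-bit width follows the text's example; the 10 m full-scale range is an assumption for illustration.

```python
def depth_to_gray(depth_m, max_depth_m=10.0, bits=10):
    """Quantize a depth value into an n-bit gray level (10 bits assumed)."""
    levels = (1 << bits) - 1                      # 1023 for 10 bits
    clamped = min(max(depth_m, 0.0), max_depth_m)
    return round(clamped / max_depth_m * levels)

g = depth_to_gray(2.5)   # a 2.5 m depth under the assumed 10 m full scale
```

Because depth maps to gray level, a boundary between gray values in this image corresponds to a depth discontinuity, which is what the edge information detection unit extracts.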
- Each point MP, shown at regular intervals superimposed on the high-resolution depth image as a black circle surrounded by white, indicates a current sampling position in the light receiving unit 52, that is, the position of a macropixel.
- Region 121 includes a first sampling position 141 and a second sampling position 142 as current sampling positions.
- FIG. 7 is an enlarged view of the region 121 of the high-resolution depth image in FIG.
- adaptive sampling unit 76 determines that first sampling location 141 is on edge 151 and second sampling location 142 is not on any edge.
- The adaptive sampling unit 76 calculates a new sampling position 141' moved away from the edge of the object, and supplies sampling position information including the position information of the new sampling position 141' to the position determination unit 61 of the control unit 51. The calculation of the new sampling position 141' will be described later.
- In step S1, the distance measuring device 12 determines the light emission conditions and, based on the determined conditions, outputs to the illumination device 11 a light emission control signal specifying the VCSELs to emit light and controlling their timing, together with a light emission trigger. Further, the position determination unit 61 of the distance measuring device 12 supplies active pixel control information specifying the active pixels to the pixel driving unit 55 based on the determined light emission conditions, and supplies histogram generation control information specifying the histogram generation units to the multiplexer 56.
- step S2 the illumination device 11 starts emitting irradiation light. More specifically, the light emission control section 31 controls the diffractive optical element 33 based on the light emission control signal from the distance measuring device 12, and turns on/off the predetermined VCSEL of the light emitting section 32 based on the light emission trigger.
- step S3 the distance measuring device 12 starts the light receiving operation. More specifically, the pixel driving section 55 drives predetermined pixels as active pixels based on the active pixel control information from the position determining section 61. When a photon is detected in an active pixel, a detection signal indicating the detection is output as a pixel signal to the signal processing section 53 via the multiplexer 56.
- step S4 the distance measuring device 12 executes high resolution depth image generation processing for generating a high resolution depth image.
- the distance measuring device 12 outputs the high-resolution depth image obtained as a result of the high-resolution depth image generation processing to the higher-level host device, and ends the distance measurement processing.
- step S12 the distance calculation unit 74 performs high-resolution processing to generate a high-resolution depth image having the same resolution as the color image generated by the RGB camera from the sparse depth image.
- the generated high-resolution depth image is output to the outside via the input/output unit 54 and supplied to the edge information detection unit 75 .
- step S13 the edge information detection unit 75 detects edge information indicating the boundary of the object based on the high-resolution depth image supplied from the distance calculation unit 74 and supplies the detection result to the adaptive sampling unit 76.
- step S14 the adaptive sampling unit 76 determines whether the sampling positions are on the edge based on the edge information of the object supplied from the edge information detection unit 75.
- FIG. 10 is a diagram explaining the process of determining whether the sampling positions are on the edge, for the first sampling position 141 and the second sampling position 142 of the region 121 shown in FIG. 7.
- the adaptive sampling unit 76 determines whether or not the sampling position to be determined is on the edge by determining whether or not an edge of the object exists within a range of a predetermined threshold r centered on that sampling position. The threshold r is determined in advance.
- FIG. 10A shows an example in which the predetermined threshold r is set as a fixed value. At both the first sampling position 141 and the second sampling position 142, the threshold r is set to the same value ra.
- the fixed threshold ra can be set, for example, to a value larger than the spot diameter when the lighting device 11 irradiates the subject with the irradiation light by spot irradiation.
- the fixed threshold ra can also be determined, for example, as a predetermined ratio (for example, 50%) of the interval from neighboring sampling positions. It may instead be determined as a predetermined ratio of the average interval of all sampling positions within the irradiation area, rather than of the interval from neighboring sampling positions.
- the fixed threshold ra can also be set to a value larger than the alignment error between the RGB camera and the distance measuring device 12.
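Putting the pieces together, the fixed-threshold on-edge test can be sketched in Python (an illustrative sketch, not the patent's implementation; the edge map layout, the function name, and the radius handling are assumptions):

```python
import numpy as np

def is_on_edge(edge_map: np.ndarray, x: int, y: int, r: float) -> bool:
    """Return True if any edge pixel lies within radius r of (x, y).

    edge_map: boolean array of shape (H, W), True where the edge
    detector marked an object boundary in the high-resolution depth image.
    """
    h, w = edge_map.shape
    # Restrict the search to the bounding box of the circle of radius r.
    x0, x1 = max(0, int(x - r)), min(w, int(x + r) + 1)
    y0, y1 = max(0, int(y - r)), min(h, int(y + r) + 1)
    ys, xs = np.nonzero(edge_map[y0:y1, x0:x1])
    if len(xs) == 0:
        return False
    # Squared distance from (x, y) to each edge pixel in the window.
    d2 = (xs + x0 - x) ** 2 + (ys + y0 - y) ** 2
    return bool(np.min(d2) <= r * r)
```

With this sketch, a sampling position is treated as "on the edge" exactly when an edge pixel falls inside the circle of radius r around it, matching FIG. 10A.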
- FIG. 10B shows a distance-variable example in which the predetermined threshold value r can change according to the distance (depth value).
- the threshold r is set based on the determination function of the depth value and the threshold r shown in B of FIG. 10.
- the threshold r is set so that the threshold r is inversely proportional to the depth value, that is, the threshold r increases as the distance to the object decreases.
- the depth value of the first sampling position 141 is smaller (the distance is closer) than the depth value of the second sampling position 142.
- the threshold rb at the first sampling position 141 in B of FIG. 10 is therefore set larger than the threshold rc at the second sampling position 142 (rb > rc).
- the first sampling position 141 is determined to be on the edge because an edge exists within the range of the predetermined threshold r, whereas the second sampling position 142 is determined not to be on an edge.
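The distance-variable threshold can be sketched as a clamped function inversely proportional to the depth value, so that nearer objects get a larger search radius; the constant k and the clamp limits r_min/r_max are illustrative assumptions, chosen only to show the shape of the mapping:

```python
def variable_threshold(depth: float, k: float = 2000.0,
                       r_min: float = 2.0, r_max: float = 20.0) -> float:
    """Threshold r that varies inversely with the depth value:
    smaller depth (nearer object) yields a larger radius.
    k, r_min, r_max are illustrative constants, not values from the patent.
    """
    return max(r_min, min(r_max, k / depth))
```

A nearer sampling position (small depth) thus receives a larger threshold than a farther one, reproducing the relation rb > rc of B of FIG. 10.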
- step S15 the adaptive sampling unit 76 calculates the movement direction and movement amount of the sampling position determined to be on the edge.
- A method of determining the movement direction and movement amount of a sampling position determined to be on the edge will be described with reference to FIGS. 11 to 13.
- In FIGS. 11 to 13, for the sake of simplicity, it is assumed that sampling positions are set in units of one pixel.
- points 202a to 202h surrounded by white circles represent active pixels, and each point 203 surrounded by gray circles represents an inactive pixel.
- the adaptive sampling unit 76 determines the center-of-gravity position using the depth values of a plurality of sampling points 202 in the vicinity of the sampling point 201 to be moved. For example, the center-of-gravity position is determined using the positions (x, y) and depth values d of the eight sampling points 202a to 202h adjacent to the sampling point 201, which are treated as the neighborhood area of the sampling point 201 to be moved. The depth values d of the eight sampling points 202a to 202h are obtained from the high-resolution depth image.
- a weight that is inversely proportional to the depth value is used so that the closer the distance, the greater the weight.
- As shown in FIG. 12, when the irradiation light hits the boundary of an object, the subject on the front side is not occluded from the light source and is thus suitable for measurement, but the subject on the back side is occluded from the light source, so accurate measurement is not possible. In addition, as depth information of a subject, the depth information of the near side is generally considered more important than the depth information of the far side.
- the weights of the sampling points 202a, 202d, 202f, and 202g on the front side are set to be greater than the weights of the sampling points 202b, 202c, 202e, and 202h on the back side.
- the adaptive sampling unit 76 calculates the center-of-gravity position using the positions (x, y) of the plurality of sampling points 202 in the neighboring area of the sampling point 201 to be moved and the depth value d.
- the adaptive sampling unit 76 determines the direction from the current position of the sampling point 201 to the calculated position of the center of gravity as the moving direction of the sampling position.
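The depth-weighted centroid and movement direction described above might look like the following sketch, using a weight of 1/d so that nearer (front-side) sampling points pull harder, as in FIG. 12; the function name and data layout are assumptions:

```python
import math

def movement_direction(current, neighbors):
    """Unit vector from the current sampling point toward the centroid of
    neighboring sampling points, weighted by 1/depth (nearer points weigh more).

    current:   (x, y) of the sampling point to move
    neighbors: list of (x, y, depth) tuples for nearby sampling points
    Returns None when the centroid coincides with the current point
    (the undetermined case of C of FIG. 13).
    """
    wsum = sum(1.0 / d for _, _, d in neighbors)
    cx = sum(x / d for x, _, d in neighbors) / wsum
    cy = sum(y / d for _, y, d in neighbors) / wsum
    dx, dy = cx - current[0], cy - current[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return None  # direction undefined
    return (dx / norm, dy / norm)
```

Because a nearer neighbor has a larger weight, the computed direction tends toward the front-side region, which suppresses the occlusion problem described above.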
- the adaptive sampling unit 76 determines the amount of movement of the sampling point 201 .
- the adaptive sampling unit 76 determines the amount of movement according to the spot diameter when the lighting device 11 irradiates irradiation light by spot irradiation. More specifically, a predetermined value larger than the spot diameter can be determined as the amount of movement. As a result, a position where the spot diameter does not overlap the edge can be set as a new sampling position.
- the amount of movement may be determined based on the alignment error between the RGB camera and the distance measuring device 12 .
- a predetermined value larger than the alignment error can be determined as the movement amount.
- when the irradiation method is surface irradiation, a predetermined value larger than the alignment error may be determined as the amount of movement.
- when the irradiation method is spot irradiation, a predetermined value larger than both the spot diameter and the alignment error may be determined as the amount of movement.
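The movement-amount rules above can be summarized in a small sketch (the margin value and function name are illustrative assumptions, not values from the patent):

```python
def movement_amount(irradiation: str, spot_diameter: float,
                    alignment_error: float, margin: float = 1.0) -> float:
    """Movement amount per the rules above: larger than the alignment error
    for surface irradiation; larger than both the spot diameter and the
    alignment error for spot irradiation. `margin` is an assumed safety margin
    that makes the result strictly larger than the relevant quantity."""
    if irradiation == "surface":
        return alignment_error + margin
    if irradiation == "spot":
        return max(spot_diameter, alignment_error) + margin
    raise ValueError(f"unknown irradiation method: {irradiation}")
```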
- a movement vector 211 in FIG. 11 indicates the movement direction and movement amount calculated for the sampling point 201 to be moved.
- FIG. 13A shows an example of the movement vector 211 of the sampling point 201 when the sampling point 201 to be moved is on the edge of two objects.
- FIG. 13B shows an example of the movement vector 211 of the sampling point 201 when the sampling point 201 to be moved is on the edge of three objects.
- FIG. 13C shows an example in which the movement vector 211 of the sampling point 201 cannot be determined because the sampling point 201 to be moved is on an elongated object and the position of the center of gravity overlaps with the current sampling point 201. In this way, it may happen that the movement vector 211 of the sampling point 201 cannot be determined.
- When the movement vector 211 of a sampling point 201 cannot be determined as shown in C of FIG. 13, the weight given to the depth value of that sampling point can be lowered when generating the high-resolution depth image. Alternatively, sampling points for which the movement vector 211 cannot be determined may simply be changed from active pixels to inactive pixels.
- step S15 of FIG. 9 the movement direction and movement amount of the sampling position determined to be on the edge are calculated as described above.
- the adaptive sampling unit 76 changes the position information of the sampling position determined to be on the edge of the object to the position information of the new sampling position moved by the calculated movement direction and movement amount. Then, the adaptive sampling section 76 supplies sampling position information of all sampling positions to the position determination section 61 of the control section 51 .
- step S16 the position determination unit 61 acquires the sampling position information of all sampling positions from the adaptive sampling unit 76. Then, the position determining unit 61 determines new active pixels corresponding to the sampling positions whose positions have been changed (new sampling positions), and sets active pixels that no longer require a light receiving operation due to the change of sampling positions as inactive pixels.
- the position determination unit 61 refers to the sampling pattern table in the internal memory and determines the pixel of the light receiving unit 52 closest to the new sampling position as a new active pixel. For example, if the new sampling position is outside the pixel array and there is no pixel of the light receiving unit 52 that is closest to the new sampling position, there is a possibility that no new active pixel will be set.
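The snapping of a new sampling position to the nearest pixel, including the out-of-array case where no new active pixel is set, can be sketched as follows (a simplification that ignores the sampling pattern table and the macro-pixel layout):

```python
def nearest_active_pixel(new_pos, width, height):
    """Snap a (possibly fractional) new sampling position to the nearest
    pixel of the light receiving unit; return None when the position falls
    outside the pixel array, in which case no new active pixel is set."""
    px, py = round(new_pos[0]), round(new_pos[1])
    if 0 <= px < width and 0 <= py < height:
        return (px, py)
    return None
```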
- step S17 the control unit 51 determines whether or not to end the distance measurement. For example, the control unit 51 determines to end the distance measurement when the high-resolution depth image has been generated and output a predetermined number of times. Further, for example, the control unit 51 may determine to end the distance measurement when the sampling position information of all sampling positions supplied from the adaptive sampling unit 76 contains no position information of a new sampling position, that is, when none of the active pixels is on the boundary of an object, and a high-resolution depth image has been generated in that state.
- step S17 If it is determined in step S17 that the distance measurement is not finished yet, the process returns to step S11, and steps S11 to S17 described above are repeated.
- step S17 if it is determined in step S17 to end the distance measurement, the high-resolution depth generation process of FIG. 9 is ended, and the distance measurement process of FIG. 8 is also ended.
- As described above, when a sampling position used to generate the sparse depth image is on the edge of an object, the sampling position determined to be on the edge is controlled to move to a location that does not overlap the edge. A sparse depth image can be generated with higher accuracy by sampling while avoiding the boundaries of objects. For example, when an object exists between sampling positions and the boundary of the object overlaps a sampling position, the object may be buried (lost) when the resolution is increased. Sampling that avoids the boundary of an object can reduce the possibility that the object is buried when the resolution is increased.
- the sampling position determined to be on the edge is moved toward the area of the nearer of the two objects located at different depths.
- the influence of occlusion of the light source can be suppressed, and a sparse depth image can be generated with higher accuracy.
- By generating a sparse depth image with higher accuracy, a high-resolution depth image can also be generated with higher accuracy.
- the amount of movement of the sampling position determined to be on the edge can be set to a predetermined value larger than the spot diameter when the irradiation light is irradiated by spot irradiation.
- a position where the spot diameter does not overlap the edge can be set as a new sampling position, and a sparse depth image can be generated with higher accuracy.
- a high-resolution depth image can also be generated with higher accuracy.
- the amount of movement of the sampling position can be determined as a predetermined value larger than the alignment error between the RGB camera that generates the color image and the distance measuring device 12 .
- a high-resolution depth image is generated to correspond to the color image that the user actually sees. If there is an alignment error between the RGB camera that generates the color image and the distance measuring device 12, the depth will be associated with an incorrect position due to the alignment error.
- By setting, as a new sampling position, a position that does not overlap the edge even when the alignment error is taken into account, the depth can be measured at a position whose correspondence with the object is correct even if an alignment error exists, so a high-resolution depth image can be generated with higher accuracy.
- the distance measuring device 12 detects edge information of objects using the high-resolution depth image generated in the distance measurement mode and controls the sampling positions with high precision, thereby achieving a highly accurate high-resolution depth image.
- the ranging device 12 may detect edge information of an object using not only the high-resolution depth image generated in the ranging mode, but also the luminance image obtained in the luminance observation mode.
- FIG. 14 is a block diagram showing a detailed configuration example of the ranging device 12 when the ranging mode is the luminance observation mode.
- the signal processing section 53 is provided with photon counting sections 3011 to 301M and a luminance image generation section 302.
- the time measurement units 711 to 71N, the histogram generation units 721 to 72N, the peak detection units 731 to 73N, and the distance calculation unit 74 are omitted.
- Other configurations of the distance measuring device 12 are the same as those in FIG. 4.
- When the operation mode is the luminance observation mode, all pixels of the light receiving section 52 are set as active pixels, and M photon counting sections 3011 to 301M, corresponding to the number of pixels of the light receiving section 52, operate. That is, a photon counting section 301 is provided for each pixel of the light receiving section 52.
- the multiplexer 56 connects the pixels of the light receiving section 52 and the photon counting section 301 one-to-one, and supplies the pixel signal of each pixel of the light receiving section 52 to the corresponding photon counting section 301 .
- the luminance image generation unit 302 generates a luminance image having a pixel value (luminance value) obtained by counting the photons measured at each pixel, and supplies the luminance image to the edge information detection unit 75 .
- the generated luminance image may also be output to a higher host device via the input/output unit 54 .
- The configuration may be such that photon count results are obtained in units of a plurality of pixels instead of in units of single pixels.
- the edge information detection unit 75 detects edge information indicating the boundaries of objects based on the luminance image supplied from the luminance image generation unit 302 .
- A known technique can be used to detect edge information from a luminance image. For example, the technique described in "Hardware implementation of a novel edge-map generation technique for pupil detection in NIR images", Vineet Kumar, Abhijit Asati, Anu Gupta (https://www.sciencedirect.com/science/article/pii/S2215098616305456) can be used.
- the edge information detection unit 75 also detects edge information indicating the boundary of the object based on the high-resolution depth image generated in the ranging mode. The edge information detection unit 75 then detects the final edge information of the object by integrating the edge information detected from the luminance image with the edge information detected from the high-resolution depth image, and supplies the detection result to the adaptive sampling section 76.
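One simple way to integrate the two edge maps is a per-pixel logical OR; the patent states only that the two detection results are integrated, so the OR rule in this sketch is an assumption:

```python
import numpy as np

def integrate_edges(depth_edges: np.ndarray, luma_edges: np.ndarray) -> np.ndarray:
    """Integrate edge maps from the depth and luminance domains.
    A pixel is treated as an edge if either detector marks it; the OR rule
    is an assumed integration strategy, not one specified by the patent."""
    return np.logical_or(depth_edges, luma_edges)
```

An OR keeps edges that appear in only one domain, which matches the motivation of catching object boundaries that depth information alone cannot pick up.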
- High-resolution depth image processing that uses not only the high-resolution depth image generated in the ranging mode but also the luminance image obtained in the luminance observation mode will be described with reference to the flowchart of FIG. 15.
- the high-resolution depth image processing in FIG. 15 can be executed as step S4 in FIG. 8 instead of the high-resolution depth image processing in FIG. 9 described above.
- the illumination device 11 may stop emitting the irradiation light and generate a luminance image using only ambient light, or uniform light such as surface irradiation may be emitted as the irradiation light.
- step S42 the distance measuring device 12 sets the operation mode to the distance measuring mode and generates a sparse depth image. This process is the same as the process of step S11 in FIG.
- step S43 the distance calculation unit 74 performs high-resolution processing to generate a high-resolution depth image from the sparse depth image.
- the generated high-resolution depth image is output to the outside via the input/output unit 54 and supplied to the edge information detection unit 75 .
- step S44 the edge information detection unit 75 detects edge information indicating the boundary of an object based on the luminance image obtained in the luminance observation mode and the high-resolution depth image obtained in the distance measurement mode, and supplies the detection results to the adaptive sampling section 76.
- The processing of steps S45 to S48 is the same as that of steps S14 to S17 in FIG. 9, respectively, so description thereof is omitted.
- In this way, the edges of objects are detected based on both the edge information based on the high-resolution depth image and the edge information based on the luminance image, and it can be determined whether a sampling position is on the edge of an object.
- By using edge information based on a luminance image, which belongs to a different domain from the depth image, it becomes possible to detect edges of objects that cannot be picked up from depth information alone, and an even more accurate high-resolution depth image can be generated.
- In the above, the edge information detected from the luminance image is additionally used to determine whether the sampling position is on the edge of the object.
- A captured color image may also be used. That is, the edge information of the object may be detected using the color image, the edge information of both the high-resolution depth image and the color image may be used to determine whether the sampling position is on the edge of the object, and the sampling position may be moved accordingly.
- An image from an external camera other than the RGB camera described above may also be used.
- other external cameras include an IR camera that captures infrared rays (far-infrared rays, near-infrared rays), distance measurement sensors (distance measurement devices) based on the indirect ToF method, EVS (event-based vision sensor), and the like.
- a ranging sensor based on the indirect ToF method measures the distance to an object by detecting the flight time from the timing at which the irradiated light is emitted to the timing at which the reflected light is received as a phase difference.
- the EVS is a sensor that has pixels that photoelectrically convert optical signals to output pixel signals, and that outputs temporal luminance changes of the optical signals as event signals (event data) based on the pixel signals.
- Unlike a general image sensor, which captures images in synchronization with a vertical synchronization signal and outputs frame data of one frame (screen) at the cycle of that signal, the EVS outputs event data only at the timing when an event occurs, and is therefore an asynchronous (or address-controlled) camera.
- Edge information can be detected and used based on the image of an external camera such as an RGB camera or an IR camera instead of the luminance image of the luminance observation mode, and the detection accuracy of the edge information can thereby be improved.
- In this case, the distance measuring device 12 does not need to be driven in the luminance observation mode (to generate a luminance image), so, for example, the frame rate for generating high-resolution depth images can be doubled, among other benefits. Thereby, a high-resolution depth image can be generated with high accuracy.
- Alternatively, only the edge information based on the high-resolution depth image may be used to detect the edges of objects and to determine whether a sampling position is on the edge of an object. In this case as well, a high-resolution depth image can be generated with high accuracy.
- the above-described ranging system 1 can be installed in electronic devices such as smartphones, tablet terminals, mobile phones, personal computers, game machines, television receivers, wearable terminals, digital still cameras, and digital video cameras.
- the technology of the present disclosure can be employed for space recognition when capturing VR (virtual reality) and AR (augmented reality) content, can be installed in automobiles to measure inter-vehicle distance and the like, and can also be applied to surveillance cameras for monitoring roads, in-vehicle sensors for photographing the inside of a vehicle, and the like.
- a system means a set of multiple components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device housing a plurality of modules in one housing, are both systems.
- the sampling positions are set by macro-pixels consisting of a plurality of pixels, but the sampling positions may of course be set in units of one pixel.
- The present technology can also take the following configurations.
- (1) A distance measuring device comprising: a light receiving unit having a plurality of pixels that receive reflected light of irradiation light reflected by an object; a high-resolution processing unit that generates a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and a position determining unit that determines, based on edge information of the high-resolution depth image, active pixels that perform a light receiving operation in the light receiving unit.
- (2) The distance measuring device according to (1), further comprising a sampling unit that determines, based on the edge information of the high-resolution depth image, whether a sampling position set at an active pixel is on an edge of an object.
- (3) The distance measuring device according to (2), wherein the sampling unit determines whether the sampling position is on the edge of the object by determining whether the edge of the object exists within a predetermined range centered on the sampling position.
- (4) The distance measuring device according to (3), wherein the predetermined range is a value larger than an alignment error between an external camera and the distance measuring device.
- (5) The distance measuring device according to (3), wherein the predetermined range is a value larger than the spot diameter of the irradiation light.
- (6) The distance measuring device according to (3), wherein the predetermined range is a value determined by a predetermined ratio with respect to the interval from a nearby sampling position.
- (7) The distance measuring device according to (3), wherein the predetermined range is a value determined by a predetermined ratio with respect to the intervals of all sampling positions within the irradiation area.
- (8) The distance measuring device according to (3), wherein the predetermined range is a value that can change according to the depth value of the sampling position.
- (9) The distance measuring device according to (3), wherein the predetermined range is a value that can change in inverse proportion to the depth value of the sampling position.
- (10) The distance measuring device according to any one of (2) to (9), wherein, when the sampling position is determined to be on an edge of an object, the sampling unit determines a movement direction and a movement amount for moving the sampling position.
- (11) The distance measuring device according to (10), wherein the sampling unit determines a center-of-gravity position using other sampling positions near the sampling position and determines a direction toward the center-of-gravity position as the movement direction.
- (12) The distance measuring device according to (11), wherein the sampling unit determines the center-of-gravity position by setting the weights of the other sampling positions on the front side larger than the weights of the other sampling positions on the back side.
- (13) The distance measuring device according to any one of (10) to (12), wherein the sampling unit sets the movement amount based on an alignment error between an external camera and the distance measuring device.
- (14) The distance measuring device according to any one of (10) to (12), wherein the sampling unit determines the movement amount according to the spot diameter of the irradiation light.
- (15) The distance measuring device according to any one of (1) to (14), further comprising an edge information detection unit that detects the edge information indicating a boundary of an object based on the high-resolution depth image.
- (16) The distance measuring device according to (15), wherein the edge information detection unit detects the edge information based on the high-resolution depth image and an image from an external camera.
- (17) The distance measuring device according to any one of (1) to (16), wherein the position determining unit determines the active pixels based also on the size and position of the irradiation light.
- (18) A method of controlling a distance measuring device including a light receiving unit having a plurality of pixels that receive reflected light of irradiation light reflected by an object, the method comprising: generating a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and determining, based on edge information of the high-resolution depth image, active pixels that perform a light receiving operation in the light receiving unit.
- (19) A distance measuring system comprising: an illumination device that emits irradiation light; and a distance measuring device that receives reflected light of the irradiation light reflected by an object, wherein the distance measuring device includes: a light receiving unit having a plurality of pixels that receive the reflected light; a high-resolution processing unit that generates a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and a position determining unit that determines, based on edge information of the high-resolution depth image, active pixels that perform a light receiving operation in the light receiving unit.
- (20) The distance measuring system according to (19), wherein the distance measuring device determines light emission conditions of the irradiation light including an irradiation method, an irradiation area, and an irradiation pattern, and the illumination device emits the irradiation light based on the light emission conditions.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Measurement Of Optical Distance (AREA)
Abstract
Description
1. Configuration example of the distance measuring system
2. Detailed configuration example of the distance measuring device
3. Flowchart of the distance measurement processing
4. Other examples of the high-resolution depth generation processing
5. Modifications
FIG. 1 is a block diagram showing a configuration example of an embodiment of a distance measuring system of the present disclosure.
FIG. 4 is a block diagram showing a more detailed configuration example of the distance measuring device 12 when the operation mode is the distance measurement mode. In FIG. 4, illustration of the diffractive optical element 33 of the illumination device 11 is omitted.
Next, distance measurement processing by the distance measuring system 1 will be described with reference to the flowchart of FIG. 8. This processing is started, for example, when a distance measurement instruction is supplied from a higher-level host device.
In the distance measurement processing described above, the distance measuring device 12 detects edge information of objects using the high-resolution depth image generated in the distance measurement mode and controls the sampling positions with high precision, thereby achieving a highly accurate high-resolution depth image.
In the embodiment described above, the edge information detected from the luminance image was additionally used to determine whether a sampling position is on the edge of an object, but a color image captured by the RGB camera used together with the distance measuring system 1 may also be used. That is, edge information of objects may be detected using the color image, the edge information of both the high-resolution depth image and the color image may be used to determine whether a sampling position is on the edge of an object, and the sampling position may be moved. For edge detection of objects using a color image, for example, the technique of classifying object boundaries (regions) in a color image disclosed in "PointRend: Image Segmentation as Rendering", Alexander Kirillov, Yuxin Wu, Kaiming He, Ross Girshick (https://arxiv.org/pdf/1807.00275v2.pdf) can be used. Since the high-resolution depth image is an image estimated from a sparse depth image, there is no guarantee that the depth values of all pixels are correct. Since a color image obtained by an RGB camera is an image of the actual subject, the reliability of its edge information is high, and the detection accuracy of object edges can be further improved. As a result, the high-resolution depth image can be generated with even higher accuracy.
The distance measuring system 1 described above can be mounted on electronic devices such as smartphones, tablet terminals, mobile phones, personal computers, game machines, television receivers, wearable terminals, digital still cameras, and digital video cameras.
Claims (20)
- A distance measuring device including:
a light receiving unit having a plurality of pixels that receive reflected light of irradiation light reflected by an object;
a high-resolution processing unit that generates a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and
a position determination unit that determines, based on edge information of the high-resolution depth image, active pixels that perform a light receiving operation in the light receiving unit.
- The distance measuring device according to claim 1, further including a sampling unit that determines, based on the edge information of the high-resolution depth image, whether a sampling position set as an active pixel is on an edge of an object.
- The distance measuring device according to claim 2, in which the sampling unit determines whether the sampling position is on an edge of an object by determining whether an edge of an object exists within a predetermined range centered on the sampling position.
- The distance measuring device according to claim 3, in which the predetermined range is a value larger than an alignment error between an external camera and the distance measuring device.
- The distance measuring device according to claim 3, in which the predetermined range is a value larger than a spot diameter of the irradiation light.
- The distance measuring device according to claim 3, in which the predetermined range is a value determined as a predetermined ratio of the spacing to neighboring sampling positions.
- The distance measuring device according to claim 3, in which the predetermined range is a value determined as a predetermined ratio of the spacing of all sampling positions within an irradiation area.
- The distance measuring device according to claim 3, in which the predetermined range is a value that can vary according to the depth value of the sampling position.
- The distance measuring device according to claim 3, in which the predetermined range is a value that can vary in inverse proportion to the depth value of the sampling position.
- The distance measuring device according to claim 2, in which, when the sampling position is determined to be on an edge of an object, the sampling unit determines a movement direction and a movement amount for moving that sampling position.
- The distance measuring device according to claim 10, in which the sampling unit determines a centroid position using other sampling positions in the vicinity of the sampling position, and determines the direction toward the centroid position as the movement direction.
- The distance measuring device according to claim 11, in which the sampling unit determines the centroid position by setting the weights of the other sampling positions on the near side larger than the weights of the other sampling positions on the far side.
- The distance measuring device according to claim 10, in which the sampling unit sets the movement amount based on an alignment error between an external camera and the distance measuring device.
- The distance measuring device according to claim 10, in which the sampling unit determines the movement amount according to a spot diameter of the irradiation light.
- The distance measuring device according to claim 1, further including an edge information detection unit that detects the edge information indicating a boundary of an object based on the high-resolution depth image.
- The distance measuring device according to claim 15, in which the edge information detection unit detects the edge information based on the high-resolution depth image and an image from an external camera.
- The distance measuring device according to claim 1, in which the position determination unit determines the active pixels also based on the size and position of the irradiation light.
- A method for controlling a distance measuring device that includes a light receiving unit having a plurality of pixels that receive reflected light of irradiation light reflected by an object, the method including:
generating a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and
determining, based on edge information of the high-resolution depth image, active pixels that perform a light receiving operation in the light receiving unit.
- A distance measuring system including:
an illumination device that emits irradiation light; and
a distance measuring device that receives reflected light of the irradiation light reflected by an object,
in which the distance measuring device includes:
a light receiving unit having a plurality of pixels that receive the reflected light;
a high-resolution processing unit that generates a high-resolution depth image from a sparse depth image acquired by the light receiving unit; and
a position determination unit that determines, based on edge information of the high-resolution depth image, active pixels that perform a light receiving operation in the light receiving unit.
- The distance measuring system according to claim 19, in which the distance measuring device determines light emission conditions of the irradiation light, including an irradiation method, an irradiation area, and an irradiation pattern, and the illumination device emits the irradiation light based on the light emission conditions.
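The edge test of claims 2 and 3, combined with the depth-dependent range of claims 8 and 9, might look as follows. `base_range` and `ref_depth` are hypothetical parameters introduced only to make the inverse-proportional scaling concrete; the claims do not name them.

```python
def is_on_edge(sample_xy, sample_depth, edge_pixels, base_range, ref_depth):
    """Check whether a sampling position lies on an object edge.

    A sampling position counts as "on an edge" when any edge pixel falls
    within a range centered on it (claim 3).  Per claims 8-9 the range may
    shrink in inverse proportion to the sample's depth value.
    """
    # Range inversely proportional to depth: far samples get a tighter range.
    radius = base_range * ref_depth / max(sample_depth, 1e-6)
    sx, sy = sample_xy
    return any((ex - sx) ** 2 + (ey - sy) ** 2 <= radius ** 2
               for ex, ey in edge_pixels)
```

The fixed alternatives of claims 4 and 5 correspond to simply passing the alignment error or the spot diameter as `radius` with no depth scaling.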
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022579386A JPWO2022168500A1 (ja) | 2021-02-02 | 2021-12-27 | |
CN202180092053.9A CN116745644A (zh) | 2021-02-02 | 2021-12-27 | Distance measuring device, control method of the device, and distance measuring system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-014814 | 2021-02-02 | ||
JP2021014814 | 2021-02-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022168500A1 (ja) | 2022-08-11 |
Family
ID=82740704
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/048507 WO2022168500A1 (ja) | Distance measuring device, control method therefor, and distance measuring system | 2021-02-02 | 2021-12-27 |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2022168500A1 (ja) |
CN (1) | CN116745644A (ja) |
WO (1) | WO2022168500A1 (ja) |
2021
- 2021-12-27: CN application CN202180092053.9A (publication CN116745644A, zh), active Pending
- 2021-12-27: JP application JP2022579386A (publication JPWO2022168500A1, ja), active Pending
- 2021-12-27: WO application PCT/JP2021/048507 (publication WO2022168500A1, ja), active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015098288A1 (ja) * | 2013-12-27 | 2015-07-02 | Sony Corporation | Image processing device and image processing method |
Non-Patent Citations (1)
Title |
---|
MA FANGCHANG; CAVALHEIRO GUILHERME VENTURELLI; KARAMAN SERTAC: "Self-Supervised Sparse-to-Dense: Self-Supervised Depth Completion from LiDAR and Monocular Camera", 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 20 May 2019 (2019-05-20), pages 3288 - 3295, XP033593596, DOI: 10.1109/ICRA.2019.8793637 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024116631A1 (ja) * | 2022-11-29 | 2024-06-06 | Sony Semiconductor Solutions Corporation | Detection device and detection method |
Also Published As
Publication number | Publication date |
---|---|
CN116745644A (zh) | 2023-09-12 |
JPWO2022168500A1 (ja) | 2022-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10921454B2 (en) | System and method for determining a distance to an object | |
US12025745B2 (en) | Photonics device | |
US11662443B2 (en) | Method and apparatus for determining malfunction, and sensor system | |
US10408922B2 (en) | Optoelectronic module with low- and high-power illumination modes | |
WO2015198300A1 (en) | Gated sensor based imaging system with minimized delay time between sensor exposures | |
WO2010149593A1 (en) | Pulsed light optical rangefinder | |
US9097582B2 (en) | Ambient light sensing system and method | |
US11061139B2 (en) | Ranging sensor | |
WO2022168500A1 (ja) | Distance measuring device, control method therefor, and distance measuring system | |
US11659155B2 (en) | Camera | |
US10953813B2 (en) | Image acquisition device to be used by vehicle and vehicle provided with same | |
JP2018204976A (ja) | Distance measuring device, distance measuring method, and imaging device | |
WO2022181097A1 (ja) | Distance measuring device, control method therefor, and distance measuring system | |
US11314334B2 (en) | Gesture recognition apparatus, control method thereof, and display apparatus | |
WO2022083301A1 (zh) | 3D image sensor ranging system and method for ranging using the same | |
US11659296B2 (en) | Systems and methods for structured light depth computation using single photon avalanche diodes | |
US9804000B2 (en) | Optical sensor array apparatus | |
WO2022209219A1 (ja) | Distance measuring device, signal processing method therefor, and distance measuring system | |
US11438486B2 (en) | 3D active depth sensing with laser pulse train bursts and a gated sensor | |
US20240192375A1 (en) | Guided flash lidar | |
US20230204727A1 (en) | Distance measurement device and distance measurement method | |
WO2022034844A1 (ja) | Surface-emitting laser device and electronic apparatus | |
KR20220141006A (ko) | Imaging device | |
CN112887628A (zh) | Light detection and ranging apparatus and method of increasing the dynamic range thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21924885; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2022579386; Country of ref document: JP; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 202180092053.9; Country of ref document: CN |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 21924885; Country of ref document: EP; Kind code of ref document: A1 |