WO2022052366A1 - Fused depth measurement method and measurement device - Google Patents

Fused depth measurement method and measurement device

Info

Publication number
WO2022052366A1
WO2022052366A1 (PCT/CN2020/138128)
Authority
WO
WIPO (PCT)
Prior art keywords
image
parallax
depth
parallax image
image sensor
Prior art date
Application number
PCT/CN2020/138128
Other languages
French (fr)
Chinese (zh)
Inventor
王兆民
黄源浩
肖振中
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司
Publication of WO2022052366A1 publication Critical patent/WO2022052366A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration

Definitions

  • the present application relates to the technical field of image processing and optical measurement, and in particular, to a fusion depth measurement method and measurement device.
  • the depth measurement device can be used to obtain the depth image of the object, and further can perform 3D modeling, skeleton extraction, face recognition, etc., and has a very wide range of applications in the fields of 3D measurement and human-computer interaction.
  • the current depth measurement technologies mainly include TOF ranging technology and binocular ranging technology.
  • TOF ranging technology achieves precise ranging by measuring the round-trip flight time of a pulsed beam between the transmitting/receiving device and the target object, and is divided into direct ranging technology and indirect ranging technology. Direct ranging technology measures the time difference between the emission and reception of the pulsed beam; compared with traditional image sensors, it uses a single-photon avalanche diode (SPAD) image sensor to achieve high-sensitivity light detection and a time-correlated single-photon counting approach to achieve picosecond time precision.
  • binocular ranging technology uses triangulation to calculate the distance from the measured object to the camera; specifically, when the same object is observed from two cameras, the positions of the observed object in the images captured by the two cameras differ by a certain amount, i.e., the parallax; the closer the measured object is to the cameras, the larger the parallax, and the farther away it is, the smaller the parallax.
  • when the relative positional relationship between the two cameras, such as their separation, is known, the distance from the subject to the cameras can be calculated using the principle of similar triangles.
  • however, this ranging method places high demands on processor hardware and relies on complex algorithms; the algorithms take a long time to compute, the recognition effect is poor for targets with inconspicuous features, and the recognition accuracy is low.
  • therefore, traditional ranging methods have the technical problems of long ranging computation time and low target recognition accuracy due to the complex algorithms.
  • the purpose of the present application is to provide a fusion depth measurement method and measurement device, aiming to solve the technical problems of traditional ranging methods in which complex algorithms lead to long ranging computation time and low target recognition accuracy.
  • a first aspect of the embodiments of the present application provides a fusion depth measurement method, which measures the distance of a scene area, including:
  • the first parallax image and the second parallax image are fused to obtain a target image.
  • emitting a pulsed light beam to the scene area includes:
  • the pulsed light beam is steered so that it is directed toward various directions of the scene area.
  • a second aspect of the embodiments of the present application provides a fused depth measurement device, including:
  • the emission module is used to emit pulsed beams to the scene area
  • a detection module configured to receive the reflected signal of the pulsed beam, and output an electrical signal of the transit time between the pulsed beam and the reflected signal
  • an acquisition module configured to acquire the left image and the right image of the scene area
  • a processing module configured to process the electrical signal of the transit time to obtain a first depth image, and convert the first depth image into a first parallax image
  • a conversion module that performs stereo matching on the left image and the right image to obtain a second parallax image
  • a fusion module configured to fuse the first parallax image and the second parallax image to obtain a target image.
  • the transmitting module includes:
  • a lens element for adjusting the divergence of the pulsed beam
  • a beam scanning element, used to guide the pulsed beam so that it is directed toward various directions of the scene area.
  • the detection module includes a single-photon avalanche diode image sensor
  • converting the first depth image into a first parallax image specifically includes:
  • P_D(x_0, y_0) is the disparity value at point (x_0, y_0) corresponding to the depth value Z(x_0, y_0) on the first depth image
  • f is the focal length of the single-photon avalanche diode image sensor
  • T_lt is the baseline length of the depth camera-left camera system
  • H_lt is the homography matrix calibrated for the depth camera.
  • converting the first depth image into the first parallax image further includes:
  • the parallax surface of the first parallax image is fitted with the following binary cubic equation to obtain a smooth parallax surface:
  • d(x, y) = a_1 + a_2·x + a_3·y + a_4·x² + a_5·x·y + a_6·y² + a_7·x³ + a_8·x²·y + a_9·x·y² + a_10·y³
  • d(x, y) is a three-dimensional parallax surface
  • a_1, a_2, ..., a_10 are coefficients
  • x and y are pixel coordinates.
  • before converting the first depth image into a first parallax image, the processing module further performs:
  • performing stereo matching on the left image and the right image to obtain the second parallax image specifically includes:
  • (x, y_0) is a pixel point within each parallax range of the right image, and θ is the selected parameter.
  • the first parallax image and the second parallax image are fused to obtain a depth image, which specifically includes:
  • the first parallax image and the second parallax image are fused to obtain pixel fusion parallax
  • Three-dimensional reconstruction is performed on the scene area according to the pixel fusion disparity, so as to obtain the depth image.
  • the first parallax image and the second parallax image are fused to obtain pixel fusion parallax, which specifically includes:
  • d_t is the first disparity value
  • d_s is the second disparity value
  • w_t is the weight associated with the first disparity value
  • w_s is the weight associated with the second disparity value.
  • a first depth image is acquired by a single-photon avalanche diode image sensor, and the pixels of the first depth image are used as seed points to guide stereo matching between the left image sensor and the right image sensor to obtain a second parallax image
  • the first depth image is converted into a first parallax image, and different weights formed according to the first credibility function and the second credibility function are used to fuse the first parallax image and the second parallax image
  • a fused parallax image is obtained, from which the three-dimensional reconstruction of the scene area is restored, yielding a high-resolution target image with high recognition accuracy.
  • FIG. 1 is a schematic diagram of specific flow steps of a fusion depth measurement method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of converting a first depth image into a first parallax image in a fusion depth measurement method provided by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of fusing a first parallax image and a second parallax image in a fusion depth measurement method provided by an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of a fused depth measurement apparatus according to an embodiment of the present application.
  • FIG. 1 is a specific flowchart of a fusion depth measurement method provided by an embodiment of the present application; for the convenience of explanation, only the parts relevant to the present embodiment are shown, and the details are as follows:
  • the first aspect provides a fusion depth measurement method, which measures the distance of a scene area, including the following steps:
  • the emission module includes a light source array, a lens element and a beam scanning element.
  • the light source array is used to generate pulsed beams.
  • the light source array uses a semiconductor LED, a semiconductor laser, or a VCSEL (Vertical-Cavity Surface-Emitting Laser) array as the light source
  • the light source array may also use an edge-emitting laser whose emission is parallel to the resonant-cavity surface, to emit beams at infrared, ultraviolet, or other wavelengths.
  • the light source array is a VCSEL array.
  • the VCSEL has the characteristics of small size, small pulse-beam emission angle, and good stability; hundreds to thousands of VCSEL sub-light sources are arranged on a semiconductor substrate with an area of 1 mm × 1 mm
  • this forms a VCSEL array light-emitting chip, which is small in size and low in power consumption.
  • the VCSEL array light-emitting chip can be a bare chip with a smaller volume and thickness, or a packaged light-emitting chip with better stability and convenient connection.
  • the lens element is used to adjust the divergence of the pulsed light beam, and the lens element adopts a single lens, a lens combination or a microlens array.
  • divergence can be determined as an angular value associated with one or more cross-sectional axes of the pulsed beam, and adjusting the divergence can at least partially help mitigate non-uniformity in the cross-section of the pulsed beam, to improve the accuracy with which the device detects the properties of reflection points at various positions in a defined field of view.
  • the light source array has certain design requirements for lens elements, such as the emission pulse beam density.
  • the beam scanning element is used to guide the pulsed beam so that it is directed toward various directions of the scene area
  • the beam scanning element is a micromirror made with MEMS (Micro-Electro-Mechanical Systems) technology, which controllably steers the pulsed beam in various directions and scans the target scene area.
  • the emission module emits a line beam or an area-array beam that is pulse-modulated in time; when the SPAD (Single-Photon Avalanche Diode) image sensor detects a single photon, it generates a digital pulse
  • the time at which the digital pulse is generated is recorded by a TDC (Time-to-Digital Converter) circuit, the accumulated single-photon count in the corresponding time bin is incremented by one, and multiple sets of photon time-of-flight information are output; after the same measurement is repeated many times, the corresponding single-photon counting histogram is obtained through a TCSPC (Time-Correlated Single-Photon Counting) circuit
  • the peak of the single-photon counting histogram is located to determine, and output, the electrical signal of the transit time required for a photon to travel between the scene area and the emission module.
  • the left image sensor and the right image sensor respectively collect the left infrared image and the right infrared image of the scene area under the illumination of the emission module; both the left image sensor and the right image sensor can be infrared image sensors
  • because an infrared image sensor needs a continuous active light source (such as infrared light) for illumination during imaging, the emission module also includes infrared light sources such as infrared floodlights and infrared projectors
  • the left image sensor and the right image sensor may also be visible-light image sensors; they may be two RGB image sensors, or a combination of one RGB image sensor and one grayscale sensor, used under ambient-light illumination to collect the left visible-light image and the right visible-light image of the scene area, respectively.
  • S104 Process the electrical signal of the transit time to obtain a first depth image, and convert the first depth image into a first parallax image.
  • the optical axes of the left camera and the right camera formed by the left image sensor and the right image sensor are parallel, so the depth information of the scene area can be obtained according to the principle of triangulation;
  • let the projection coordinates of a point P in the scene area onto the left camera and the right camera be P_l(x_l, y_l) and P_r(x_r, y_r), respectively; the depth information of the scene point P is obtained according to the triangulation principle:
  • z is the depth value of point P
  • f is the focal length of the left camera and the right camera
  • b is the baseline between the left image sensor and the right image sensor
  • d is the parallax between the left image and the right image.
  • FIG. 2 is a schematic flowchart of converting a first depth image into a first parallax image in a fusion depth measurement method provided by an embodiment of the present application; for the convenience of description, only parts related to this embodiment are shown, as follows:
  • converting the first depth image into the first parallax image includes the following steps:
  • the left image is used as the reference image, so the SPAD image sensor and the left image sensor are jointly calibrated; the first depth image obtained by the SPAD image sensor is converted into point cloud data, and the point cloud passes through the jointly calibrated transformation matrix [ R,T] maps to the camera coordinate system of the left image sensor, where R is the rotation matrix and T is the translation vector.
  • planar two-dimensional points referenced to the left image sensor are obtained.
  • P_D(x_0, y_0) is the parallax value at point (x_0, y_0) corresponding to the depth value Z(x_0, y_0) on the first depth image
  • f is the focal length of the SPAD image sensor
  • T_lt is the baseline length of the depth camera-left camera system
  • H_lt is the homography matrix calibrated for the depth camera.
  • the left image is selected as the reference image
  • the right image may also be selected as the reference image, which is not limited here.
  • the parallax surface of the first parallax image is fitted by a binary cubic equation, as follows:
  • d(x, y) = a_1 + a_2·x + a_3·y + a_4·x² + a_5·x·y + a_6·y² + a_7·x³ + a_8·x²·y + a_9·x·y² + a_10·y³  (4)
  • d(x, y) is a three-dimensional parallax surface
  • a_1, a_2, ..., a_10 are coefficients
  • x and y are pixel coordinates.
  • performing stereo matching on the left image and the right image to obtain the second parallax image specifically includes:
  • the pixel points on the first parallax image are selected as seed points, and the left image and the right image are guided to perform stereo matching to obtain the second parallax image.
  • the depth distance cost function is:
  • θ is a parameter selected according to experience, used to adjust the range of the depth distance cost function and set the threshold of C(x, y_0); if the value of the depth distance cost function at a pixel on the right image is within the threshold range, the pixel is a seed point, and the disparity between the left image and the right image is calculated to obtain the second parallax image; if the value of the depth distance cost function at a pixel (x, y_0) on the right image is outside the threshold range, the point is a non-seed point, and during binocular stereo matching the nearest seed point along its horizontal direction is searched for and its disparity value is assigned to this point.
  • the binocular camera composed of the left image sensor and the right image sensor is mounted horizontally, ensuring that the camera optical axes are parallel and the images lie in the same horizontal plane; there is no vertical parallax, so the ordinate y_0 is the same, and the maximum possible disparity search range is determined by the baseline of the binocular camera; for a point (x_0, y_0) on the left reference image, another point is searched for within the disparity search range [(x_0 - m, y_0), (x_0 + m, y_0)] in the corresponding right target image.
  • FIG. 3 is a schematic flowchart of fusing the first parallax image and the second parallax image in a fusion depth measurement method provided by an embodiment of the present application; for the convenience of description, only parts related to this embodiment are shown; details are as follows:
  • fusing the first parallax image and the second parallax image to obtain the target image includes the following steps:
  • for the depth value Z(x_0, y_0) of the point (x_0, y_0) on the first depth image, let its standard deviation be σ_z; according to formula (3), the corresponding disparity standard deviation σ_d can be obtained, namely
  • σ_min is the standard deviation of the brightest pixel received at the nearest depth of field
  • σ_max is the standard deviation of the darkest pixel received at the farthest depth of field
  • when the standard deviation of the disparity value calculated at a pixel on the first depth image is less than σ_min, the point is considered completely stable and its credibility is 1; when the standard deviation is greater than σ_max, the point is unreliable and its credibility is 0; when the standard deviation lies between the two thresholds, the credibility range is set to (0, 1), thus obtaining the first credibility function, as follows:
  • the second parallax image obtained by the binocular camera is evaluated according to an adaptive weighting algorithm, and the second credibility function is obtained, that is,
  • T_c is a constant parameter, set to T_c > 0 in order to avoid a value of 0.
  • the first parallax value obtained by the SPAD image sensor is d_t
  • the second parallax value obtained by the left image sensor and the right image sensor is d_s
  • the first credibility function and the second credibility function form different weights for fusing the two disparities, and the pixel fusion disparity d is obtained as
  • w_t is the weight of the disparity value obtained from the depth image produced by the SPAD image sensor
  • w_s is the weight of the disparity obtained by stereo matching of the left image and the right image from the left image sensor and the right image sensor.
  • a second aspect of the embodiments of the present application provides a fused depth measurement device, including:
  • FIG. 4 is a schematic structural diagram of a fused depth measurement apparatus provided by an embodiment of the present application. For the convenience of description, only parts related to this embodiment are shown, and the details are as follows:
  • a fused depth measurement apparatus includes a transmitting module 100 , a receiving module 200 and a control and processing module 300 .
  • the transmitting module 100 is used for transmitting the pulsed light beam 10 to the scene area 30 .
  • the emission module 100 includes a light source array 101 , a lens element 102 and a beam scanning element 103 .
  • the light source array 101 is used to generate pulsed beams; the lens element 102 is used to adjust the divergence of the pulsed beams;
  • the receiving module 200 includes a detection module and a collection module.
  • the detection module includes a single-photon avalanche diode image sensor 201, and the single-photon avalanche diode image sensor 201 is used to receive the reflected signal 20 of the pulsed beam 10, and output an electrical signal of the transit time of the round-trip pulsed beam 10 and the reflected signal 20.
  • the acquisition module includes a left image sensor 202 and a right image sensor 203 .
  • the left image sensor 202 is used to collect the left image of the scene area 30 ; the right image sensor 203 is used to collect the right image of the scene area 30 .
  • the control and processing module 300 includes a processing module, a conversion module and a fusion module.
  • the processing module, the conversion module and the fusion module can be independent modules that implement corresponding functions in each module, or can be an integrated processor that implements all functions in each module.
  • the processing module is configured to process the electrical signal of the transit time to obtain a first depth image, and convert the first depth image into a first parallax image.
  • the conversion module is used for stereo matching the left image and the right image to obtain a second parallax image.
  • the fusion module is used for fusing the first parallax image and the second parallax image to obtain the target image.
  • the fused depth measurement device in this embodiment is the device embodiment corresponding to the above fused depth measurement method; therefore, for the specific implementation of the software methods in each module of the measurement device, reference may be made to the embodiments of FIG. 1 to FIG. 3, and detailed descriptions are omitted here.
  • a first depth image is acquired by a single-photon avalanche diode image sensor, and the pixels of the first depth image are used as seed points to guide stereo matching between the left image sensor and the right image sensor to obtain a second parallax image
  • the first depth image is converted into a first parallax image, and different weights formed according to the first credibility function and the second credibility function are used to fuse the first parallax image and the second parallax image
  • a fused parallax image is obtained, from which the three-dimensional reconstruction of the scene area is restored, yielding a high-resolution depth image with high recognition accuracy.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A fused depth measurement method and measurement device for measuring the distance of a scene region (30). The measurement method comprises: emitting a pulse beam (10) toward a scene region (30); receiving a reflected signal (20) of the pulse beam (10), and outputting an electrical signal of the round-trip transit time between the pulse beam (10) and the reflected signal (20); acquiring a left image of the scene region (30) and a right image of the scene region (30); processing the electrical signal of the transit time to obtain a first depth image, and converting the first depth image into a first parallax image; performing stereo matching between the left image and the right image to obtain a second parallax image; and fusing the first parallax image and the second parallax image to obtain a target image. The fused depth measurement method and measurement device solve the technical problems of long distance-measurement calculation time and low target recognition accuracy caused by the complex algorithms of traditional distance measurement methods.

Description

A fusion depth measurement method and measurement device

TECHNICAL FIELD

The present application relates to the technical field of image processing and optical measurement, and in particular to a fusion depth measurement method and measurement device.

BACKGROUND

A depth measurement device can be used to obtain a depth image of an object, on which 3D modeling, skeleton extraction, face recognition, and the like can further be performed; it has very wide applications in fields such as 3D measurement and human-computer interaction. Current depth measurement technologies mainly include TOF ranging technology and binocular ranging technology.

TOF stands for Time-of-Flight. TOF ranging technology achieves precise ranging by measuring the round-trip flight time of a pulsed beam between the transmitting/receiving device and the target object, and is divided into direct ranging technology and indirect ranging technology. Direct ranging technology measures the time difference between the emission and reception of the pulsed beam; compared with traditional image sensors, it uses a single-photon avalanche diode (SPAD) image sensor to achieve high-sensitivity light detection and a time-correlated single-photon counting approach to achieve picosecond time precision. However, there are still many limitations in the manufacturing process and chip design of SPAD image sensors, so the resolution of such image sensors is very low.

Binocular ranging technology uses triangulation to calculate the distance from the measured object to the camera. Specifically, when the same object is observed from two cameras, the positions of the observed object in the images captured by the two cameras differ by a certain amount, i.e., the parallax; the closer the measured object is to the cameras, the larger the parallax, and the farther away it is, the smaller the parallax. When the relative positional relationship between the two cameras, such as their separation, is known, the distance from the subject to the cameras can be calculated using the principle of similar triangles. However, this ranging method places high demands on processor hardware and relies on complex algorithms; the algorithms take a long time to compute, the recognition effect is poor for targets with inconspicuous features, and the recognition accuracy is low.

Therefore, traditional ranging methods suffer from the technical problems that, because the algorithms are complex, the ranging computation time is long and the target recognition accuracy is low.
SUMMARY OF THE INVENTION

The purpose of the present application is to provide a fusion depth measurement method and measurement device, aiming to solve the technical problems of traditional ranging methods in which complex algorithms lead to long ranging computation time and low target recognition accuracy.

A first aspect of the embodiments of the present application provides a fusion depth measurement method, which measures the distance of a scene area and includes:

emitting a pulsed light beam toward the scene area;

receiving the reflected signal of the pulsed beam, and outputting an electrical signal of the round-trip transit time between the pulsed beam and the reflected signal;

collecting a left image and a right image of the scene area;

processing the electrical signal of the transit time to obtain a first depth image, and converting the first depth image into a first parallax image;

performing stereo matching on the left image and the right image to obtain a second parallax image; and

fusing the first parallax image and the second parallax image to obtain a target image.

In one embodiment, emitting a pulsed light beam toward the scene area includes:

generating the pulsed beam;

adjusting the divergence of the pulsed beam; and

steering the pulsed beam so that it is directed toward various directions of the scene area.
A second aspect of the embodiments of the present application provides a fusion depth measurement device, including:

an emission module, used to emit a pulsed beam toward the scene area;

a detection module, used to receive the reflected signal of the pulsed beam and output an electrical signal of the round-trip transit time between the pulsed beam and the reflected signal;

an acquisition module, used to collect a left image and a right image of the scene area;

a processing module, used to process the electrical signal of the transit time to obtain a first depth image and to convert the first depth image into a first parallax image;

a conversion module, which performs stereo matching on the left image and the right image to obtain a second parallax image; and

a fusion module, used to fuse the first parallax image and the second parallax image to obtain a target image.

In one embodiment, the emission module includes:

a light source array, used to generate the pulsed beam;

a lens element, used to adjust the divergence of the pulsed beam; and

a beam scanning element, used to guide the pulsed beam so that it is directed toward various directions of the scene area.
In one embodiment, the detection module includes a single-photon avalanche diode image sensor.

In the processing module, converting the first depth image into a first parallax image specifically includes:

taking the left image as the reference image, calculating the first parallax image corresponding to the first depth image:

[formula image PCTCN2020138128-appb-000001]

where P_D(x_0, y_0) is the disparity value at point (x_0, y_0) corresponding to the depth value Z(x_0, y_0) on the first depth image, f is the focal length of the single-photon avalanche diode image sensor, T_lt is the baseline length of the depth camera-left camera system, and H_lt is the homography matrix calibrated for the depth camera.

In one embodiment, converting the first depth image into the first parallax image further includes:

fitting the parallax surface of the first parallax image with the following binary cubic equation to obtain a smooth parallax surface:

d(x, y) = a_1 + a_2·x + a_3·y + a_4·x² + a_5·x·y + a_6·y² + a_7·x³ + a_8·x²·y + a_9·x·y² + a_10·y³

where d(x, y) is a three-dimensional parallax surface, a_1, a_2, ..., a_10 are coefficients, and x and y are pixel coordinates.
In one embodiment, before converting the first depth image into a first parallax image, the processing module further performs:

joint calibration of the single-photon avalanche diode image sensor with the left image sensor or the right image sensor:

converting the first depth image obtained by the single-photon avalanche diode image sensor into point cloud data, and mapping the point cloud data into the camera coordinate system of the left image sensor or the right image sensor through the jointly calibrated transformation matrix, to obtain planar two-dimensional points referenced to the left image sensor or the right image sensor.

In one embodiment, in the conversion module, performing stereo matching on the left image and the right image to obtain the second parallax image specifically includes:

selecting pixel points on the first parallax image as seed points, and guiding the left image and the right image to perform stereo matching to obtain the second parallax image, specifically using the following formula:

[formula image PCTCN2020138128-appb-000002]

where (x, y_0) is a pixel point within each parallax range of the right image, and θ is the selected parameter.
In one embodiment, fusing the first parallax image and the second parallax image to obtain a depth image specifically includes:

obtaining a first credibility function according to the first parallax image;

obtaining a second credibility function according to the second parallax image;

forming different weights according to the first credibility function and the second credibility function, and fusing the first parallax image and the second parallax image to obtain a pixel fusion disparity; and

performing three-dimensional reconstruction of the scene area according to the pixel fusion disparity, to obtain the depth image.

In one embodiment, fusing the first parallax image and the second parallax image to obtain the pixel fusion disparity specifically includes:

forming different weights according to the first credibility function and the second credibility function to fuse the two disparities and obtain the pixel fusion disparity, specifically using the following formulas:

d = w_t·d_t + w_s·d_s

[formula image PCTCN2020138128-appb-000003]

w_t = 1 - w_s

where d_t is the first disparity value, d_s is the second disparity value, w_t is the weight associated with the first disparity value, and w_s is the weight associated with the second disparity value.
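As an illustrative sketch of the weighted fusion step above (not part of the original disclosure; the per-pixel stereo weight w_s is assumed here to be an already-computed input derived from the credibility functions, since the exact weight formula is only available as an image in the source), the per-pixel blend d = w_t·d_t + w_s·d_s with w_t = 1 - w_s could look like this:

```python
import numpy as np

def fuse_disparities(d_t, d_s, w_s):
    """Fuse a ToF-derived disparity map d_t with a stereo disparity map d_s.

    d_t, d_s : float arrays of the same shape (first and second disparity maps)
    w_s      : per-pixel weight in [0, 1] for the stereo disparity (assumed to be
               derived from the credibility functions; the exact formula is an
               image in the source document, so it is treated as an input here)
    """
    w_s = np.clip(w_s, 0.0, 1.0)
    w_t = 1.0 - w_s                      # w_t = 1 - w_s, as in the text
    return w_t * d_t + w_s * d_s         # d = w_t * d_t + w_s * d_s

# Example usage with dummy data
if __name__ == "__main__":
    d_t = np.full((4, 4), 20.0)          # disparity converted from the SPAD depth image
    d_s = np.full((4, 4), 22.0)          # disparity from binocular stereo matching
    w_s = np.full((4, 4), 0.75)          # weight of the stereo disparity
    print(fuse_disparities(d_t, d_s, w_s))
```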
In the above fusion depth measurement method and measurement device of the embodiments of the present invention, a first depth image is acquired by a single-photon avalanche diode image sensor, and the pixels of the first depth image are used as seed points to guide stereo matching between the left image sensor and the right image sensor to obtain a second parallax image; the first depth image is converted into a first parallax image, and different weights formed according to the first credibility function and the second credibility function are used to fuse the first parallax image and the second parallax image into a fused parallax image, from which the three-dimensional reconstruction of the scene area is restored, yielding a high-resolution target image with high recognition accuracy.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the specific flow steps of a fusion depth measurement method provided by an embodiment of the present application;

FIG. 2 is a schematic flowchart of converting a first depth image into a first parallax image in a fusion depth measurement method provided by an embodiment of the present application;

FIG. 3 is a schematic flowchart of fusing a first parallax image and a second parallax image in a fusion depth measurement method provided by an embodiment of the present application;

FIG. 4 is a schematic structural diagram of a fusion depth measurement device provided by an embodiment of the present application.
DETAILED DESCRIPTION

In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present application and are not intended to limit it.
FIG. 1 is a schematic diagram of the specific flow steps of a fusion depth measurement method provided by an embodiment of the present application. For ease of description, only the parts relevant to this embodiment are shown, as detailed below:

In an embodiment, the first aspect provides a fusion depth measurement method, which measures the distance of a scene area and includes the following steps:

S101. Control the emission module to emit a pulsed beam toward the scene area.
In one embodiment, the emission module includes a light source array, a lens element, and a beam scanning element.

The light source array is used to generate the pulsed beam. In one embodiment, the light source array uses a semiconductor LED, a semiconductor laser, or a VCSEL (Vertical-Cavity Surface-Emitting Laser) array as the light source; an edge-emitting laser whose emission is parallel to the resonant-cavity surface may also be used to emit beams at infrared, ultraviolet, or other wavelengths. Preferably, in this embodiment the light source array is a VCSEL array. A VCSEL has the characteristics of small size, small pulse-beam emission angle, and good stability; hundreds to thousands of VCSEL sub-light sources are arranged on a semiconductor substrate with an area of 1 mm × 1 mm, forming a VCSEL array light-emitting chip that is small in size and low in power consumption. The VCSEL array light-emitting chip may be a bare die with smaller volume and thickness, or a packaged light-emitting chip with better stability and convenient connection.

In one embodiment, the lens element is used to adjust the divergence of the pulsed beam; the lens element may be a single lens, a lens combination, or a microlens array. The divergence can be determined as an angular value associated with one or more cross-sectional axes of the pulsed beam, and adjusting the divergence can at least partially help mitigate non-uniformity in the cross-section of the pulsed beam, to improve the accuracy with which the device detects the properties of reflection points at various positions in a defined field of view and to improve the correlation between groups of reflection points from the light source array and the point clouds of individual objects. The light source array imposes certain design requirements on the lens element, such as the emitted pulse-beam density, so a single lens element is often insufficient and multiple lenses are combined to meet the design requirements. In a specific implementation, in addition to the basic design requirements of the lens, other factors encountered during use of the lens element also need to be considered; this is not limited here and can be designed according to the specific situation.

In one embodiment, the beam scanning element is used to guide the pulsed beam so that it is directed toward various directions of the scene area; the beam scanning element is a micromirror made with MEMS (Micro-Electro-Mechanical Systems) technology, which controllably steers the pulsed beam in various directions and scans the target scene area.
S102. Receive the reflected signal of the pulsed beam through the single-photon avalanche diode image sensor, and output an electrical signal of the round-trip transit time between the pulsed beam and the reflected signal.

In one embodiment, the emission module emits a line beam or an area-array beam that is pulse-modulated in time. When the SPAD (Single-Photon Avalanche Diode) image sensor detects a single photon, it generates a digital pulse; the time at which the digital pulse is generated is recorded by a TDC (Time-to-Digital Converter) circuit, the accumulated single-photon count in the corresponding time bin is incremented by one, and multiple sets of photon time-of-flight information are output. After the same measurement is repeated many times, a large amount of timing data is obtained; accumulating these data into the corresponding time bins in the same way yields, through a TCSPC (Time-Correlated Single-Photon Counting) circuit, the corresponding single-photon counting histogram. The peak of the single-photon counting histogram is then located to determine, and output, the electrical signal of the transit time required for a photon to travel between the scene area and the emission module.
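The following sketch is illustrative only and is not part of the patent: the bin width, repetition counts, and synthetic photon timestamps are made-up assumptions. It shows how repeated TDC timestamps could be accumulated into a TCSPC-style single-photon counting histogram whose peak bin gives the transit time, from which a distance follows:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def transit_time_from_tdc(timestamps_s, bin_width_s=100e-12):
    """Accumulate TDC timestamps into a single-photon counting histogram and
    return the transit time at the histogram peak (illustrative sketch only)."""
    timestamps_s = np.asarray(timestamps_s)
    n_bins = int(np.ceil(timestamps_s.max() / bin_width_s)) + 1
    hist, edges = np.histogram(timestamps_s, bins=n_bins,
                               range=(0.0, n_bins * bin_width_s))
    peak_bin = int(np.argmax(hist))                 # peak of the photon-count histogram
    return (edges[peak_bin] + edges[peak_bin + 1]) / 2.0

def depth_from_transit_time(t_s):
    """Round-trip time of flight converted to a one-way distance."""
    return C * t_s / 2.0

# Example: photons around a 10 ns round trip (about 1.5 m) plus uniform background
rng = np.random.default_rng(0)
signal = rng.normal(10e-9, 0.2e-9, size=5000)
background = rng.uniform(0.0, 50e-9, size=2000)
t = transit_time_from_tdc(np.concatenate([signal, background]))
print(depth_from_transit_time(t))                  # ~1.5 m
```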
S103. Collect the left image of the scene area through the left image sensor, and collect the right image of the scene area through the right image sensor.

In one embodiment, the left image sensor and the right image sensor respectively collect the left infrared image and the right infrared image of the scene area under the illumination of the emission module. Both the left image sensor and the right image sensor can be infrared image sensors, and because an infrared image sensor needs a continuous active light source (such as infrared light) for illumination during imaging, the emission module also includes infrared light sources such as infrared floodlights and infrared projectors.

In another embodiment, the left image sensor and the right image sensor may also be visible-light image sensors; the left and right image sensors may be two RGB image sensors, or a combination of one RGB image sensor and one grayscale sensor, used under ambient-light illumination to collect the left visible-light image and the right visible-light image of the scene area, respectively.
S104. Process the electrical signal of the transit time to obtain a first depth image, and convert the first depth image into a first parallax image.

In one embodiment, for a parallel stereo vision system, the optical axes of the left camera and the right camera formed by the left image sensor and the right image sensor are parallel, so the depth information of the scene area can be obtained according to the triangulation principle. Let the projection coordinates of a point P in the scene area onto the left camera and the right camera be P_l(x_l, y_l) and P_r(x_r, y_r), respectively; the depth information of the scene point P is obtained according to the triangulation principle:

[formula (1), image PCTCN2020138128-appb-000004]

Further, the depth value at point P is obtained:

z = f·b / d    (2)

where z is the depth value of point P, f is the focal length of the left camera and the right camera, b is the baseline between the left image sensor and the right image sensor, and d is the parallax between the left image and the right image.
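A minimal numeric illustration of formula (2), z = f·b/d, follows; the focal length, baseline, and disparity below are made-up example numbers, not parameters from the patent:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulation depth for a rectified, parallel stereo pair: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / disparity_px

# Example: 800-pixel focal length, 5 cm baseline, 20-pixel disparity -> 2 m depth
print(depth_from_disparity(f_px=800.0, baseline_m=0.05, disparity_px=20.0))  # 2.0
```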
FIG. 2 is a schematic flowchart of converting a first depth image into a first parallax image in a fusion depth measurement method provided by an embodiment of the present application. For ease of description, only the parts relevant to this embodiment are shown, as detailed below:

Specifically, converting the first depth image into the first parallax image includes the following steps:

S1041. Jointly calibrate the single-photon avalanche diode image sensor with the left image sensor or the right image sensor.

In this embodiment the left image is used as the reference image, so the SPAD image sensor and the left image sensor are jointly calibrated. The first depth image obtained by the SPAD image sensor is converted into point cloud data, and the point cloud is mapped into the camera coordinate system of the left image sensor through the jointly calibrated transformation matrix [R, T], where R is the rotation matrix and T is the translation vector. That is, the three-dimensional coordinates X_Ti (i = 1, 2, ..., N) of the points in the SPAD image sensor are transformed into the three-dimensional points X_Li (i = 1, 2, ..., N) of the binocular system referenced to the left image sensor; the three-dimensional points X_Li of the binocular system are then projected into the left image sensor coordinate system through the intrinsic parameter matrix of the left image sensor, forming the point lattice x_Li (i = 1, 2, ..., N), i.e., planar two-dimensional points referenced to the left image sensor.
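A sketch of the mapping described in S1041 is shown below; it is illustrative only, and the rotation R, translation T, and intrinsic matrix K are placeholder values rather than calibration results from the patent. The SPAD point cloud X_T is transformed into the left-camera frame with [R, T] and projected with the left camera intrinsics to obtain 2D points referenced to the left image sensor:

```python
import numpy as np

def map_spad_points_to_left_image(X_T, R, T, K):
    """Transform SPAD 3D points into the left-camera frame and project to pixels.

    X_T : (N, 3) 3D points in the SPAD/depth-camera frame
    R   : (3, 3) rotation from joint calibration
    T   : (3,)   translation from joint calibration
    K   : (3, 3) left image sensor intrinsic matrix
    Returns (N, 2) pixel coordinates x_L referenced to the left image sensor.
    """
    X_L = X_T @ R.T + T            # X_Li = R * X_Ti + T
    uvw = X_L @ K.T                # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]

# Example with placeholder calibration values
R = np.eye(3)
T = np.array([0.03, 0.0, 0.0])                       # 3 cm offset along x (assumed)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
X_T = np.array([[0.1, 0.05, 2.0], [-0.2, 0.0, 1.5]])
print(map_spad_points_to_left_image(X_T, R, T, K))
```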
S1042. Convert the first depth image into the first parallax image.

Taking the left image as the reference image, according to formula (2), the following formula is used to calculate the first parallax image corresponding to the first depth image obtained by the depth camera composed of the SPAD image sensor:

[formula (3), image PCTCN2020138128-appb-000006]

where P_D(x_0, y_0) is the parallax value at point (x_0, y_0) corresponding to the depth value Z(x_0, y_0) on the first depth image, f is the focal length of the SPAD image sensor, T_lt is the baseline length of the depth camera-left camera system, and H_lt is the homography matrix calibrated for the depth camera.
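The exact form of formula (3) is only available as an image in the source, so the sketch below uses the standard rectified-camera relation P_D = f·T_lt / Z as a stand-in and ignores the homography term H_lt; it is an assumption for illustration, not the patent's exact expression:

```python
import numpy as np

def depth_to_disparity(Z, f_px, T_lt_m):
    """Convert a depth map Z (metres) to a disparity map using P_D = f * T_lt / Z.

    Standard rectified-stereo relation used as a stand-in for formula (3); the
    patent's exact expression (including the homography H_lt) is only available
    as an image and is not reproduced here.
    """
    Z = np.asarray(Z, dtype=float)
    disparity = np.zeros_like(Z)
    valid = Z > 0                         # pixels with no depth stay at 0 (sparse map)
    disparity[valid] = f_px * T_lt_m / Z[valid]
    return disparity

# Example: 800-pixel focal length, 5 cm depth-camera-to-left-camera baseline (assumed)
Z = np.array([[2.0, 0.0], [1.0, 4.0]])
print(depth_to_disparity(Z, f_px=800.0, T_lt_m=0.05))   # [[20, 0], [40, 10]]
```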
It should be understood that in this embodiment the left image is selected as the reference image; the right image may also be selected as the reference image, which is not limited here.
S1043. Fit the parallax surface of the first parallax image with a binary cubic equation to obtain a smooth parallax surface.

Since the first parallax image obtained in the above steps is sparse, the missing data need to be filled in. From the sparse known parallax surface, a smooth parallax surface is fitted in disparity space, and the unknown disparities can then be filled by upsampling according to this surface. To ensure that the fitted parallax surface is as close to the real scene as possible, this embodiment fits the parallax surface of the first parallax image with a binary cubic equation, as follows:

d(x, y) = a_1 + a_2·x + a_3·y + a_4·x² + a_5·x·y + a_6·y² + a_7·x³ + a_8·x²·y + a_9·x·y² + a_10·y³    (4)

where d(x, y) is a three-dimensional parallax surface, a_1, a_2, ..., a_10 are coefficients, and x and y are pixel coordinates.
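A sketch of fitting formula (4) to sparse disparity samples by linear least squares and evaluating it on a dense grid is shown below; the sample data and the choice of solver are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def _design_matrix(x, y):
    # Columns match d(x,y) = a1 + a2*x + a3*y + a4*x^2 + a5*x*y + a6*y^2
    #                        + a7*x^3 + a8*x^2*y + a9*x*y^2 + a10*y^3
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                            x**3, x**2 * y, x * y**2, y**3])

def fit_cubic_disparity_surface(xs, ys, ds):
    """Least-squares fit of the binary cubic surface to sparse disparity samples."""
    A = _design_matrix(np.asarray(xs, float), np.asarray(ys, float))
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(ds, float), rcond=None)
    return coeffs                                   # a1 ... a10

def evaluate_surface(coeffs, width, height):
    """Densely evaluate the fitted surface to fill unknown disparities."""
    yy, xx = np.mgrid[0:height, 0:width].astype(float)
    A = _design_matrix(xx.ravel(), yy.ravel())
    return (A @ coeffs).reshape(height, width)

# Example with synthetic sparse samples drawn from d = 2 + 0.1*x + 0.05*y
rng = np.random.default_rng(1)
xs, ys = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
ds = 2 + 0.1 * xs + 0.05 * ys
coeffs = fit_cubic_disparity_surface(xs, ys, ds)
dense = evaluate_surface(coeffs, width=100, height=100)
print(dense[0, 0], dense[50, 50])                   # ~2.0 and ~9.5
```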
S105.对左图像与右图像进行立体匹配,以获取第二视差图像。S105. Perform stereo matching on the left image and the right image to obtain a second parallax image.
在其中一实施例中,对左图像与右图像进行立体匹配,以获取第二视差图像具体包括:In one embodiment, performing stereo matching on the left image and the right image to obtain the second parallax image specifically includes:
选取第一视差图像上的像素点作为种子点,引导左图像和右图像进行立体匹配以获得第二视差图像。The pixel points on the first parallax image are selected as seed points, and the left image and the right image are guided to perform stereo matching to obtain the second parallax image.
在一个实施例中,由于在低反射区域或者是折光行较强区域,由SPAD图像传感器组成的深度相机无法获取精确的深度信息,因此对于右图像每个视差范围内的像素点(x,y 0)深度距离代价函数为: In one embodiment, since a depth camera composed of a SPAD image sensor cannot obtain accurate depth information in a low-reflection area or an area with strong refraction lines, for the pixel points (x, y) within each parallax range of the right image 0 ) The depth distance cost function is:
Figure PCTCN2020138128-appb-000007
Figure PCTCN2020138128-appb-000007
式中,θ为根据经验选定的参数,用来调节深度距离代价函数的范围,以设定C(x,y 0)的阈值,若右图像上的某像素点(x,y 0)的深度距离代价函数的值在阈值范围之内,则该像素点为种子点,计算左图像与右图像的视差值,以获得第二视差图像;若右图像上的某像素点(x,y 0)的深度距离代价函数的值在阈值范围之外,则该点为非种子点,则将在双目立体匹配时将沿其水平方向搜索最近一个种子点将其视差值赋予该点。 In the formula, θ is a parameter selected according to experience, which is used to adjust the range of the depth distance cost function to set the threshold of C(x, y 0 ) . If the value of the depth distance cost function is within the threshold range, the pixel is the seed point, and the disparity value between the left image and the right image is calculated to obtain the second disparity image; if a certain pixel (x, y) on the right image 0 ) The value of the depth distance cost function is outside the threshold range, then the point is a non-seed point, then during binocular stereo matching, it will search for the nearest seed point along its horizontal direction and assign its disparity value to this point.
In one embodiment, the binocular camera formed by the left image sensor and the right image sensor is mounted horizontally, so that the optical axes of the two cameras are parallel and the images lie in the same horizontal plane; there is then no vertical parallax, the ordinate y_0 is the same in both images, and the maximum possible disparity search range is determined by the baseline of the binocular camera. In this application the left image is the reference image and the right image is the target image; that is, for a point (x_0, y_0) in the left (reference) image, a corresponding point is searched for within the disparity search range [(x_0 - m, y_0), (x_0 + m, y_0)] of the right (target) image.
Assuming a point (x, y_0) in the right (target) image is the matching point of the point (x_0, y_0) in the left (reference) image, the depth distance cost function C(x, y_0) is used to guide the stereo matching of the binocular camera, and the disparity is obtained as D = |x - x_0|, which yields the second parallax image.
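The sketch below illustrates the seed-guided search just described, under several assumptions: the seed classification of equation-image appb-000007 is supplied as a boolean mask, the per-pixel matching cost is a plain block-wise SAD standing in for whatever cost aggregation the full method uses, and non-seed pixels inherit the disparity of the nearest seed along the row, as stated above. All function and parameter names are illustrative.

```python
import numpy as np

def seed_guided_matching(left, right, is_seed, m=64, win=5):
    """Seed-guided disparity search of S105 (left image as the reference).
    is_seed: boolean mask from the depth-distance cost test; m: half search range."""
    h, w = left.shape
    r = win // 2
    disp = np.full((h, w), np.nan, dtype=np.float32)
    for y in range(r, h - r):
        for x0 in range(r, w - r):
            if not is_seed[y, x0]:
                continue  # non-seed pixels are filled in the propagation pass below
            ref = left[y - r:y + r + 1, x0 - r:x0 + r + 1].astype(np.float32)
            best_c, best_d = np.inf, 0.0
            # Search [x0 - m, x0 + m] on the same row y of the right (target) image.
            for x in range(max(r, x0 - m), min(w - r - 1, x0 + m) + 1):
                tgt = right[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
                c = np.abs(ref - tgt).sum()  # assumed SAD matching cost
                if c < best_c:
                    best_c, best_d = c, float(abs(x - x0))  # D = |x - x0|
            disp[y, x0] = best_d
        # Non-seed pixels inherit the disparity of the nearest seed along the row.
        row = disp[y]
        seeds_x = np.where(~np.isnan(row))[0]
        if seeds_x.size:
            for x0 in range(w):
                if np.isnan(row[x0]):
                    row[x0] = row[seeds_x[np.argmin(np.abs(seeds_x - x0))]]
    return disp
```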
S106. Fuse the first parallax image and the second parallax image to obtain a target image.
FIG. 3 is a schematic flowchart of fusing the first parallax image and the second parallax image in a fused depth measurement method provided by an embodiment of the present application. For ease of description, only the parts relevant to this embodiment are shown, detailed as follows:
In one embodiment, fusing the first parallax image and the second parallax image to obtain the target image includes the following steps:
S1061. Obtain a first credibility function from the first parallax image.
For the first depth image acquired by the SPAD image sensor, let the depth value at a point (x_0, y_0) of the first depth image be Z(x_0, y_0) and let its standard deviation be σ_z; from equation (3), the corresponding disparity standard deviation σ_d is obtained as
[Equation image PCTCN2020138128-appb-000008 in the source: disparity standard deviation derived from equation (3)]
Further, the disparity standard deviation at the point (x_0, y_0) is obtained:
[Equation image PCTCN2020138128-appb-000009 in the source: disparity standard deviation at the point (x_0, y_0)]
A minimum threshold σ_min and a maximum threshold σ_max of the disparity standard deviation are defined, where σ_min is the standard deviation of the brightest pixel at the nearest received depth and σ_max is the standard deviation of the darkest pixel at the farthest received depth. When the disparity standard deviation computed at a pixel of the first depth image is smaller than σ_min, the point is considered completely stable and its credibility is 1; when the standard deviation is larger than σ_max, the point is unreliable and its credibility is 0; when the standard deviation lies between the two thresholds, the credibility takes a value in (0, 1). This yields the first credibility function, as follows:
[Equation image PCTCN2020138128-appb-000010 in the source: first credibility function]
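A compact sketch of this step is shown below. The propagation of the depth standard deviation to a disparity standard deviation assumes the first-order relation that follows from d ≈ f·T/Z, and the in-between portion of the credibility function is taken as a linear ramp between σ_max and σ_min; both are assumptions, since the corresponding equations appear only as images in the source.

```python
import numpy as np

def disparity_sigma(z, sigma_z, f, baseline):
    """First-order propagation of the depth standard deviation to disparity:
    with d ≈ f * baseline / Z, sigma_d ≈ f * baseline * sigma_z / Z^2 (assumed form)."""
    z = np.asarray(z, dtype=np.float64)
    return f * baseline * np.asarray(sigma_z, dtype=np.float64) / (z ** 2)

def first_confidence(sigma_d, sigma_min, sigma_max):
    """First credibility function: 1 below sigma_min, 0 above sigma_max, and an
    assumed linear ramp in between, matching the behaviour described in the text."""
    w = (sigma_max - np.asarray(sigma_d, dtype=np.float64)) / (sigma_max - sigma_min)
    return np.clip(w, 0.0, 1.0)
```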
S1062. Obtain a second credibility function from the second parallax image.
In one embodiment, the second parallax image acquired by the binocular camera is computed according to an adaptive weighting algorithm, and the second credibility function is obtained, namely
[Equation image PCTCN2020138128-appb-000011 in the source: second credibility function]
where the two quantities shown as images PCTCN2020138128-appb-000012 and PCTCN2020138128-appb-000013 in the source are, respectively, the minimum matching cost and the second-smallest matching cost obtained by the binocular camera in the adaptive weighting algorithm, and T_c is a constant parameter; to prevent the quantity shown as image PCTCN2020138128-appb-000014 from being 0, T_c > 0 is set.
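A sketch of the second credibility function follows. The exact expression appears only as an image in the source; the ratio below is an assumed form built from the quantities the text names, namely the minimum matching cost, the second-smallest matching cost, and the constant T_c > 0, and it grows toward 1 as the two costs separate.

```python
import numpy as np

def second_confidence(cost_min, cost_second, t_c=1e-3):
    """Second credibility function (assumed form): confidence increases as the gap
    between the minimum matching cost and the second-smallest matching cost widens;
    T_c > 0 keeps the denominator away from zero."""
    c1 = np.asarray(cost_min, dtype=np.float64)
    c2 = np.asarray(cost_second, dtype=np.float64)
    return np.clip((c2 - c1) / (c2 + t_c), 0.0, 1.0)
```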
S1063. Fuse the first parallax image and the second parallax image with different weights formed from the first credibility function and the second credibility function, to obtain the per-pixel fused disparity.
Let the first disparity value obtained from the SPAD image sensor be d_t, and the second disparity value obtained from the left image sensor and the right image sensor be d_s. Different weights are then formed from the first credibility function and the second credibility function, and the two disparities are fused to give the per-pixel fused disparity d:
d = w_t·d_t + w_s·d_s   (10)
where w_t is the weight of the disparity value derived from the depth image obtained by the SPAD image sensor, and w_s is the weight of the disparity obtained by stereo matching of the left image and the right image acquired by the left image sensor and the right image sensor.
Further, from equation (10):
[Equation (11), shown as image PCTCN2020138128-appb-000015 in the source: expression for w_s]
w_t = 1 - w_s   (12)
It should be understood that this application does not limit the way the weights are computed; any weight-computation method in the prior art may be applied here.
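Below is one possible per-pixel fusion consistent with equations (10) and (12). Because equation (11) appears only as an image in the source, the weight w_s is assumed here to be the second credibility divided by the sum of the two credibilities; this is an assumption, and, as noted above, any other weight-computation scheme could be substituted.

```python
import numpy as np

def fuse_disparity(d_t, d_s, conf_t, conf_s, eps=1e-6):
    """Per-pixel fusion d = w_t*d_t + w_s*d_s with w_t = 1 - w_s (equations (10), (12)).
    w_s = conf_s / (conf_t + conf_s) is an assumed stand-in for equation (11)."""
    conf_t = np.asarray(conf_t, dtype=np.float64)
    conf_s = np.asarray(conf_s, dtype=np.float64)
    w_s = conf_s / np.maximum(conf_t + conf_s, eps)
    w_t = 1.0 - w_s
    return w_t * np.asarray(d_t, dtype=np.float64) + w_s * np.asarray(d_s, dtype=np.float64)
```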
S1064. Perform three-dimensional reconstruction of the scene area according to the per-pixel fused disparity to obtain the target image.
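As an illustration of this last step, the fused disparity can be back-projected into a point cloud with the standard pinhole relations; the intrinsic parameters (focal length f in pixels, baseline, principal point (cx, cy)) are assumed to come from the calibration described earlier, and the function name is illustrative.

```python
import numpy as np

def disparity_to_points(d, f, baseline, cx, cy):
    """Back-project a fused disparity map into 3-D points: Z = f*B/d,
    X = (x - cx)*Z/f, Y = (y - cy)*Z/f. Pixels with non-positive disparity get Z = 0."""
    d = np.asarray(d, dtype=np.float64)
    h, w = d.shape
    ys, xs = np.mgrid[0:h, 0:w]
    z = np.where(d > 0, f * baseline / np.maximum(d, 1e-9), 0.0)
    x3 = (xs - cx) * z / f
    y3 = (ys - cy) * z / f
    return np.dstack([x3, y3, z])
```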
A second aspect of the embodiments of the present application provides a fused depth measurement device.
FIG. 4 is a schematic structural diagram of a fused depth measurement device provided by an embodiment of the present application. For ease of description, only the parts relevant to this embodiment are shown, detailed as follows:
In one embodiment, a fused depth measurement device includes a transmitting module 100, a receiving module 200, and a control and processing module 300.
The transmitting module 100 is configured to emit a pulsed light beam 10 toward a scene area 30.
Specifically, the transmitting module 100 includes a light source array 101, a lens element 102, and a beam scanning element 103.
The light source array 101 is configured to generate the pulsed beam; the lens element 102 is configured to adjust the divergence of the pulsed beam; and the beam scanning element 103 is configured to steer the pulsed beam so as to direct it toward the various directions of the scene area 30.
The receiving module 200 includes a detection module and an acquisition module.
The detection module includes a single-photon avalanche diode image sensor 201, which is configured to receive the reflected signal 20 of the pulsed beam 10 and to output an electrical signal representing the round-trip transit time between the pulsed beam 10 and the reflected signal 20.
The acquisition module includes a left image sensor 202 and a right image sensor 203; the left image sensor 202 is configured to acquire a left image of the scene area 30, and the right image sensor 203 is configured to acquire a right image of the scene area 30.
The control and processing module 300 includes a processing module, a conversion module, and a fusion module; these may be implemented as independent modules each providing its own functions, or as a single integrated processor providing all of the functions.
The processing module is configured to process the transit-time electrical signal to obtain a first depth image, and to convert the first depth image into a first parallax image.
The conversion module is configured to perform stereo matching on the left image and the right image to obtain a second parallax image.
The fusion module is configured to fuse the first parallax image and the second parallax image to obtain the target image.
It should be noted that the fused depth measurement device of this embodiment is the device embodiment corresponding to the fused depth measurement method described above; for the specific software implementation of each module of the measurement device, reference may be made to the embodiments of FIG. 1 to FIG. 3, and details are not repeated here.
In the fused depth measurement method and measurement device of the embodiments of the present invention described above, a first depth image is acquired by the single-photon avalanche diode image sensor, and the pixel points of the first depth image are used as seed points to guide the left image sensor and the right image sensor in stereo matching to obtain a second parallax image; the first depth image is converted into a first parallax image, and different weights are formed from the first credibility function and the second credibility function to fuse the first parallax image and the second parallax image; the fused parallax image is used to recover the three-dimensional reconstruction of the scene area, yielding a high-resolution depth image with high recognition accuracy.
Various embodiments of devices, circuits, apparatuses, systems, and/or methods are described herein. Numerous specific details are set forth to provide a thorough understanding of the overall structure, function, manufacture, and use of the embodiments as described in the specification and illustrated in the accompanying drawings. Those skilled in the art will understand, however, that the embodiments may be practiced without such specific details. In other instances, well-known operations, components, and elements are described in detail so as not to make the embodiments in the specification difficult to understand. Those skilled in the art will understand that the embodiments described and shown herein are non-limiting examples, and that the specific structural and functional details disclosed herein may be representative without necessarily limiting the scope of the embodiments.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the scope of protection of the present application.

Claims (10)

  1. A fused depth measurement method for measuring the distance of a scene area, wherein the measurement method comprises:
    emitting a pulsed light beam toward the scene area;
    receiving a reflected signal of the pulsed beam, and outputting an electrical signal representing the round-trip transit time between the pulsed beam and the reflected signal;
    acquiring a left image and a right image of the scene area;
    processing the transit-time electrical signal to obtain a first depth image, and converting the first depth image into a first parallax image;
    performing stereo matching on the left image and the right image to obtain a second parallax image; and
    fusing the first parallax image and the second parallax image to obtain a target image.
  2. The measurement method according to claim 1, wherein emitting the pulsed light beam toward the scene area comprises:
    generating the pulsed beam;
    adjusting the divergence of the pulsed beam; and
    steering the pulsed beam so as to direct it toward the various directions of the scene area.
  3. A fused depth measurement device, comprising:
    a transmitting module, configured to emit a pulsed light beam toward a scene area;
    a detection module, configured to receive a reflected signal of the pulsed beam and output an electrical signal representing the round-trip transit time between the pulsed beam and the reflected signal;
    an acquisition module, configured to acquire a left image and a right image of the scene area;
    a processing module, configured to process the transit-time electrical signal to obtain a first depth image and convert the first depth image into a first parallax image;
    a conversion module, configured to perform stereo matching on the left image and the right image to obtain a second parallax image; and
    a fusion module, configured to fuse the first parallax image and the second parallax image to obtain a target image.
  4. The measurement device according to claim 3, wherein the transmitting module comprises:
    a light source array, configured to generate the pulsed beam;
    a lens element, configured to adjust the divergence of the pulsed beam; and
    a beam scanning element, configured to steer the pulsed beam so as to direct it toward the various directions of the scene area.
  5. The measurement device according to claim 4, wherein the detection module comprises a single-photon avalanche diode image sensor;
    in the processing module, converting the first depth image into the first parallax image specifically comprises:
    calculating, with the left image as the reference image, the first parallax image corresponding to the first depth image:
    [Equation image PCTCN2020138128-appb-100001 in the source]
    where P_D(x_0, y_0) is the disparity value at the point (x_0, y_0) corresponding to the depth value Z(x_0, y_0) of the first depth image, f is the focal length of the single-photon avalanche diode image sensor, T_lt is the baseline length of the system formed by the depth camera and the left camera, and H_lt is the homography matrix calibrated for the depth camera.
  6. The measurement device according to claim 5, wherein, in the processing module, converting the first depth image into the first parallax image further comprises:
    fitting the parallax surface of the first parallax image with the following bivariate cubic equation to obtain a smooth parallax surface:
    d(x, y) = a_1 + a_2·x + a_3·y + a_4·x^2 + a_5·x·y + a_6·y^2 + a_7·x^3 + a_8·x^2·y + a_9·x·y^2 + a_10·y^3
    where d(x, y) is a three-dimensional parallax surface, a_1, a_2, ..., a_10 are coefficients, and x and y are pixel coordinates.
  7. The measurement device according to claim 6, wherein, in the processing module, before the first depth image is converted into the first parallax image, the following is further included:
    jointly calibrating the single-photon avalanche diode image sensor with the left image sensor or the right image sensor:
    converting the first depth image obtained by the single-photon avalanche diode image sensor into point cloud data, and mapping the point cloud data through the jointly calibrated transformation matrix into the camera coordinate system of the left image sensor or the right image sensor, to obtain planar two-dimensional points referenced to the left image sensor or the right image sensor.
  8. The measurement device according to claim 7, wherein, in the conversion module, performing stereo matching on the left image and the right image to obtain the second parallax image specifically comprises:
    selecting pixel points on the first parallax image as seed points, and using them to guide stereo matching between the left image and the right image to obtain the second parallax image, specifically using the following formula:
    [Equation image PCTCN2020138128-appb-100002 in the source: depth distance cost function]
    where (x, y_0) is a pixel point within each disparity range of the right image, and θ is a selected parameter.
  9. The measurement device according to claim 8, wherein fusing the first parallax image and the second parallax image to obtain a depth image specifically comprises:
    obtaining a first credibility function from the first parallax image;
    obtaining a second credibility function from the second parallax image;
    fusing the first parallax image and the second parallax image with different weights formed from the first credibility function and the second credibility function, to obtain a per-pixel fused disparity; and
    performing three-dimensional reconstruction of the scene area according to the per-pixel fused disparity to obtain the target image.
  10. The measurement device according to claim 9, wherein fusing the first parallax image and the second parallax image to obtain the per-pixel fused disparity specifically comprises:
    forming different weights from the first credibility function and the second credibility function and fusing the two disparities to obtain the per-pixel fused disparity, specifically using the following formulas:
    d = w_t·d_t + w_s·d_s
    [Equation image PCTCN2020138128-appb-100003 in the source: expression for w_s]
    w_t = 1 - w_s
    where d_t is the first disparity value, d_s is the second disparity value, w_t is the weight of the first disparity value, and w_s is the weight of the second disparity value.
PCT/CN2020/138128 2020-09-08 2020-12-21 Fused depth measurement method and measurement device WO2022052366A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010933933.1A CN112230244B (en) 2020-09-08 2020-09-08 Fused depth measurement method and measurement device
CN202010933933.1 2020-09-08

Publications (1)

Publication Number Publication Date
WO2022052366A1 true WO2022052366A1 (en) 2022-03-17

Family

ID=74116726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138128 WO2022052366A1 (en) 2020-09-08 2020-12-21 Fused depth measurement method and measurement device

Country Status (2)

Country Link
CN (1) CN112230244B (en)
WO (1) WO2022052366A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117495698A (en) * 2024-01-02 2024-02-02 福建卓航特种设备有限公司 Flying object identification method, system, intelligent terminal and computer readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115127449B (en) * 2022-07-04 2023-06-23 山东大学 Non-contact fish body measuring device and method assisting binocular vision


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150215602A1 (en) * 2014-01-29 2015-07-30 Htc Corporation Method for ajdusting stereo image and image processing device using the same
CN105115445A (en) * 2015-09-14 2015-12-02 杭州光珀智能科技有限公司 Three-dimensional imaging system and imaging method based on combination of depth camera and binocular vision
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
CN106772431A (en) * 2017-01-23 2017-05-31 杭州蓝芯科技有限公司 A kind of Depth Information Acquistion devices and methods therefor of combination TOF technologies and binocular vision
CN109255811A (en) * 2018-07-18 2019-01-22 南京航空航天大学 A kind of solid matching method based on the optimization of confidence level figure parallax

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI BOWEN: "Research on Depth Information Acquisition Technology Based on ToF-Binocular Fusion", MASTER THESIS, TIANJIN POLYTECHNIC UNIVERSITY, CN, no. 4, 15 April 2020 (2020-04-15), CN , XP055911254, ISSN: 1674-0246 *


Also Published As

Publication number Publication date
CN112230244A (en) 2021-01-15
CN112230244B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN110596721B (en) Flight time distance measuring system and method of double-shared TDC circuit
KR101706093B1 (en) System for extracting 3-dimensional coordinate and method thereof
WO2021128587A1 (en) Adjustable depth measuring device and measuring method
WO2022052366A1 (en) Fused depth measurement method and measurement device
CN108881717B (en) Depth imaging method and system
CN108924408B (en) Depth imaging method and system
CN105432080A (en) A time-of-flight camera system
CN105115445A (en) Three-dimensional imaging system and imaging method based on combination of depth camera and binocular vision
JPH095050A (en) Three-dimensional image measuring apparatus
KR101145132B1 (en) The three-dimensional imaging pulsed laser radar system using geiger-mode avalanche photo-diode focal plane array and auto-focusing method for the same
CN104251995A (en) Three-dimensional color laser scanning technology
US11803982B2 (en) Image processing device and three-dimensional measuring system
CN110986816B (en) Depth measurement system and measurement method thereof
WO2021208582A1 (en) Calibration apparatus, calibration system, electronic device and calibration method
JP2020020612A (en) Distance measuring device, method for measuring distance, program, and mobile body
CN113780349A (en) Method for acquiring training sample set, model training method and related device
KR20210127950A (en) Three-dimensional imaging and sensing using dynamic vision sensors and pattern projection
CN111965658A (en) Distance measuring system, method and computer readable storage medium
CN108924407B (en) Depth imaging method and system
US10872442B2 (en) Apparatus and a method for encoding an image captured by an optical acquisition system
CN111965659A (en) Distance measuring system, method and computer readable storage medium
CN111510700A (en) Image acquisition device
US20220003875A1 (en) Distance measurement imaging system, distance measurement imaging method, and non-transitory computer readable storage medium
CN212471510U (en) Mobile robot
CN111277811B (en) Three-dimensional space camera and photographing method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20953151

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20953151

Country of ref document: EP

Kind code of ref document: A1