WO2018127007A1 - Depth image acquisition method and system - Google Patents

Info

Publication number
WO2018127007A1
WO2018127007A1 (application PCT/CN2017/119992)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
parallax
depth map
sub-pixel
matching cost
Prior art date
Application number
PCT/CN2017/119992
Other languages
French (fr)
Chinese (zh)
Inventor
周剑
龙学军
Original Assignee
成都通甲优博科技有限责任公司
Priority date
Filing date
Publication date
Application filed by 成都通甲优博科技有限责任公司 filed Critical 成都通甲优博科技有限责任公司
Publication of WO2018127007A1 publication Critical patent/WO2018127007A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Definitions

  • the present invention relates to the field of computer vision technology, and in particular, to a method and system for acquiring a depth map.
  • Depth precision is one of the most important characteristics of sensors used for distance estimation.
  • Depth maps are a very common application in the positioning and dimensioning of automated systems.
  • the stereo camera system uses pixel-level correspondence between two images taken from different angles to achieve image depth estimation.
  • however, for long-distance systems, the accuracy of a depth map based on integer parallax is insufficient: a depth map obtained by integer-pixel stereo matching is discretely distributed in the parallax space, with an obvious layering effect, and therefore cannot meet the measurement accuracy required by high-precision application scenarios.
  • to this end, it is necessary to optimize the depth map, expressed in units of integer parallax, so that the depth map information becomes continuous, allowing accurate three-dimensional measurement information to be obtained in applications.
  • at present, optimization methods for depth maps mainly focus on depth map processing based on image information. These methods rely on the pixel information, edges, etc. of the depth map, and essentially process the two-dimensional depth image by filtering, classical interpolation, and the like. To some extent they can improve the imaging of the depth map, but they fall far short of the requirements of measurement applications with very high precision requirements. Therefore, how to obtain a depth map with accurate depth precision is a technical problem that persons skilled in the art need to solve.
  • the object of the present invention is to provide a method and a system for acquiring a depth map, which can solve the pixel locking problem, realize accurate estimation of sub-pixels, and obtain high depth precision.
  • the algorithm requires little memory, is computationally simple, takes little time, and has good real-time performance.
  • the present invention provides a method for obtaining a depth map, including:
  • determining, from the matching cost difference, a continuous matching cost fitting function based on integer pixel sampling, fitting the integer parallax space multiple times with the continuous matching cost fitting function to obtain a continuous parallax space, and calculating pixel coordinates of sub-pixel precision to obtain a sub-pixel parallax space;
  • a depth map is obtained by calculating a depth value from the sub-pixel parallax space.
  • the disparity of the left image and the right image is obtained by using a parallax acquisition method, including:
  • the parallax of the left image and the right image is obtained by the fast matching method of the same name image point.
  • calculating, according to the disparity, a matching cost difference between two adjacent pixels in the depth map including:
  • C d is the matching cost after aggregation in the stereo matching, corresponding to the disparity d of the current pixel point
  • C d-1 is the matching cost of the current pixel point at disparity d-1
  • C d+1 is the matching cost of the current pixel point at disparity d+1
  • LeftDif is the matching cost difference between the current pixel point and the pixel point to the left of its same-name point
  • RightDif is the matching cost difference between the current pixel point and the pixel point to the right of its same-name point.
  • determining, from the matching cost difference, the continuous matching cost fitting function based on integer pixel sampling includes:
  • calculating pixel coordinates of sub-pixel precision to obtain a sub-pixel disparity space including:
  • the depth value is calculated according to the sub-pixel parallax space, and the depth map is obtained, including:
  • Q is the re-projection matrix
  • Z is the depth value after sub-pixel optimization.
  • the method further includes:
  • the depth map is output through a display.
  • the invention also provides a depth map acquisition system, comprising:
  • a disparity calculation module configured to acquire a disparity of a left image and a right image by using a parallax acquisition method
  • a matching cost difference calculation module configured to calculate, according to the disparity, a matching cost difference between each pixel point in the depth map and two pixels adjacent to the same name point;
  • a sub-pixel disparity space obtaining module, configured to determine, from the matching cost difference, a continuous matching cost fitting function based on integer pixel sampling, fit the integer parallax space multiple times with the continuous matching cost fitting function to obtain a continuous parallax space, and calculate pixel coordinates of sub-pixel precision to obtain a sub-pixel parallax space;
  • a depth map obtaining module configured to calculate a depth value according to the sub-pixel parallax space, to obtain a depth map.
  • the sub-pixel disparity space acquiring module includes:
  • a continuous matching cost fitting function determining unit, configured to determine a fitting variable h and, according to the fitting variable h, determine a continuous matching cost fitting function f(h) based on integer pixel sampling;
  • LeftDif is the matching cost difference between the current pixel point and the left pixel point
  • RightDif is the matching cost difference between the current pixel point and the right pixel point
  • d is the parallax of the current pixel point after integer three-dimensional reconstruction
  • C d is the matching cost of the current pixel point at parallax d.
  • system further includes:
  • an output module configured to output the depth map through a display.
  • a method for obtaining a depth map according to the present invention includes: obtaining the disparity of a left image and a right image by using a disparity acquisition method; calculating, according to the disparity, the matching cost difference between each pixel point in the depth map and the two pixel points adjacent to its same-name point; determining, from the matching cost difference, a continuous matching cost fitting function based on integer pixel sampling, and fitting the integer parallax space multiple times with the continuous matching cost fitting function to obtain a continuous parallax space; calculating pixel coordinates of sub-pixel precision to obtain a sub-pixel disparity space; and calculating a depth value according to the sub-pixel disparity space to obtain a depth map;
  • the method performs sub-pixel fitting directly on the depth map based on the integer parallax space; compared with a depth map algorithm based on stereo matching in the sub-pixel space, it greatly reduces the required memory space while also shortening the running time of the algorithm.
  • the discrete parallax space is fitted and interpolated by the continuous function fitting method to obtain a continuous parallax space, thus eliminating the layering effect of the depth map, so that the accuracy of parallax-based three-dimensional measurement is improved.
  • the method is suitable for three-dimensional measurement scenes with different measurement accuracy requirements, especially those requiring very high measurement accuracy; the present invention also provides a depth map acquisition system, which has the above-mentioned beneficial effects and will not be described again herein.
  • FIG. 1 is a flowchart of a method for acquiring a depth map according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a stereo matching integer parallax space according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a depth information space according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of matching cost curve fitting according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a pixel of a sensor according to an embodiment of the present invention.
  • FIG. 6 is a structural block diagram of an acquisition system of a depth map according to an embodiment of the present invention.
  • FIG. 7 is a structural block diagram of an acquisition system of another depth map according to an embodiment of the present invention.
  • the core of the present invention is to provide a method and system for acquiring a depth map, which can solve the pixel locking problem, realize accurate estimation of sub-pixels, and obtain high depth precision.
  • the algorithm requires little memory, is computationally simple, takes little time, and has good real-time performance.
  • the research on the acquisition method of depth map usually focuses on a simple window solution based on stereo matching.
  • the original classification proposed by Scharstein and Szeliski divides the stereo algorithm into two main groups: the local method and the global method.
  • the local algorithm class uses a limited support region around each point to calculate the disparity. This method is based on a selected matching window, and usually uses matching cost aggregation to achieve a smoothing effect. Large windows reduce the number of unsuccessful matches and reduce the mismatch rate at depth discontinuities.
  • the main advantage of the local method is that the computational complexity is small and can be realized in real time.
  • the main disadvantage is that only local information near the pixel is used at each step, which leaves these methods unable to handle featureless regions or repeated-texture regions.
  • this embodiment provides a method for acquiring a depth map; the method can solve the pixel locking problem, realize accurate estimation of sub-pixels, and obtain high depth precision.
  • the algorithm requires little memory, is computationally simple, takes little time, and has good real-time performance.
  • FIG. 1 is a flowchart of a method for acquiring a depth map according to an embodiment of the present invention
  • the main purpose of this step is to obtain the parallax of pixel-level precision, that is, to obtain the disparity d of the left image and the right image by using the parallax acquisition method.
  • This embodiment does not limit the calculation method of the specific parallax. Since the parallax is the basis of subsequent calculations, in order to ensure the reliability and accuracy of the subsequent calculated values, a highly accurate parallax calculation method, such as a fast matching method of the same name image point, can be selected here.
  • the user should not only consider the accuracy of the parallax calculation, but also the calculation speed of the hardware and the requirements of the real-time performance of the system.
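The embodiment leaves the integer-disparity method open (a "fast matching method of same-name image points" is named but not specified). As orientation only, the sketch below shows one conventional way such pixel-level disparities are obtained: sum-of-absolute-differences (SAD) block matching on a rectified pair. The window size, disparity range, and SAD cost are illustrative assumptions, not the patent's method.

```python
import numpy as np

def sad_block_matching(left, right, max_disp=8, win=3):
    """Integer-pixel disparity by SAD block matching (illustrative only;
    the embodiment's 'fast matching method of same-name image points'
    is not specified in the source)."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(ref - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the right image is the left shifted by 4 pixels,
# so the true disparity d = x_l - x_r = 4 everywhere away from the wrap seam.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(20, 40)).astype(np.float64)
right = np.roll(left, -4, axis=1)
disp = sad_block_matching(left, right)
```

Real pipelines replace this brute-force search with aggregated cost volumes (as the stereo matching described below assumes), but the output is the same kind of integer disparity map d that the subsequent sub-pixel optimization refines.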
  • the camera system calibration is first performed.
  • the internal parameters and external parameters of the camera in the binocular camera system are first calibrated, and the camera matrix K, the distortion matrix D, the rotation matrix R, the translation vector T and the re-projection matrix Q of the binocular camera system are obtained.
  • the re-projection matrix is Q, where T x is the x component of the translation vector T.
  • the pixel-level parallax d is calculated.
  • the parallax is the difference between the positions at which the left and right cameras observe the same target.
  • in stereoscopic vision it is described as the distance, along the X axis, between the same-name points in the left image and the right image.
  • the mathematical description is:
  • d = x l - x r ; where x l is the distance of the same-name point in the left image along the X axis, and x r is the distance of the same-name point in the right image along the X axis.
  • S120 A continuous matching cost fitting function based on the integer pixel sampling determined by the matching cost difference, and fitting the integer parallax space by the continuous matching cost fitting function to obtain a continuous parallax space. Calculating pixel coordinates of sub-pixel precision to obtain a sub-pixel parallax space;
  • the precise position, in the world coordinate system, of an ideal point on a sensor pixel cannot be reflected in the image: the pixel coordinate obtained in the image only represents part of the pixel's position information (such as its center point), so the image information represented by this coordinate cannot reflect the image information of the entire pixel. This fundamentally causes a pixel coordinate positioning error, that is, an image recognition error.
  • the integer parallax obtained by stereo matching is discontinuous.
  • x is an image plane x-axis
  • y is an image plane y-axis.
  • the parallax d is discontinuous; within the d layers of parallax, from near to far, lie layer 0, layer 1, layer 2, ..., layer d-1, layer d.
  • the conversion relationship between integer parallax and depth obtained by stereo matching is:
  • the integer parallax is discontinuous; after three-dimensional recovery, the depth information transformed from the integer parallax is likewise distributed in discrete layers, that is, the corresponding depth information space is also discontinuous, as shown in FIG. 3.
  • the distance between the d+1 layer and the d layer is:
  • is the pixel size, that is, the spatial size occupied by each pixel
  • b is the baseline length
  • z is the depth value. It can be seen that the larger the depth value, the more obvious the layering effect: a smaller parallax brings in a greater error, while a smaller depth value corresponds to a larger parallax and a smaller error.
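The layer-spacing behavior described above follows from the standard stereo relation z = f·b/d (the patent's own conversion formula appears only as an image in the source). A short sketch, with assumed focal length and baseline, shows the gap between adjacent integer-disparity layers growing as depth increases:

```python
# Standard stereo depth-from-disparity relation z = f*b/d, with f in pixels
# and b in metres. f and b below are assumed values for illustration; the
# patent's own conversion formula appears only as an image in the source.
f = 1000.0  # focal length in pixels (assumed)
b = 0.12    # baseline in metres (assumed)

def depth(d):
    return f * b / d

def layer_gap(d):
    """Distance between the depth layers for disparities d and d+1."""
    return depth(d) - depth(d + 1)

gap_far = layer_gap(2)    # small disparity -> large depth -> coarse layers
gap_near = layer_gap(50)  # large disparity -> small depth -> fine layers
```

For these values the far gap is 20 m while the near gap is roughly 5 cm, matching the text: the larger the depth value, the more obvious the layering effect.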
  • Sub-pixel optimization of the depth map is generally implemented using a matching cost curve fitting method based on integer sampling, briefly described as curve fitting using the point to be optimized and the matching costs of the two points adjacent to it. As shown in FIG. 4, the minimum point C s can be obtained by curve fitting of points C 1 , C 2 , and C 3 ; point C s corresponds to the matching cost at the sub-pixel-level parallax, that is, the optimized version of point C 2 .
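The three-point curve fit described above is most often realized as a parabola through (C1, C2, C3), with a closed-form sub-pixel offset. The sketch below shows that generic textbook version for orientation only; the patent itself adopts a cosine fitting function instead, precisely because parabola fits suffer from pixel locking.

```python
def parabola_subpixel(c1, c2, c3):
    """Sub-pixel offset h of the cost minimum, from the matching costs at
    disparities d-1, d, d+1 (c2 must be the smallest of the three).
    Classic three-point parabola fit; shown for comparison only, since the
    patent replaces it with a cosine fit to avoid pixel locking."""
    denom = c1 - 2.0 * c2 + c3
    if denom == 0.0:
        return 0.0  # degenerate (flat) cost curve: keep the integer disparity
    return (c1 - c3) / (2.0 * denom)

# Costs sampled from the true parabola C(h) = (h - 0.25)**2 at h = -1, 0, 1;
# the fit recovers the minimum at h = 0.25 exactly.
c1, c2, c3 = (-1 - 0.25)**2, (0 - 0.25)**2, (1 - 0.25)**2
h = parabola_subpixel(c1, c2, c3)
```

The refined disparity is then d + h, which is exactly the role C s plays relative to C 2 in FIG. 4.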
  • Steps S110 to S120 are sub-pixel optimization processes by curve fitting. That is, step S110 to step S120 mainly include a matching cost difference calculation process, a fitting variable determination process, a continuous matching cost fitting function determining process, and a pixel coordinate calculation process of sub-pixel level precision. This embodiment does not limit the specific implementation forms of these several processes. As long as it is guaranteed to use the continuous matching cost fit function to perform the matching cost fit, the sub-pixel disparity space can be determined.
  • calculating a matching cost difference between two adjacent pixel points in the depth map may include, that is, the matching cost difference calculation process may include:
  • C d is the matching cost after aggregation in the stereo matching, corresponding to the disparity d of the current pixel point
  • C d-1 is the matching cost of the current pixel point at disparity d-1
  • C d+1 is the matching cost of the current pixel point at disparity d+1
  • LeftDif is the matching cost difference between the current pixel point and the pixel point to the left of its same-name point
  • RightDif is the matching cost difference between the current pixel point and the pixel point to the right of its same-name point.
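The formulas for LeftDif and RightDif appear only as images in the source; given the symbol definitions above, the natural reading is the plain neighbouring-cost differences LeftDif = C_d − C_(d−1) and RightDif = C_d − C_(d+1). That reading is an assumption, sketched here in vectorized form over a toy aggregated cost volume:

```python
import numpy as np

# cost[y, x, d]: aggregated matching cost per pixel and disparity (toy data).
# LeftDif/RightDif below assume the plain neighbouring-cost differences
# LeftDif = C_d - C_(d-1), RightDif = C_d - C_(d+1); the patent's exact
# formulas appear only as images in the source.
rng = np.random.default_rng(1)
cost = rng.random((4, 6, 16))
disp = cost.argmin(axis=2)               # winner-take-all integer disparity

ys, xs = np.indices(disp.shape)
d = np.clip(disp, 1, cost.shape[2] - 2)  # keep d-1 and d+1 inside the volume
left_dif = cost[ys, xs, d] - cost[ys, xs, d - 1]
right_dif = cost[ys, xs, d] - cost[ys, xs, d + 1]
```

Where d was not clipped, C_d is the minimum of the cost curve, so both differences are non-positive; their relative magnitudes encode which side of the integer disparity the true sub-pixel minimum lies on.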
  • the method of continuous function fitting is used to fit and interpolate the discrete parallax spaces to obtain a continuous parallax space, thereby weakening the layering effect of the depth map, so that the accuracy of the three-dimensional measurement based on parallax is improved, and is suitable for Measurement accuracy requires different 3D measurement scenarios, especially 3D measurement scenarios where measurement accuracy is very high.
  • a cosine function is used as the continuous matching cost fitting function for matching cost fitting. That is, determining the continuous matching cost fitting function based on integer pixel sampling from the matching cost difference — the fitting variable and continuous matching cost fitting function determining process — may include:
  • the cosine function is used to perform the matching cost fitting of integer pixel sampling, which can substantially eliminate the pixel locking phenomenon. That is, using the cosine function to interpolate the parallax space reduces the complexity of the parallax space fitting, eliminates the pixel locking effect of the parallax space fitting, and improves the interpolation precision of the parallax space.
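The patent's fitting variable h and cosine function f(h) appear only as images in the source. To make the idea concrete, the sketch below fits one plausible cosine model, C(h) = a + b·cos(πh/2 + φ), through the three cost samples and solves for the minimum in closed form; the specific model and its period are assumptions, not the patent's formula.

```python
import math

def cosine_subpixel(c_m1, c0, c_p1):
    """Sub-pixel offset h of the cost minimum from the three costs at
    h = -1, 0, +1, assuming the cosine model C(h) = a + b*cos(pi*h/2 + phi).
    One plausible cosine fit; the patent's exact f(h) appears only as an
    image in the source."""
    a = 0.5 * (c_m1 + c_p1)      # mean level of the two outer samples
    b_sin = 0.5 * (c_m1 - c_p1)  # equals b*sin(phi)
    b_cos = c0 - a               # equals b*cos(phi)
    phi = math.atan2(b_sin, b_cos)
    h = 2.0 * (math.pi - phi) / math.pi  # the cosine is minimal at angle pi
    return (h + 2.0) % 4.0 - 2.0         # wrap into (-2, 2]

# Generate samples from the model with a known minimum at h* = 0.3;
# the closed-form fit recovers it exactly for model-generated data.
a, b, h_true = 5.0, 3.0, 0.3
phi = math.pi - math.pi * h_true / 2.0
costs = [a + b * math.cos(math.pi * h / 2.0 + phi) for h in (-1, 0, 1)]
h_est = cosine_subpixel(*costs)
```

Unlike the parabola fit, a sinusoidal model of this kind does not systematically pull estimates toward integer positions, which is the pixel-locking effect the text says the cosine fitting eliminates.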
  • the obvious layering reflects that integer pixel precision is insufficient to describe accurate image information; however, the image information acquired by the image acquisition sensor is pixel-based image information.
  • in order to obtain accurate image depth information, it is necessary to sub-pixel optimize the depth map based on the integer parallax space, that is, to optimize the integer parallax space.
  • the image pixel obtained by the sensor occupies a certain space.
  • the pixel position in the stereo matching based on the integer pixel is only a certain point of the pixel, and generally takes the geometric center point coordinate of the pixel as the position coordinate of the integer pixel, as shown in FIG. 5 .
  • a is the coordinate of pixel A in the sensor, but in reality pixel A also contains points such as b, c, and d; therefore, point a alone cannot represent the image information of the entire pixel.
  • this adjustment range is within half a pixel of the integer parallax; that is, a pixel coordinate with sub-pixel precision is preferably calculated. Obtaining the sub-pixel parallax space can include:
  • sub-pixel optimization is a process of fitting a floating-point parallax space within the integer parallax space, using the continuous matching cost fitting function to perform multiple fittings; it is the process of obtaining continuous data from discrete data by curve fitting.
  • step S110 to step S120 are sub-pixel optimization processes.
  • the sub-pixel parallax space is converted to obtain a depth value, thereby obtaining a depth map after sub-pixel optimization based on the depth map.
  • This embodiment does not limit the conversion process from the sub-pixel parallax space to the depth map.
  • the depth value is calculated according to the sub-pixel disparity space, and the obtaining the depth map may include:
  • Q is the re-projection matrix
  • Z is the depth value after sub-pixel optimization.
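The reprojection and unitization formulas are images in the source. For orientation, here is the standard OpenCV-style reprojection step [X Y Z W]ᵀ = Q·[u v d 1]ᵀ with world point (X/W, Y/W, Z/W), under an assumed Q layout and sign convention (these vary between toolchains):

```python
import numpy as np

# OpenCV-style reprojection matrix Q (assumed layout; the patent's Q and its
# unitization formula appear only as images in the source, and sign
# conventions vary between toolchains).
f, cx, cy = 1000.0, 320.0, 240.0  # intrinsics in pixels (assumed)
Tx = 0.12                         # baseline in metres (assumed)
Q = np.array([
    [1.0, 0.0, 0.0,      -cx],
    [0.0, 1.0, 0.0,      -cy],
    [0.0, 0.0, 0.0,        f],
    [0.0, 0.0, 1.0 / Tx, 0.0],
])

def reproject(u, v, d):
    """World coordinates from image point (u, v) and (sub-pixel) disparity d."""
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return X / W, Y / W, Z / W

# A fractional (sub-pixel-optimized) disparity feeds straight through:
x, y, z = reproject(400.0, 300.0, 8.0)  # here z = f*Tx/d = 15.0 m
```

Because d enters only through W, a sub-pixel disparity produces a continuous depth Z rather than the discrete layers of the integer parallax space.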
  • the method for obtaining the depth map performs sub-pixel fitting directly on the depth map based on the integer parallax space; compared with a depth map algorithm based on stereo matching in the sub-pixel space, it greatly reduces the required memory space while also shortening the running time of the algorithm, and the discrete parallax space is fitted by the continuous function fitting method to obtain a continuous parallax space, thereby eliminating the layering effect of the depth map.
  • the accuracy of parallax-based three-dimensional measurement is thus improved, and the method is suitable for three-dimensional measurement scenes with different measurement accuracy requirements, especially those with very high measurement accuracy. Further, a cosine function is used for the matching cost fitting of integer pixel sampling, which can substantially eliminate pixel locking. That is, using the cosine function to interpolate the parallax space reduces the complexity of the parallax space fitting, eliminates the pixel locking effect of the parallax space fitting, and improves the interpolation precision of the parallax space.
  • after acquiring the depth map, this embodiment may further include:
  • the depth map is output through a display.
  • the optimized depth map is displayed. With the method of the above embodiment, the depth map obtained is very dense; that is, the depth information is continuous and highly accurate.
  • the display here may be a display device such as a display screen.
  • the acquisition system of the depth map provided by the embodiment of the present invention is described below.
  • the acquisition system of the depth map described below and the acquisition method of the depth map described above may refer to each other.
  • FIG. 6 is a structural block diagram of a system for acquiring a depth map according to an embodiment of the present invention.
  • the acquiring system may include:
  • a disparity calculation module 100 configured to acquire disparity of a left image and a right image by using a disparity acquisition method
  • the matching cost difference calculation module 200 is configured to calculate, according to the disparity, a matching cost difference between each pixel point in the depth map and two pixels adjacent to the same name point;
  • the sub-pixel disparity space obtaining module 300 is configured to determine, from the matching cost difference, a continuous matching cost fitting function based on integer pixel sampling, fit the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous parallax space, and calculate pixel coordinates of sub-pixel precision to obtain a sub-pixel parallax space;
  • the depth map obtaining module 400 is configured to calculate a depth value according to the sub-pixel parallax space to obtain a depth map.
  • the sub-pixel disparity space obtaining module 300 may include:
  • a continuous matching cost fitting function determining unit, configured to determine a fitting variable h and, according to the fitting variable h, determine a continuous matching cost fitting function f(h) based on integer pixel sampling;
  • LeftDif is the matching cost difference between the current pixel point and the left pixel point
  • RightDif is the matching cost difference between the current pixel point and the right pixel point
  • d is the parallax of the current pixel point after integer three-dimensional reconstruction
  • C d is the matching cost of the current pixel point at parallax d.
  • the acquiring system may further include:
  • the output module 500 is configured to output the depth map through a display.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
  • the software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Abstract

A depth image acquisition method and system, said method comprising: using a parallax acquisition method to obtain the parallax of a left image and a right image; according to the parallax, calculating the matching cost difference between each pixel point in a depth image and two adjacent pixel points homologous thereto; using a continuous matching cost fitting function on an integer parallax space to perform fitting a plurality of times to obtain a continuous parallax space, and calculating a pixel coordinate having sub-pixel level accuracy to obtain a sub-pixel parallax space; calculating a depth value according to the sub-pixel parallax space, and obtaining the depth image. In the present method, sub-pixel fitting is performed directly on a parallax space-based depth image, relative to a depth image algorithm based on stereo matching of the sub-pixel space, algorithm operating time is shortened while required storage space is greatly reduced; a continuous function fitting method is used to perform fitting and interpolation on a discrete parallax space to obtain the continuous parallax space, thus eliminating a layering effect of the depth image such that the accuracy of parallax-based three-dimensional measurements is increased.

Description

Depth map acquisition method and system

The present application claims priority to Chinese Patent Application No. 201710001725.6, entitled "A Method and System for Obtaining a Depth Map", filed on January 3, 2017, the entire contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the field of computer vision technology, and in particular, to a method and system for acquiring a depth map.

Background

Obtaining accurate depth information is an important part of environment perception for automated systems, and depth precision is one of the most important characteristics of sensors used for distance estimation. Depth maps have very common application in the positioning and dimensional measurement of automated systems. A stereo camera system uses pixel-level correspondence between two images taken from different angles to achieve image depth estimation. However, for long-distance systems, the accuracy of a depth map based on integer parallax is insufficient, because a depth map obtained by integer-pixel stereo matching is discretely distributed in the parallax space, with an obvious layering effect, and therefore cannot meet the measurement accuracy required by high-precision application scenarios. To this end, it is necessary to optimize the depth map, expressed in units of integer parallax, so that the depth map information becomes continuous, allowing accurate three-dimensional measurement information to be obtained in applications.

At present, optimization methods for depth maps mainly focus on depth map processing based on image information. These methods rely on the pixel information, edges, etc. of the depth map, and essentially process the two-dimensional depth image by filtering, classical interpolation, and the like. To some extent they can improve the imaging of the depth map, but they fall far short of the requirements of measurement applications with very high precision requirements. Therefore, how to obtain a depth map with accurate depth precision is a technical problem that persons skilled in the art need to solve.

Summary of the Invention

The object of the present invention is to provide a method and a system for acquiring a depth map, which can solve the pixel locking problem, realize accurate estimation of sub-pixels, and obtain high depth precision; at the same time, the algorithm requires little memory, is computationally simple, takes little time, and has good real-time performance.
To solve the above technical problem, the present invention provides a method for obtaining a depth map, including:

obtaining the disparity of a left image and a right image by using a disparity acquisition method;

calculating, according to the disparity, the matching cost difference between each pixel point in the depth map and the two pixel points adjacent to its same-name point;

determining, from the matching cost difference, a continuous matching cost fitting function based on integer pixel sampling, and fitting the integer parallax space multiple times with the continuous matching cost fitting function to obtain a continuous parallax space; calculating pixel coordinates of sub-pixel precision to obtain a sub-pixel parallax space;

calculating a depth value according to the sub-pixel parallax space to obtain a depth map.

Optionally, obtaining the disparity of the left image and the right image by using a disparity acquisition method includes:

obtaining the disparity of the left image and the right image by a fast matching method of same-name image points.
Optionally, calculating, according to the disparity, the matching cost difference between two adjacent pixels in the depth map includes:

using

Figure PCTCN2017119992-appb-000001

to calculate the matching cost difference between two adjacent pixels in the depth map;

where d is the disparity of the current pixel point after integer three-dimensional reconstruction, C d is the matching cost after aggregation in the stereo matching corresponding to disparity d of the current pixel point, C d-1 is the matching cost of the current pixel point at disparity d-1, C d+1 is the matching cost of the current pixel point at disparity d+1, LeftDif is the matching cost difference between the current pixel point and the pixel point to the left of its same-name point, and RightDif is the matching cost difference between the current pixel point and the pixel point to the right of its same-name point.
Optionally, determining, from the matching cost difference, the continuous matching cost fitting function based on integer pixel sampling includes:

using

Figure PCTCN2017119992-appb-000002

to determine the fitting variable h;

determining, according to the fitting variable h, the continuous matching cost fitting function f(h) based on integer pixel sampling, where

Figure PCTCN2017119992-appb-000003
Optionally, calculating pixel coordinates of sub-pixel precision to obtain the sub-pixel parallax space includes:

using

Figure PCTCN2017119992-appb-000004

to calculate the pixel coordinate d Refine of sub-pixel precision, and determining the sub-pixel disparity space D New = {d Refine }.
Optionally, calculating depth values according to the sub-pixel disparity space to obtain the depth map includes:
using
Figure PCTCN2017119992-appb-000005
to calculate the world coordinates of the depth map;
using
Figure PCTCN2017119992-appb-000006
to normalize the world coordinates into the point cloud data of the depth map;
where E = (x, y, z) are the world coordinates, E1 = (X, Y, Z) are the normalized world coordinates (the point cloud data of the depth map), e = (u, v) are the image coordinates, Q is the reprojection matrix, and Z is the depth value after sub-pixel optimization.
Optionally, after the depth map is obtained, the method further includes:
outputting the depth map through a display.
The present invention also provides a depth map acquisition system, including:
a disparity calculation module, configured to obtain the disparity between the left image and the right image by using a disparity acquisition method;
a matching cost difference calculation module, configured to calculate, according to the disparity, the matching cost difference between each pixel in the depth map and the two pixels adjacent to its same-name point;
a sub-pixel disparity space acquisition module, configured to determine a continuous matching-cost fitting function based on integer-pixel sampling from the matching cost differences, fit the integer disparity space multiple times with the continuous matching-cost fitting function to obtain a continuous disparity space, and calculate sub-pixel-precision pixel coordinates to obtain the sub-pixel disparity space;
a depth map acquisition module, configured to calculate depth values according to the sub-pixel disparity space to obtain the depth map.
Optionally, the sub-pixel disparity space acquisition module includes:
a continuous matching-cost fitting function determination unit, configured to use
Figure PCTCN2017119992-appb-000007
to determine the fitting variable h, and to determine, according to the fitting variable h, a continuous matching-cost fitting function f(h) based on integer-pixel sampling;
a sub-pixel disparity space acquisition unit, configured to use
Figure PCTCN2017119992-appb-000008
to calculate the sub-pixel-precision pixel coordinate d_Refine and determine the sub-pixel disparity space D_New = {d_Refine};
where
Figure PCTCN2017119992-appb-000009
LeftDif is the matching cost difference between the current pixel and the pixel to its left, RightDif is the matching cost difference between the current pixel and the pixel to its right, d is the disparity of the current pixel after integer three-dimensional reconstruction, and C_d is the aggregated matching cost in stereo matching corresponding to the disparity d of the current pixel.
Optionally, the system further includes:
an output module, configured to output the depth map through a display.
The depth map acquisition method provided by the present invention includes: obtaining the disparity between a left image and a right image by a disparity acquisition method; calculating, according to the disparity, the matching cost difference between each pixel in the depth map and the two pixels adjacent to its same-name point; determining a continuous matching-cost fitting function based on integer-pixel sampling from the matching cost differences, fitting the integer disparity space multiple times with the continuous matching-cost fitting function to obtain a continuous disparity space, and calculating sub-pixel-precision pixel coordinates to obtain the sub-pixel disparity space; and calculating depth values according to the sub-pixel disparity space to obtain the depth map.
It can be seen that the method performs sub-pixel fitting directly on the depth map based on the integer disparity space. Compared with depth map algorithms based on stereo matching in a sub-pixel space, it greatly reduces the required memory while also shortening the running time of the algorithm. The discrete disparity space is interpolated by continuous-function fitting to obtain a continuous disparity space, which eliminates the layering effect of the depth map and improves the accuracy of disparity-based three-dimensional measurement. The method is applicable to three-dimensional measurement scenes with different accuracy requirements, and in particular to scenes with very high accuracy requirements. The present invention also provides a depth map acquisition system having the above beneficial effects, which will not be described again here.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flowchart of a depth map acquisition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an integer disparity space in stereo matching according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a depth information space according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of matching cost curve fitting according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of integer sensor pixels according to an embodiment of the present invention;
FIG. 6 is a structural block diagram of a depth map acquisition system according to an embodiment of the present invention;
FIG. 7 is a structural block diagram of another depth map acquisition system according to an embodiment of the present invention.
DETAILED DESCRIPTION
The core of the present invention is to provide a depth map acquisition method and system that solve the pixel-locking problem, achieve accurate sub-pixel estimation, and obtain high depth precision; at the same time, the algorithm requires little memory, is simple to compute, takes little time, and has good real-time performance.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
At present, research on depth map acquisition usually focuses on simple window-based solutions built on stereo matching. The original taxonomy proposed by Scharstein and Szeliski divides stereo algorithms into two main groups: local methods and global methods. Local algorithms use a finite support region around each point to compute the disparity; these methods are based on a selected matching window and usually apply matching cost aggregation to achieve a smoothing effect. A large window reduces the number of unsuccessful matches and also reduces the mismatch rate in regions of depth discontinuity. The main advantage of local methods is their low computational complexity, which allows real-time implementation. Their main drawback is that only local information near each pixel is used at each step, so these methods cannot handle featureless regions or regions of repeated texture. Global methods have high computational complexity, so they are not suitable for automated systems with strict real-time requirements. A stereo matching algorithm based on semi-global matching (SGM) has also been proposed to replace existing solutions, which can effectively reduce the execution time. These stereo algorithms can achieve pixel-level accuracy, but they ignore sub-pixel results.
Recently, a simple parabolic interpolation method has been proposed for sub-pixel estimation in stereo matching algorithms. This method performs parabolic interpolation using the minimum matching cost and the matching costs of its two neighboring points; the position of the minimum of the parabola represents the sub-pixel point. This solution can achieve good precision. However, for simple window-based stereo algorithms it suffers from a serious problem known as "pixel-locking": the number of point cloud points at each disparity follows a Gaussian distribution symmetric about the integer disparity. Shimizu and Okutomi recently proposed solving the pixel-locking problem by modeling and correcting the error; they found that the error is symmetric and can be eliminated by transforming the image, which transposes the error function so that it matches the simple case. Although this solution is quite effective, it still has shortcomings, the main one being that stereo matching must be performed three times, which wastes a large amount of computing resources. Finally, modern stereo matching methods such as semi-global methods would require multiple nonlinear transforms to estimate a perfect sub-pixel interpolation model, which is almost impossible to realize.
To address the high complexity and insufficient optimization quality of existing depth map optimization techniques, the pixel-locking problem of depth map sub-pixel optimization methods based on stereo matching, and the complexity of stereo matching methods, this embodiment provides a depth map acquisition method. The method solves the pixel-locking problem, achieves accurate sub-pixel estimation, and obtains high depth precision; at the same time, the algorithm requires little memory, is simple to compute, takes little time, and has good real-time performance. Please refer to FIG. 1, which is a flowchart of a depth map acquisition method according to an embodiment of the present invention. The acquisition method may include:
S100: obtaining the disparity between the left image and the right image by using a disparity acquisition method;
Specifically, the main purpose of this step is to obtain disparity with pixel-level precision, that is, to obtain the disparity d between the left image and the right image by a disparity acquisition method. This embodiment does not limit the specific disparity calculation method. Since the disparity is the basis of the subsequent calculations, a highly accurate disparity calculation method, such as a fast matching method for same-name points, may be selected here to ensure the reliability and accuracy of the subsequent values. When selecting a disparity calculation method, the user should consider not only the accuracy of the disparity calculation but also the computing speed of the hardware and the real-time requirements of the system.
After comprehensively considering the above factors, the disparity between the left image and the right image is preferably obtained by a fast matching method for same-name points.
Specifically, the camera system is calibrated first.
The internal and external parameters of the cameras in the binocular camera system are calibrated to obtain the camera matrix K, the distortion matrix D, the rotation matrix R, the translation vector T, and the reprojection matrix Q of the binocular camera system.
The camera matrix is
Figure PCTCN2017119992-appb-000010
where f_x and f_y are the principal distance parameters and (c_x, c_y) are the principal point coordinates.
The distortion matrix is D = [k_1 k_2 p_1 p_2 k_3 [k_4 k_5 k_6]], where k_i (i = 1, 2, ..., 6) and p_j (j = 1, 2) are the distortion parameters.
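For illustration only, the following sketch applies the Brown-Conrady/rational distortion model that coefficients of this form (k_1...k_6, p_1, p_2) conventionally parameterize; whether the embodiment uses exactly this model is an assumption, since the patent does not spell it out:

```python
def distort(x, y, k=(0.0,) * 6, p=(0.0, 0.0)):
    """Apply radial (k1..k6, rational model) and tangential (p1, p2)
    distortion to a normalized image point (x, y). The coefficient
    meanings are assumptions mirroring the conventional reading of
    D = [k1 k2 p1 p2 k3 [k4 k5 k6]]."""
    r2 = x * x + y * y
    radial = (1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3) / \
             (1 + k[3] * r2 + k[4] * r2**2 + k[5] * r2**3)
    x_d = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    y_d = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    return x_d, y_d
```

With all coefficients zero the mapping is the identity, which serves as a quick sanity check on a calibration pipeline.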
The reprojection matrix is
Figure PCTCN2017119992-appb-000011
where T_x is the x-component of the translation vector T.
The rotation matrix is
Figure PCTCN2017119992-appb-000012
and the translation vector is T = [T_x T_y T_z].
Next, the pixel-level disparity d is calculated.
Disparity is the difference in direction produced when the left and right cameras observe the same target; in stereo vision it is described as the distance along the X axis between same-name points in the left and right images, expressed mathematically as:
d = x_l - x_r, where x_l is the X-axis coordinate of the same-name point in the left image and x_r is the X-axis coordinate of the same-name point in the right image.
S110: calculating, according to the disparity, the matching cost difference between each pixel in the depth map and the two pixels adjacent to its same-name point;
S120: determining a continuous matching-cost fitting function based on integer-pixel sampling from the matching cost differences, fitting the integer disparity space multiple times with the continuous matching-cost fitting function to obtain a continuous disparity space, and calculating sub-pixel-precision pixel coordinates to obtain the sub-pixel disparity space;
Since each pixel of the sensor has a certain area, the exact position on the sensor of an ideal point in the world coordinate system cannot be reflected in the image. In general, the pixel coordinates obtained from the image give the position of only one part of the pixel (such as its center point), and the image information represented by this coordinate cannot reflect the image information of the entire pixel. This fundamentally introduces a pixel coordinate positioning error into the image, that is, an image recognition error. As a result, the integer disparity obtained by stereo matching is discontinuous, as shown in the integer disparity space of FIG. 2. In FIG. 2, d denotes the disparity, x the image-plane x axis, and y the image-plane y axis. It can be seen that the disparity d is discontinuous; among the d disparity layers, from near to far, are layer 0, layer 1, layer 2, ..., layer d-1, layer d. The conversion between the integer disparity obtained by stereo matching and the depth is:
Figure PCTCN2017119992-appb-000013
where b is the baseline length, f is the principal distance, z is the depth value, and D = {d}.
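Assuming the formula image above encodes the standard rectified-stereo triangulation relation z = b*f/d (consistent with the variables listed, though the image itself is not reproduced here), the integer-disparity-to-depth conversion can be sketched as:

```python
def depth_from_disparity(d, b, f):
    """Standard triangulation z = b*f/d for a rectified stereo pair:
    b = baseline length, f = principal distance (focal length) in pixels.
    Assumed, not taken verbatim from the patent's formula image."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return b * f / d

# Integer disparities reach only a discrete set of depths (the layers of FIG. 3):
layers = [depth_from_disparity(d, b=0.1, f=800.0) for d in (10, 11, 12)]
```

Between consecutive integer disparities no depth value is representable, which is precisely the layering effect that the sub-pixel refinement of this embodiment removes.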
As described above, the integer disparity is discontinuous. Therefore, after three-dimensional reconstruction, the depth information converted from the integer disparity is also distributed in discrete layers; that is, the corresponding depth information space is also discontinuous, as shown in FIG. 3. In FIG. 3, the distance between layer d+1 and layer d is:
Figure PCTCN2017119992-appb-000014
where δ is the pixel size, that is, the physical area occupied by each pixel, b is the baseline length, and z is the depth value. It can be seen that the larger the depth value, the more obvious the layering effect: the smaller the disparity, the larger the error introduced; the smaller the depth value, the larger the disparity and the smaller the error.
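The growth of the layer spacing with depth can be illustrated directly from the triangulation relation, without assuming the exact form of the formula image: the depth gap between adjacent integer disparities d and d+1 is b*f/(d*(d+1)), which grows roughly as z^2/(b*f). A minimal sketch, with illustrative values:

```python
def layer_gap(d, b, f):
    """Depth gap between adjacent integer-disparity layers d and d+1,
    assuming z = b*f/d (standard rectified stereo, f in pixels)."""
    return b * f / d - b * f / (d + 1)  # equals b*f / (d*(d+1))

near_gap = layer_gap(100, b=0.1, f=800.0)  # large disparity: near, fine layers
far_gap = layer_gap(10, b=0.1, f=800.0)    # small disparity: far, coarse layers
```

Here far_gap is roughly a hundred times larger than near_gap, matching the observation above that the layering error grows with depth.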
Therefore, to reduce the measurement error, a continuous depth space must be restored, that is, a continuous disparity space must be restored to obtain accurate measurements; this is achieved by obtaining sub-pixel-level values. Depth-map-based sub-pixel optimization is generally implemented by fitting a matching cost curve based on integer sampling. Briefly, a curve is fitted using the point to be optimized and the matching costs of its two neighboring points. As shown in FIG. 4, curve fitting through points C_1, C_2, and C_3 yields the minimum point C_s. Point C_s corresponds to the matching cost function at the sub-pixel disparity, that is, the point obtained by optimizing C_2. Steps S110 and S120 form the sub-pixel optimization process by curve fitting; they mainly include calculating the matching cost differences, determining the fitting variable, determining the continuous matching-cost fitting function, and calculating the sub-pixel-precision pixel coordinates. This embodiment does not limit the specific implementation of these processes, as long as the matching cost is fitted with a continuous matching-cost fitting function and the sub-pixel disparity space is determined.
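The classical parabolic version of this three-point fit (the embodiment's preferred cosine fit is given by the formula images in the claims above) can be sketched as follows; the closed form below is the standard parabola-vertex result, not the patented cosine function:

```python
def parabola_subpixel(c_left, c_min, c_right):
    """Sub-pixel offset h of the minimum of a parabola fitted through the
    costs (C1, C2, C3) at disparities d-1, d, d+1; the refined disparity
    is d + h, with h confined to (-1, 1)."""
    denom = c_left - 2.0 * c_min + c_right
    if denom == 0:
        return 0.0  # flat cost profile: keep the integer disparity
    return 0.5 * (c_left - c_right) / denom

d = 42                                # integer disparity from stereo matching
h = parabola_subpixel(5.0, 2.0, 4.0)  # three aggregated matching costs
d_refine = d + h                      # sub-pixel disparity, here 42.1
```

It is this parabolic variant that exhibits the pixel-locking effect discussed in this embodiment, which is what motivates the cosine fitting function of the invention.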
Preferably, calculating, according to the disparity, the matching cost difference between adjacent pixels in the depth map (that is, the matching cost difference calculation process) may include:
using
Figure PCTCN2017119992-appb-000015
to calculate the matching cost difference between adjacent pixels in the depth map;
where d is the disparity of the current pixel after integer three-dimensional reconstruction; C_d is the aggregated matching cost in stereo matching corresponding to the disparity d of the current pixel; C_{d-1} is the matching cost of the current pixel at disparity d-1; C_{d+1} is the matching cost of the current pixel at disparity d+1; LeftDif is the matching cost difference between the current pixel and the pixel to the left of its same-name point; and RightDif is the matching cost difference between the current pixel and the pixel to the right of its same-name point.
In this embodiment, the discrete disparity space is interpolated by continuous-function fitting to obtain a continuous disparity space, which weakens the layering effect of the depth map and improves the accuracy of disparity-based three-dimensional measurement; the method is applicable to three-dimensional measurement scenes with different accuracy requirements, especially those with very high accuracy requirements. Further, to reduce the complexity of the calculation, a cosine function is preferably selected as the continuous matching-cost fitting function. That is, preferably, determining the continuous matching-cost fitting function based on integer-pixel sampling from the matching cost differences (that is, the process of determining the fitting variable and the continuous matching-cost fitting function) may include:
using
Figure PCTCN2017119992-appb-000016
to determine the fitting variable h;
determining, according to the fitting variable h, a continuous matching-cost fitting function f(h) based on integer-pixel sampling, where
Figure PCTCN2017119992-appb-000017
Specifically, this embodiment uses a cosine function to fit the matching cost sampled at integer pixels, which essentially eliminates the pixel-locking phenomenon. That is, interpolating the disparity space with a cosine function reduces the complexity of disparity space fitting, eliminates the pixel-locking effect of the fitting, and improves the interpolation accuracy of the disparity space.
Specifically, in the depth map corresponding to the integer disparity space the layering is obvious, which shows that integer pixel precision is insufficient to describe accurate image information; however, the image information acquired by the image sensor is all pixel-based. To obtain accurate depth information, the depth map based on the integer disparity space must be optimized at the sub-pixel level, that is, the integer disparity space must be optimized. An image pixel obtained by the sensor occupies a certain area; the pixel position used in integer-pixel stereo matching is only one point of the pixel, generally the coordinates of the pixel's geometric center. As shown in FIG. 5, a is the coordinate of pixel A in the sensor, but pixel A actually also contains points such as b, c, and d; therefore, point a alone cannot represent the image information of the entire pixel. To obtain a continuous disparity space, an adjustment must be made on the basis of the disparity obtained by integer-pixel matching, and this adjustment lies within half a pixel of the integer disparity. That is, preferably, calculating the sub-pixel-precision pixel coordinates to obtain the sub-pixel disparity space may include:
using
Figure PCTCN2017119992-appb-000018
to calculate the sub-pixel-precision pixel coordinate d_Refine and determine the sub-pixel disparity space D_New = {d_Refine}.
That is, sub-pixel optimization is the process of fitting the integer disparity space to obtain a floating-point disparity space: the continuous matching-cost fitting function is applied repeatedly, converting discrete data into continuous data through curve fitting.
S130: calculating depth values according to the sub-pixel disparity space to obtain the depth map.
Steps S110 and S120 constitute the sub-pixel optimization process. This step converts the sub-pixel disparity space into depth values, thereby obtaining the depth map after depth-map-based sub-pixel optimization. This embodiment does not limit the conversion process from the sub-pixel disparity space to the depth map. Specifically, calculating depth values according to the sub-pixel disparity space to obtain the depth map may include:
using
Figure PCTCN2017119992-appb-000019
to calculate the world coordinates of the depth map;
using
Figure PCTCN2017119992-appb-000020
to normalize the world coordinates into the point cloud data of the depth map;
where E = (x, y, z) are the world coordinates, E1 = (X, Y, Z) are the point cloud data of the depth map (the normalized world coordinates), e = (u, v) are the image coordinates, Q is the reprojection matrix, and Z is the depth value after sub-pixel optimization.
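Assuming Q follows the common stereo-rectification layout (the patent gives Q only as a formula image), the reprojection of an image point e = (u, v) with sub-pixel disparity d to world coordinates E, and the subsequent normalization to E1, can be sketched as follows; the numeric values of f, cx, cy, and Tx are illustrative, not taken from the patent:

```python
import numpy as np

# Assumed reprojection-matrix layout (OpenCV stereoRectify convention).
f, cx, cy, Tx = 800.0, 320.0, 240.0, -0.1  # Tx = x-component of T
Q = np.array([
    [1.0, 0.0, 0.0, -cx],
    [0.0, 1.0, 0.0, -cy],
    [0.0, 0.0, 0.0,  f],
    [0.0, 0.0, -1.0 / Tx, 0.0],
])

def reproject(u, v, d, Q):
    """Map image point (u, v) with sub-pixel disparity d to world
    coordinates E = (x, y, z) via homogeneous reprojection."""
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return np.array([X, Y, Z]) / W

E = reproject(400.0, 300.0, 16.0, Q)  # world coordinates
E1 = E / np.linalg.norm(E)            # one reading of the normalization step
```

With these values the recovered depth E[2] equals b*f/d = 0.1*800/16 = 5.0, consistent with the triangulation relation; the unit-length normalization producing E1 is an assumption about what the second formula image computes.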
Based on the above technical solution, the depth map acquisition method provided by this embodiment performs sub-pixel fitting directly on the depth map based on the integer disparity space. Compared with depth map algorithms based on stereo matching in a sub-pixel space, it greatly reduces the required memory while also shortening the running time of the algorithm. The discrete disparity space is interpolated by continuous-function fitting to obtain a continuous disparity space, which eliminates the layering effect of the depth map and improves the accuracy of disparity-based three-dimensional measurement; the method is applicable to three-dimensional measurement scenes with different accuracy requirements, especially those with very high accuracy requirements. Further, using a cosine function to fit the matching cost sampled at integer pixels essentially eliminates the pixel-locking phenomenon: interpolating the disparity space with a cosine function reduces the complexity of disparity space fitting, eliminates the pixel-locking effect of the fitting, and improves the interpolation accuracy of the disparity space.
Based on the above embodiment, after the depth map is obtained, this embodiment may further include:
outputting the depth map through a display.
Specifically, the optimized depth map is displayed. The depth map obtained by the method in the above embodiment is very dense; that is, the depth information is continuous and highly accurate.
The display here may be a display device such as a display screen.
The depth map acquisition system provided by the embodiments of the present invention is introduced below; the depth map acquisition system described below and the depth map acquisition method described above may be cross-referenced with each other.
Please refer to FIG. 6, which is a structural block diagram of a depth map acquisition system according to an embodiment of the present invention. The acquisition system may include:
a disparity calculation module 100, configured to acquire the disparity between a left image and a right image by using a disparity acquisition method;
a matching cost difference calculation module 200, configured to calculate, according to the disparity, the matching cost difference between each pixel in the depth map and the two pixels adjacent to its corresponding point;
a sub-pixel disparity space acquisition module 300, configured to determine, from the matching cost differences, a continuous matching cost fitting function based on integer-pixel sampling, fit the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous disparity space, and calculate pixel coordinates with sub-pixel accuracy to obtain a sub-pixel disparity space;
a depth map acquisition module 400, configured to calculate depth values according to the sub-pixel disparity space to obtain the depth map.
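The four modules above form a pipeline: integer disparity from the aggregated cost volume, neighboring cost differences, sub-pixel refinement, and depth values. The sketch below illustrates that flow on a synthetic cost volume; the equal-parabola fit and the Z = f·B/d depth step are common stand-ins and an assumption here, since the patent's own fitting function f(h) appears only as images in this record.

```python
import numpy as np

def winner_take_all(cost):
    """Integer disparity: index of the minimum aggregated cost, (H, W, D) -> (H, W)."""
    return np.argmin(cost, axis=2)

def subpixel_refine(cost, d):
    """Refine integer disparities with a parabola through C_{d-1}, C_d, C_{d+1}.
    This equal-parabola fit is an assumed stand-in for the patent's f(h)."""
    h, w = d.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d_c = np.clip(d, 1, cost.shape[2] - 2)            # keep d-1 and d+1 in range
    c0, c1, c2 = cost[ys, xs, d_c - 1], cost[ys, xs, d_c], cost[ys, xs, d_c + 1]
    denom = c0 - 2.0 * c1 + c2                        # curvature of the parabola
    safe = np.where(denom > 1e-9, denom, 1.0)
    offset = np.where(denom > 1e-9, (c0 - c2) / (2.0 * safe), 0.0)
    return d_c + offset                               # sub-pixel disparity near d

def disparity_to_depth(d, focal, baseline):
    """Depth from disparity for a rectified pair: Z = f * B / d (assumed model)."""
    return focal * baseline / np.maximum(d, 1e-6)
```

With a cost volume whose per-pixel minimum lies at a fractional disparity, the refined map is continuous rather than layered, which is the effect the embodiments describe.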
Based on the foregoing embodiments, the sub-pixel disparity space acquisition module 300 may include:
a continuous matching cost fitting function determination unit, configured to determine the fitting variable h using
Figure PCTCN2017119992-appb-000021
and to determine, according to the fitting variable h, a continuous matching cost fitting function f(h) based on integer-pixel sampling;
a sub-pixel disparity space acquisition unit, configured to calculate pixel coordinates d_Refine with sub-pixel accuracy using
Figure PCTCN2017119992-appb-000022
and to determine the sub-pixel disparity space D_New = {d_Refine};
where
Figure PCTCN2017119992-appb-000023
LeftDif is the matching cost difference between the current pixel and the pixel to its left, RightDif is the matching cost difference between the current pixel and the pixel to its right, d is the integer disparity of the current pixel after three-dimensional reconstruction, and C_d is the aggregated matching cost in stereo matching corresponding to the disparity d of the current pixel.
Based on any of the foregoing embodiments, and referring to FIG. 7, the acquisition system may further include:
an output module 500, configured to output the depth map through a display.
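A depth map holds floating-point distances, so the output module typically maps it to displayable 8-bit intensities first. A minimal sketch; the near-bright/far-dark convention and the clipping range are assumptions, not part of the embodiments:

```python
import numpy as np

def depth_to_display(depth, d_min=None, d_max=None):
    """Normalize a float depth map to uint8 grayscale for a display device.
    Nearer pixels are rendered brighter (a common, but arbitrary, convention)."""
    d_min = float(np.min(depth)) if d_min is None else d_min
    d_max = float(np.max(depth)) if d_max is None else d_max
    norm = (depth - d_min) / max(d_max - d_min, 1e-9)
    return (255.0 * (1.0 - np.clip(norm, 0.0, 1.0))).astype(np.uint8)
```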
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The depth map acquisition method and system provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from the principles of the present invention, and these improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

1. A depth map acquisition method, comprising:
    acquiring the disparity between a left image and a right image by using a disparity acquisition method;
    calculating, according to the disparity, the matching cost difference between each pixel in the depth map and the two pixels adjacent to its corresponding point;
    determining, from the matching cost differences, a continuous matching cost fitting function based on integer-pixel sampling, fitting the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous disparity space, and calculating pixel coordinates with sub-pixel accuracy to obtain a sub-pixel disparity space;
    calculating depth values according to the sub-pixel disparity space to obtain the depth map.
2. The depth map acquisition method according to claim 1, wherein acquiring the disparity between the left image and the right image by using a disparity acquisition method comprises:
    acquiring the disparity between the left image and the right image by using a fast matching method for corresponding image points.
3. The depth map acquisition method according to claim 2, wherein calculating the matching cost difference between two adjacent pixels in the depth map according to the disparity comprises:
    calculating the matching cost difference between two adjacent pixels in the depth map using
    Figure PCTCN2017119992-appb-100001
    where d is the integer disparity of the current pixel after three-dimensional reconstruction, C_d is the aggregated matching cost in stereo matching corresponding to the disparity d of the current pixel, C_{d-1} is the matching cost of the current pixel at disparity d-1, C_{d+1} is the matching cost of the current pixel at disparity d+1, LeftDif is the matching cost difference between the current pixel and the pixel to the left of its corresponding point, and RightDif is the matching cost difference between the current pixel and the pixel to the right of its corresponding point.
4. The depth map acquisition method according to claim 3, wherein determining, from the matching cost differences, the continuous matching cost fitting function based on integer-pixel sampling comprises:
    determining the fitting variable h using
    Figure PCTCN2017119992-appb-100002
    and determining, according to the fitting variable h, a continuous matching cost fitting function f(h) based on integer-pixel sampling, where
    Figure PCTCN2017119992-appb-100003
5. The depth map acquisition method according to claim 4, wherein calculating pixel coordinates with sub-pixel accuracy to obtain the sub-pixel disparity space comprises:
    calculating pixel coordinates d_Refine with sub-pixel accuracy using
    Figure PCTCN2017119992-appb-100004
    and determining the sub-pixel disparity space D_New = {d_Refine}.
6. The depth map acquisition method according to claim 5, wherein calculating depth values according to the sub-pixel disparity space to obtain the depth map comprises:
    calculating the world coordinates of the depth map using
    Figure PCTCN2017119992-appb-100005
    unitizing the world coordinates using
    Figure PCTCN2017119992-appb-100006
    to obtain the point cloud data of the depth map;
    where E = (x, y, z) are the world coordinates, E1 = (X, Y, Z) is the point cloud data of the depth map, e = (u, v) are the image coordinates, Q is the reprojection matrix, and Z is the depth value after sub-pixel optimization.
7. The depth map acquisition method according to any one of claims 1 to 6, further comprising, after the depth map is obtained:
    outputting the depth map through a display.
8. A depth map acquisition system, comprising:
    a disparity calculation module, configured to acquire the disparity between a left image and a right image by using a disparity acquisition method;
    a matching cost difference calculation module, configured to calculate, according to the disparity, the matching cost difference between each pixel in the depth map and the two pixels adjacent to its corresponding point;
    a sub-pixel disparity space acquisition module, configured to determine, from the matching cost differences, a continuous matching cost fitting function based on integer-pixel sampling, fit the integer disparity space multiple times with the continuous matching cost fitting function to obtain a continuous disparity space, and calculate pixel coordinates with sub-pixel accuracy to obtain a sub-pixel disparity space;
    a depth map acquisition module, configured to calculate depth values according to the sub-pixel disparity space to obtain the depth map.
9. The depth map acquisition system according to claim 8, wherein the sub-pixel disparity space acquisition module comprises:
    a continuous matching cost fitting function determination unit, configured to determine the fitting variable h using
    Figure PCTCN2017119992-appb-100007
    and to determine, according to the fitting variable h, a continuous matching cost fitting function f(h) based on integer-pixel sampling;
    a sub-pixel disparity space acquisition unit, configured to calculate pixel coordinates d_Refine with sub-pixel accuracy using
    Figure PCTCN2017119992-appb-100008
    and to determine the sub-pixel disparity space D_New = {d_Refine};
    where
    Figure PCTCN2017119992-appb-100009
    LeftDif is the matching cost difference between the current pixel and the pixel to its left, RightDif is the matching cost difference between the current pixel and the pixel to its right, d is the integer disparity of the current pixel after three-dimensional reconstruction, and C_d is the aggregated matching cost in stereo matching corresponding to the disparity d of the current pixel.
10. The depth map acquisition system according to claim 8 or 9, further comprising:
    an output module, configured to output the depth map through a display.
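The depth computation in claim 6 reprojects each image coordinate and refined disparity through the reprojection matrix Q. The exact formulas are only images in this record; the sketch below uses the common homogeneous form [X Y Z W]^T = Q·[u v d 1]^T with (x, y, z) = (X/W, Y/W, Z/W), and assembles Q from assumed intrinsics (focal length f, baseline B, principal point (cx, cy)) purely for illustration.

```python
import numpy as np

def reproject_to_3d(u, v, d, Q):
    """Homogeneous reprojection of one pixel: divide X, Y, Z by the scale W."""
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return np.array([X / W, Y / W, Z / W])

# Assumed reprojection matrix for a rectified pair; f, B, cx, cy are made up.
f, B, cx, cy = 700.0, 0.12, 320.0, 240.0
Q = np.array([[1.0, 0.0, 0.0,   -cx],
              [0.0, 1.0, 0.0,   -cy],
              [0.0, 0.0, 0.0,     f],
              [0.0, 0.0, 1.0 / B, 0.0]])
point = reproject_to_3d(400.0, 300.0, 5.3, Q)   # z component equals f*B/d
```

With a sub-pixel disparity d, the resulting z varies continuously, which is why the refined disparity space yields a dense, unlayered depth map.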
PCT/CN2017/119992 2017-01-03 2017-12-29 Depth image acquisition method and system WO2018127007A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710001725.6A CN106780590B (en) 2017-01-03 2017-01-03 Method and system for acquiring depth map
CN201710001725.6 2017-01-03

Publications (1)

Publication Number Publication Date
WO2018127007A1 true WO2018127007A1 (en) 2018-07-12

Family

ID=58952072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119992 WO2018127007A1 (en) 2017-01-03 2017-12-29 Depth image acquisition method and system

Country Status (2)

Country Link
CN (1) CN106780590B (en)
WO (1) WO2018127007A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853080A (en) * 2019-09-30 2020-02-28 广西慧云信息技术有限公司 Method for measuring size of field fruit
CN111145271A (en) * 2019-12-30 2020-05-12 广东博智林机器人有限公司 Method and device for determining accuracy of camera parameters, storage medium and terminal
CN111260713A (en) * 2020-02-13 2020-06-09 青岛联合创智科技有限公司 Depth calculation method based on image
CN111382654A (en) * 2018-12-29 2020-07-07 北京市商汤科技开发有限公司 Image processing method and apparatus, and storage medium
CN111415402A (en) * 2019-01-04 2020-07-14 中国科学院沈阳计算技术研究所有限公司 Stereo matching algorithm with internal and external similarity aggregation
CN112101209A (en) * 2020-09-15 2020-12-18 北京百度网讯科技有限公司 Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN112116641A (en) * 2020-09-11 2020-12-22 南京理工大学智能计算成像研究院有限公司 Speckle image matching method based on OpenCL
CN112712477A (en) * 2020-12-21 2021-04-27 东莞埃科思科技有限公司 Depth image evaluation method and device of structured light module
CN113034568A (en) * 2019-12-25 2021-06-25 杭州海康机器人技术有限公司 Machine vision depth estimation method, device and system
CN114723967A (en) * 2022-03-10 2022-07-08 北京的卢深视科技有限公司 Disparity map optimization method, face recognition method, device, equipment and storage medium
CN116188558A (en) * 2023-04-27 2023-05-30 华北理工大学 Stereo photogrammetry method based on binocular vision

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780590B (en) * 2017-01-03 2019-12-24 成都通甲优博科技有限责任公司 Method and system for acquiring depth map
CN109919991A (en) * 2017-12-12 2019-06-21 杭州海康威视数字技术股份有限公司 A kind of depth information determines method, apparatus, electronic equipment and storage medium
WO2019116708A1 (en) * 2017-12-12 2019-06-20 ソニー株式会社 Image processing device, image processing method and program, and image processing system
CN108876835A (en) * 2018-03-28 2018-11-23 北京旷视科技有限公司 Depth information detection method, device and system and storage medium
CN110533701A (en) * 2018-05-25 2019-12-03 杭州海康威视数字技术股份有限公司 A kind of image parallactic determines method, device and equipment
CN110533703B (en) * 2019-09-04 2022-05-03 深圳市道通智能航空技术股份有限公司 Binocular stereo parallax determination method and device and unmanned aerial vehicle
CN110853086A (en) * 2019-10-21 2020-02-28 北京清微智能科技有限公司 Depth image generation method and system based on speckle projection
CN112749594B (en) * 2019-10-31 2022-04-22 浙江商汤科技开发有限公司 Information completion method, lane line identification method, intelligent driving method and related products
CN111179327B (en) * 2019-12-30 2023-04-25 青岛联合创智科技有限公司 Depth map calculation method
CN111402313B (en) * 2020-03-13 2022-11-04 合肥的卢深视科技有限公司 Image depth recovery method and device
CN112184793B (en) * 2020-10-15 2021-10-26 北京的卢深视科技有限公司 Depth data processing method and device and readable storage medium
CN112348859A (en) * 2020-10-26 2021-02-09 浙江理工大学 Asymptotic global matching binocular parallax acquisition method and system
CN114897665A (en) * 2022-04-01 2022-08-12 中国科学院自动化研究所 Configurable real-time parallax point cloud computing device and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101915571A (en) * 2010-07-20 2010-12-15 桂林理工大学 Full-automatic acquisition method for image matching initial parallax based on phase correlation
US8472699B2 (en) * 2006-11-22 2013-06-25 Board Of Trustees Of The Leland Stanford Junior University Arrangement and method for three-dimensional depth image construction
CN104065947A (en) * 2014-06-18 2014-09-24 长春理工大学 Depth image obtaining method for integrated imaging system
CN105953777A (en) * 2016-04-27 2016-09-21 武汉讯图科技有限公司 Large-plotting-scale tilt image measuring method based on depth image
CN106780590A (en) * 2017-01-03 2017-05-31 成都通甲优博科技有限责任公司 The acquisition methods and system of a kind of depth map

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100505334B1 (en) * 2003-03-28 2005-08-04 (주)플렛디스 Real-time stereoscopic image conversion apparatus using motion parallax
JP5337218B2 (en) * 2011-09-22 2013-11-06 株式会社東芝 Stereo image conversion device, stereo image output device, and stereo image conversion method
CN103106688B (en) * 2013-02-20 2016-04-27 北京工业大学 Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN103702098B (en) * 2013-12-09 2015-12-30 上海交通大学 Three viewpoint three-dimensional video-frequency depth extraction methods of constraint are combined in a kind of time-space domain


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382654A (en) * 2018-12-29 2020-07-07 北京市商汤科技开发有限公司 Image processing method and apparatus, and storage medium
CN111382654B (en) * 2018-12-29 2024-04-12 北京市商汤科技开发有限公司 Image processing method and device and storage medium
CN111415402B (en) * 2019-01-04 2023-05-02 中国科学院沈阳计算技术研究所有限公司 Stereo matching algorithm for gathering internal and external similarity
CN111415402A (en) * 2019-01-04 2020-07-14 中国科学院沈阳计算技术研究所有限公司 Stereo matching algorithm with internal and external similarity aggregation
CN110853080A (en) * 2019-09-30 2020-02-28 广西慧云信息技术有限公司 Method for measuring size of field fruit
CN113034568B (en) * 2019-12-25 2024-03-29 杭州海康机器人股份有限公司 Machine vision depth estimation method, device and system
CN113034568A (en) * 2019-12-25 2021-06-25 杭州海康机器人技术有限公司 Machine vision depth estimation method, device and system
CN111145271B (en) * 2019-12-30 2023-04-28 广东博智林机器人有限公司 Method and device for determining accuracy of camera parameters, storage medium and terminal
CN111145271A (en) * 2019-12-30 2020-05-12 广东博智林机器人有限公司 Method and device for determining accuracy of camera parameters, storage medium and terminal
CN111260713A (en) * 2020-02-13 2020-06-09 青岛联合创智科技有限公司 Depth calculation method based on image
CN112116641A (en) * 2020-09-11 2020-12-22 南京理工大学智能计算成像研究院有限公司 Speckle image matching method based on OpenCL
CN112116641B (en) * 2020-09-11 2024-02-20 南京理工大学智能计算成像研究院有限公司 Speckle image matching method based on OpenCL
CN112101209B (en) * 2020-09-15 2024-04-09 阿波罗智联(北京)科技有限公司 Method and apparatus for determining world coordinate point cloud for roadside computing device
CN112101209A (en) * 2020-09-15 2020-12-18 北京百度网讯科技有限公司 Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN112712477A (en) * 2020-12-21 2021-04-27 东莞埃科思科技有限公司 Depth image evaluation method and device of structured light module
CN114723967B (en) * 2022-03-10 2023-01-31 合肥的卢深视科技有限公司 Disparity map optimization method, face recognition device, equipment and storage medium
CN114723967A (en) * 2022-03-10 2022-07-08 北京的卢深视科技有限公司 Disparity map optimization method, face recognition method, device, equipment and storage medium
CN116188558A (en) * 2023-04-27 2023-05-30 华北理工大学 Stereo photogrammetry method based on binocular vision

Also Published As

Publication number Publication date
CN106780590B (en) 2019-12-24
CN106780590A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
WO2018127007A1 (en) Depth image acquisition method and system
WO2021120846A1 (en) Three-dimensional reconstruction method and device, and computer readable medium
CN108596975B (en) Stereo matching algorithm for weak texture region
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
Zhao et al. Geometric-constrained multi-view image matching method based on semi-global optimization
CN110223222B (en) Image stitching method, image stitching device, and computer-readable storage medium
CN109961401A (en) A kind of method for correcting image and storage medium of binocular camera
WO2020119467A1 (en) High-precision dense depth image generation method and device
CN108961383A (en) three-dimensional rebuilding method and device
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN108876861B (en) Stereo matching method for extraterrestrial celestial body patrolling device
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
CN113129352A (en) Sparse light field reconstruction method and device
CN114519772A (en) Three-dimensional reconstruction method and system based on sparse point cloud and cost aggregation
CN115222889A (en) 3D reconstruction method and device based on multi-view image and related equipment
CN111739071A (en) Rapid iterative registration method, medium, terminal and device based on initial value
CN117132737B (en) Three-dimensional building model construction method, system and equipment
CN114120012A (en) Stereo matching method based on multi-feature fusion and tree structure cost aggregation
CN111369435B (en) Color image depth up-sampling method and system based on self-adaptive stable model
Duan et al. A combined image matching method for Chinese optical satellite imagery
Le Besnerais et al. Dense height map estimation from oblique aerial image sequences
CN115937002B (en) Method, apparatus, electronic device and storage medium for estimating video rotation
Chen et al. Densefusion: Large-scale online dense pointcloud and dsm mapping for uavs
Li et al. A Bayesian approach to uncertainty-based depth map super resolution
CN109325974B (en) Method for obtaining length of three-dimensional curve by adopting image processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17889854

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17889854

Country of ref document: EP

Kind code of ref document: A1