WO2020177061A1 - Binocular vision stereo matching method and system based on extremum verification - Google Patents

Binocular vision stereo matching method and system based on extremum verification

Info

Publication number
WO2020177061A1
WO2020177061A1 (PCT/CN2019/076889)
Authority
WO
WIPO (PCT)
Prior art keywords
cost
matching
pixel
disparity value
value
Prior art date
Application number
PCT/CN2019/076889
Other languages
English (en)
French (fr)
Inventor
赵勇
卢昌义
谢旺多
桑海伟
张丽
陈天健
Original Assignee
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大学深圳研究生院 filed Critical 北京大学深圳研究生院
Priority to PCT/CN2019/076889 priority Critical patent/WO2020177061A1/zh
Publication of WO2020177061A1 publication Critical patent/WO2020177061A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Definitions

  • The invention relates to the technical field of binocular vision, and in particular to a binocular vision stereo matching method and system based on extremum verification.
  • Binocular Stereo Vision is an important form of computer vision. Based on the parallax principle, it uses imaging devices to acquire two images of a measured object from different positions and obtains the object's three-dimensional geometric information by computing the positional deviation between corresponding image points. It thus processes the real world by simulating the human visual system. Research on stereo vision matching can greatly enhance a computer's or robot's ability to perceive its environment, allowing robots to adapt better, become more intelligent, and serve people better. After years of technological development, binocular stereo vision has been applied in fields such as robot vision, aerial surveying and mapping, reverse engineering, military applications, medical imaging, and industrial inspection.
  • Binocular stereo vision fuses the images obtained by two imaging devices and observes the differences between them, enabling a computer to obtain accurate depth information, establish correspondences between features, and map the image points of the same physical point in different images to one another.
  • This difference between corresponding points is usually called disparity.
  • The most important yet very difficult problem in binocular stereo vision is stereo matching, that is, finding matching corresponding points in images from different viewpoints.
  • The main technical problem solved by the present invention is how to find matching corresponding points in images from different viewpoints, so as to improve the accuracy and robustness of binocular vision stereo matching.
  • To this end, this application provides a binocular vision stereo matching method based on extremum verification.
  • An embodiment provides a binocular vision stereo matching method based on extremum verification, including the following steps:
  • A left-right consistency check is performed in turn on the disparity value corresponding to each matching cost in the first matching cost group. If the disparity value corresponding to one of the matching costs passes the left-right consistency check, that disparity value is determined to be the optimal disparity value of the first pixel.
  • Performing cost aggregation on the first pixel in one of the images according to a preset cost function and a plurality of preset disparity values includes: computing the value of the cost function at the first pixel for each disparity value, and aggregating those values to obtain the cost aggregation function corresponding to the first pixel.
  • The cost function includes, but is not limited to, a cost function based on color, gradient, rank, NCC, cintrium, or mutual information; the disparity value is any value in the range [0, d_max], where d_max denotes the maximum allowed disparity value.
  • Obtaining, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel includes: computing, under the cost aggregation function, the local minimum at the first pixel for each disparity value, and taking the local minimum as the matching cost corresponding to that disparity value at the first pixel.
  • Performing the left-right consistency check on the disparity value corresponding to each matching cost in the first matching cost group includes:
  • The disparity value corresponding to each matching cost in the first matching cost group is compared in turn with the disparity values corresponding to the matching costs in the second matching cost group; if the absolute value of a comparison result is smaller than the preset check threshold ε, it is determined that the disparity value corresponding to that matching cost in the first matching cost group passes the left-right consistency check.
  • Several matching costs are selected from the matching costs corresponding to the disparity values at the first pixel to form the first matching cost group, and several matching costs are selected from the matching costs corresponding to the disparity values at the second pixel to form the second matching cost group;
  • The preset rule includes: arranging the matching costs corresponding to the disparity values at the first pixel or the second pixel in ascending order, and taking from the arrangement those matching costs that are smaller than or equal to a noise threshold as the selected matching costs;
  • The noise threshold is the sum of the smallest matching cost in the arrangement and the preset noise parameter δ.
  • For the disparity value min_left_i corresponding to any matching cost min_l_cost_i in the first matching cost group and the disparity value min_right_k corresponding to any matching cost min_r_cost_k in the second matching cost group, the left-right consistency check includes: |min_right_k − min_left_i| < ε, where ε ∈ {2, …, 5}.
  • The binocular vision stereo matching method further includes: if none of the disparity values corresponding to the matching costs in the first matching cost group passes the left-right consistency check, taking the disparity value corresponding to the first matching cost in the first matching cost group as the optimal disparity value.
  • An embodiment provides an image stereo matching method, including:
  • performing stereo matching on each pixel in one of the images by the binocular vision stereo matching method described in the first aspect, to obtain the optimal disparity value of each pixel.
  • An embodiment provides a binocular vision stereo matching system based on extremum verification, including:
  • a memory for storing a program;
  • a processor configured to execute the program stored in the memory to implement the method described in the first aspect or the second aspect.
  • An embodiment provides a computer-readable storage medium including a program executable by a processor to implement the method described in the first or second aspect.
  • A binocular vision stereo matching method and system based on extremum verification includes: acquiring images at two viewpoints; performing cost aggregation on a first pixel in one of the images according to a preset cost function and a plurality of preset disparity values to obtain the cost aggregation function corresponding to the first pixel; obtaining, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel to form a first matching cost group; and performing a left-right consistency check in turn on the disparity value corresponding to each matching cost in the first matching cost group. If the disparity value corresponding to a matching cost passes the left-right consistency check, that disparity value is determined to be the optimal disparity value of the first pixel.
  • On the one hand, because the check starts from the smallest matching cost and is applied to each matching cost one by one, the verification result yields a more robust cost aggregation function and a more accurate optimal disparity value for each pixel.
  • On the other hand, because the left-right consistency check is introduced into image stereo matching, the problem of mismatches during stereo matching can be effectively resolved, which helps find matching corresponding points in different viewpoint images accurately and improves the precision of stereo matching.
  • Fig. 1 is a flowchart of a binocular vision stereo matching method in an embodiment;
  • Fig. 2 is a flowchart of the left-right consistency check;
  • Fig. 3 is a flowchart of an image stereo matching method in an embodiment;
  • Fig. 4 is a structural diagram of a binocular vision stereo matching system in an embodiment.
  • "Connection" and "coupling" mentioned in this application include both direct and indirect connection (coupling) unless otherwise specified.
  • A key problem is finding the matching points in the left and right images so as to obtain the horizontal position difference of the corresponding pixels in the two images, also called the disparity, from which the depth of the pixel can then be computed.
  • Pixels that are not at the same depth may have the same color, texture, gradient, and so on, which often causes mismatches during stereo matching and in turn leads to large errors in the disparity computation, greatly limiting the application of binocular vision in depth measurement.
  • The pixels in the region surrounding a pixel are generally used to estimate that pixel; since pixels in the surrounding region may not lie at the same depth as the central pixel, existing methods still lack robustness.
  • Fast stereo matching algorithms are mainly realized through steps such as cost volume computation, cost aggregation, WTA (winner-take-all), and post-processing.
  • The technical solution provided by this application can start from the smallest matching cost and perform the left-right consistency check on each matching cost one by one, so that the verification result yields a more robust cost aggregation function and a more accurate optimal disparity value for each pixel.
  • The technical solution provided by the present application can effectively resolve mismatches during stereo matching, helping find matching corresponding points in different viewpoint images accurately and improving the precision of stereo matching.
  • This application discloses a binocular vision stereo matching method based on extremum verification, which includes steps S110-S180, described separately below.
  • Step S110: Acquire images at two viewpoints.
  • The stereo matching object is imaged by a binocular camera; since the binocular camera constitutes two imaging viewpoints, one frame of image is obtained at each of the two viewpoints.
  • Step S120: Perform cost aggregation on the first pixel in one of the images according to the preset cost function and the plurality of preset disparity values, to obtain the cost aggregation function corresponding to the first pixel, where the first pixel is any pixel in the image.
  • The value of the preset cost function at the first pixel is computed for each disparity value, and the values for the disparity values at the first pixel are aggregated to obtain the cost aggregation function corresponding to the first pixel.
  • The cost function in this application includes, but is not limited to, a cost function based on color, gradient, rank, NCC, cintrium, or mutual information. For a color-based cost function, see e.g. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994, Vol. 16(9), pp. 920-932; for a gradient-based cost function, see e.g. Yang Xin, A gradient-operator-based image matching algorithm [J]. Acta Electronica Sinica, 1999(10): 30-33.
  • The disparity value in this embodiment is any value in the range [0, d_max], where d_max denotes the maximum allowed disparity value, chosen as set by the user.
  • cost_left(0, …, d_max) = cost_volume_left(y, x, 0, …, d_max)
  • cost_left() denotes the cost aggregation function corresponding to the first pixel (y, x) in the left image;
  • cost_volume_left() denotes the cost function used for the cost aggregation operation on the left image.
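The cost-volume and aggregation step just described can be sketched as follows. This is a minimal sketch, not the patent's implementation: it assumes a simple absolute-intensity-difference cost and mean (box-filter) aggregation, whereas the patent allows any of the listed cost functions; the function names mirror the notation above.

```python
import numpy as np

INVALID_COST = 1e3  # high finite cost for disparities that reach outside the image

def cost_volume_left(left, right, d_max):
    """Matching cost of left pixel (y, x) against right pixel (y, x - d)
    for every disparity d in 0..d_max (absolute intensity difference)."""
    h, w = left.shape
    volume = np.full((h, w, d_max + 1), INVALID_COST)
    for d in range(d_max + 1):
        volume[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])
    return volume

def aggregate(volume, radius=1):
    """Box-filter aggregation: average the cost over a square window
    around each pixel, separately for every disparity."""
    h, w, _ = volume.shape
    padded = np.pad(volume, ((radius, radius), (radius, radius), (0, 0)),
                    mode="edge")
    out = np.zeros_like(volume)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w, :]
    return out / (2 * radius + 1) ** 2
```

The aggregated volume indexed at (y, x) plays the role of cost_left(0, …, d_max) for that pixel.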
  • Step S130: Obtain, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel to form the first matching cost group.
  • The local minima of the cost aggregation function at the first pixel are computed over the disparity values, and each local minimum is taken as the matching cost corresponding to that disparity value at the first pixel.
  • The local minima of the cost aggregation function cost_left(0, …, d_max) at the first pixel (y, x) are denoted min_l_cost_i; the matching costs corresponding to the disparity values (0, …, d_max) are arranged in ascending order as min_l_cost_1, min_l_cost_2, …, min_l_cost_i, …, min_l_cost_h, where the subscript h is the total number of matching costs in the ascending sequence.
  • The first matching cost group can then be expressed as: (min_left_1, min_l_cost_1), (min_left_2, min_l_cost_2), …, (min_left_i, min_l_cost_i), …, (min_left_m, min_l_cost_m), where min_left_i is the disparity value corresponding to min_l_cost_i and m is the total number of matching costs in the group.
  • The preset rule includes: arrange the matching costs corresponding to the disparity values at the first pixel in ascending order, and take from the arrangement those matching costs that are smaller than or equal to a noise threshold as the selected matching costs; the noise threshold is the sum of the smallest matching cost in the arrangement and the preset noise parameter δ.
  • The value of the subscript m in the first matching cost group is determined by the smallest matching cost min_l_cost_1 and the noise parameter δ, such that min_l_cost_m is smaller than or equal to min_l_cost_1 + δ while min_l_cost_{m+1} is greater than min_l_cost_1 + δ.
  • The noise parameter δ measures the noise level of the image and can be set according to the left image acquired in step S110; it is not limited here.
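The selection rule above, keeping every local minimum whose cost lies within the noise parameter δ of the smallest one, might be sketched like this. The helper name and the input mapping are hypothetical, not from the patent; `costs_by_disparity` is assumed to map each candidate disparity to its local-minimum cost.

```python
def select_cost_group(costs_by_disparity, delta):
    """Return [(disparity, cost), ...] sorted ascending by cost, keeping
    only entries with cost <= smallest cost + delta (the noise threshold)."""
    ordered = sorted(costs_by_disparity.items(), key=lambda kv: kv[1])
    threshold = ordered[0][1] + delta  # noise threshold: min cost + delta
    return [(d, c) for d, c in ordered if c <= threshold]
```

For example, with local minima at disparities 3, 10, and 20 and δ = 0.2, only the two candidates within 0.2 of the smallest cost survive.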
  • Step S140: The disparity value corresponding to each matching cost in the first matching cost group is checked in turn for left-right consistency. If the disparity value corresponding to a matching cost passes the left-right consistency check, that disparity value is determined to be the optimal disparity value of the first pixel.
  • Step S140 may include steps S141-S147, described below.
  • Step S141: For the disparity value corresponding to each matching cost in the first matching cost group, obtain, according to that disparity value, the second pixel in the other image corresponding to the first pixel.
  • Step S142: Perform cost aggregation on the second pixel in the other image according to the preset cost function and the plurality of preset disparity values, to obtain the cost aggregation function corresponding to the second pixel.
  • cost_right(0, …, d_max) = cost_volume_right(y, x − min_left_i, 0, …, d_max)
  • cost_right() denotes the cost aggregation function corresponding to the second pixel (y, x − min_left_i) in the right image;
  • cost_volume_right() denotes the cost function used for the cost aggregation operation on the right image.
  • Step S143: Obtain, from the cost aggregation function corresponding to the second pixel, the matching cost corresponding to each disparity value at the second pixel to form the second matching cost group.
  • The local minima of the cost aggregation function at the second pixel are computed over the disparity values, and each local minimum is taken as the matching cost corresponding to that disparity value at the second pixel.
  • The local minima of the cost aggregation function cost_right(0, …, d_max) at the second pixel (y, x − min_left_i) are denoted min_r_cost_k; the matching costs corresponding to the disparity values (0, …, d_max) are arranged in ascending order as min_r_cost_1, min_r_cost_2, …, min_r_cost_k, …, min_r_cost_h, where the subscript h is the total number of matching costs in the ascending sequence.
  • The second matching cost group can then be expressed as: (min_right_1, min_r_cost_1), (min_right_2, min_r_cost_2), …, (min_right_k, min_r_cost_k), …, (min_right_m, min_r_cost_m), where min_right_k is the disparity value corresponding to min_r_cost_k and k = 1, 2, …, m.
  • A method similar to that for the first matching cost group is used to obtain the second matching cost group, specifically: several matching costs are selected, according to a preset rule, from the matching costs corresponding to the disparity values at the second pixel to form the second matching cost group; the preset rule here includes: arranging the matching costs corresponding to the disparity values at the second pixel in ascending order, and taking from the arrangement those matching costs that are smaller than or equal to a noise threshold as the selected matching costs, the noise threshold being the sum of the smallest matching cost in the arrangement and the preset noise parameter δ.
  • The value of the subscript m in the first matching cost group is determined by the smallest matching cost min_l_cost_1 and the noise parameter δ, such that min_l_cost_m is smaller than or equal to min_l_cost_1 + δ while min_l_cost_{m+1} is greater than min_l_cost_1 + δ.
  • Step S144: The disparity value corresponding to each matching cost in the first matching cost group is compared in turn with the disparity values corresponding to the matching costs in the second matching cost group.
  • The left-right consistency check includes: |min_right_k − min_left_i| < ε,
  • where ε is the preset check threshold, and ε ∈ {2, …, 5}.
  • Comparing "in turn" means: the disparity value min_left_1 corresponding to the smallest matching cost min_l_cost_1 is first compared with min_right_1, min_right_2, …, min_right_k, …, min_right_m; then the disparity value min_left_2 corresponding to the second-smallest matching cost min_l_cost_2 is compared with min_right_1, min_right_2, …, min_right_k, …, min_right_m; and so on, until finally the disparity value min_left_m corresponding to the largest matching cost min_l_cost_m is compared with min_right_1, min_right_2, …, min_right_k, …, min_right_m.
  • Left-right consistency (LRC) checking is a common post-processing technique in stereo matching, often used for occlusion detection.
  • The specific procedure for LRC is: from the left and right input images, obtain the left and right disparity maps respectively; for a point in the left image, obtain its disparity value, find the corresponding point in the right image and obtain that point's disparity; if the absolute difference of the two disparities is greater than a threshold, mark the identified point in the left image as an occluded point.
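The per-candidate check can be written in a few lines. This is an illustrative sketch only: `right_candidates` stands for the disparities min_right_1, …, min_right_m of the second matching cost group, and `eps` for the check threshold ε.

```python
def passes_lrc(d_left, right_candidates, eps):
    """A left disparity passes the left-right consistency check if any
    candidate disparity at the corresponding right-image pixel differs
    from it by less than eps (the rule |min_right_k - min_left_i| < eps)."""
    return any(abs(d_right - d_left) < eps for d_right in right_candidates)
```

One matching candidate on the right side is enough for the left disparity to pass.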
  • Step S145: Judge whether the absolute value of the comparison result is smaller than the preset check threshold ε; if so, go to step S146, otherwise go to step S147. In a specific embodiment, if |min_right_k − min_left_i| < ε, go to step S146.
  • Step S146: Determine that the disparity value min_left_i corresponding to the matching cost min_l_cost_i in the first matching cost group passes the left-right consistency check.
  • Step S147: Determine that the disparity value min_left_i corresponding to the matching cost min_l_cost_i in the first matching cost group does not pass the left-right consistency check.
  • Step S150: Judge whether the disparity value corresponding to the matching cost currently undergoing the left-right consistency check in the first matching cost group passes the check; if it passes, go to step S160, otherwise go to step S170.
  • Step S150 can be entered after step S146 is completed, and step S170 after step S147 is completed.
  • When the disparity value min_left_i corresponding to a matching cost in the first matching cost group is compared with min_right_1, min_right_2, …, min_right_k, …, min_right_m, if the absolute value of min_right_k − min_left_i is smaller than the check threshold ε, min_left_i is considered to have passed the left-right consistency check, and step S160 is entered.
  • Step S160: Determine that the disparity value corresponding to the matching cost currently undergoing the left-right consistency check in the first matching cost group is the optimal disparity value of the first pixel.
  • If min_left_i passes the left-right consistency check, min_left_i is considered the optimal disparity value of the first pixel (y, x).
  • After step S160, the optimal disparity value of the first pixel is considered found and steps S170-S180 need not be executed; that is, this run of the method ends.
  • Step S170: Judge whether every matching cost in the first matching cost group has been checked for left-right consistency. If so, go to step S180; otherwise, return to step S140 to perform the left-right consistency check on the next matching cost.
  • Step S180: If none of the disparity values corresponding to the matching costs in the first matching cost group passes the left-right consistency check, take the disparity value corresponding to the first matching cost in the first matching cost group as the optimal disparity value.
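Steps S140-S180, as read here, amount to the loop below. This is a sketch under assumptions, not the patent's code: `left_group` is the first matching cost group sorted by cost ascending, and `right_group_for` is a hypothetical callable returning the candidate disparities of the second matching cost group for a given left disparity.

```python
def best_disparity(left_group, right_group_for, eps):
    """Walk the first matching cost group from the smallest cost upward
    and return the first disparity that passes the left-right
    consistency check (steps S140-S160); if none passes, fall back to
    the disparity of the smallest matching cost (step S180)."""
    for d_left, _cost in left_group:
        if any(abs(d_r - d_left) < eps for d_r in right_group_for(d_left)):
            return d_left
    return left_group[0][0]
```

Because the group is sorted by cost, the first candidate to pass the check is the one with the smallest matching cost among all consistent candidates.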
  • Referring to Fig. 4, a binocular vision stereo matching system 30 based on extremum verification is also correspondingly disclosed.
  • The system includes a memory 301 and a processor 302, where the memory 301 is used to store programs and the processor 302 executes the programs stored in the memory 301 to implement the method described in steps S110-S180.
  • This embodiment also provides an image stereo matching method; please refer to Fig. 3.
  • The image stereo matching method includes steps S210-S220, described below.
  • Step S210: Acquire images of at least two viewpoints.
  • Multiple cameras can be used to capture images of the stereo matching object, so that images from multiple viewpoints are obtained.
  • Step S220: Perform stereo matching on each pixel in one of the images by the binocular vision stereo matching method described in the first embodiment, to obtain the optimal disparity value of each pixel.
  • The binocular vision stereo matching method of the first embodiment obtains the optimal disparity value of one pixel in an image, and the matching corresponding point in another image can be found from that optimal disparity value. The optimal disparity values of all pixels in the image can then be computed in the same way, so that one-to-one stereo matching of pixels between two or more images is realized, achieving the effect of image stereo matching.
  • The program can be stored in a computer-readable storage medium.
  • The storage medium may include read-only memory, random access memory, a magnetic disk, an optical disc, a hard disk, etc.
  • A computer executes the program to realize the above functions.
  • Alternatively, the program is stored in the memory of the device, and when the program in the memory is executed by the processor, all or part of the above functions can be realized.
  • The program can also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash drive, or a removable hard disk, and saved by downloading or copying.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A binocular vision stereo matching method and system based on extremum verification. The method includes: acquiring images at two viewpoints; performing cost aggregation on a first pixel in one of the images according to a preset cost function and a plurality of preset disparity values to obtain the cost aggregation function corresponding to the first pixel; obtaining, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel to form a first matching cost group; and performing a left-right consistency check in turn on the disparity value corresponding to each matching cost in the first matching cost group. If the disparity value corresponding to a matching cost passes the left-right consistency check, that disparity value is determined to be the optimal disparity value of the first pixel. Because the check starts from the smallest matching cost and is applied to each matching cost one by one, the verification result yields a more robust cost aggregation function and a more accurate optimal disparity value for each pixel.

Description

Binocular vision stereo matching method and system based on extremum verification

Technical Field

The present invention relates to the technical field of binocular vision, and in particular to a binocular vision stereo matching method and system based on extremum verification.
Background Art

As is well known, light from a scene is captured by the eye's precise imaging system, transmitted through the nervous system to a brain containing billions of neurons, and processed in parallel, yielding real-time, high-definition, accurate depth perception. This greatly improves humans' ability to adapt to their environment and enables many complex actions: walking, sports, driving vehicles, conducting scientific experiments, and so on. Computer vision is precisely the discipline that uses computers to simulate the human visual system, with the goal of recovering a 3D image from two acquired planar images. At present, computer stereo vision still falls far short of human binocular vision, so its study remains a very active field.
Binocular Stereo Vision is an important form of computer vision. Based on the parallax principle, it uses imaging devices to acquire two images of a measured object from different positions and obtains the object's three-dimensional geometric information by computing the positional deviation between corresponding image points. It thus processes the real world by simulating the human visual system. Research on stereo vision matching can greatly enhance a computer's or robot's ability to perceive its environment, allowing robots to adapt better, become more intelligent, and serve people better. After years of technological development, binocular stereo vision has been applied in fields such as robot vision, aerial surveying and mapping, reverse engineering, military applications, medical imaging, and industrial inspection.
At present, binocular stereo vision fuses the images obtained by two imaging devices and observes the differences between them, enabling a computer to obtain accurate depth information, establish correspondences between features, and map the image points of the same physical point in different images to one another; this difference is usually called disparity. However, the most important yet very difficult problem in binocular stereo vision is stereo matching, that is, finding matching corresponding points in images from different viewpoints.
To find matching corresponding points in images from different viewpoints, one may use a method that minimizes the global matching error under an edge-smoothness constraint, but its computational load is enormous and real-time computation on existing processors is practically impossible. Another approach estimates a pixel from the pixels in its surrounding region, for example using a rectangular window, an adaptively grown region, or a minimum spanning tree; within the region, however, the weighting of pixel matching costs can still only be computed from features such as the aforementioned color (brightness), texture, and gradient, which bear no direct relation to disparity. In practice these methods therefore still lack robustness.
Summary of the Invention

The main technical problem solved by the present invention is how to find matching corresponding points in images from different viewpoints, so as to improve the accuracy and robustness of binocular vision stereo matching. To solve this technical problem, the present application provides a binocular vision stereo matching method based on extremum verification.
According to a first aspect, an embodiment provides a binocular vision stereo matching method based on extremum verification, comprising the following steps:
acquiring images at two viewpoints;
performing cost aggregation on a first pixel in one of the images according to a preset cost function and a plurality of preset disparity values to obtain the cost aggregation function corresponding to the first pixel, the first pixel being any pixel in that image;
obtaining, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel to form a first matching cost group;
performing a left-right consistency check in turn on the disparity value corresponding to each matching cost in the first matching cost group, and, if the disparity value corresponding to a matching cost passes the left-right consistency check, determining that disparity value to be the optimal disparity value of the first pixel.
Performing cost aggregation on a first pixel in one of the images according to a preset cost function and a plurality of preset disparity values includes: computing the value of the cost function at the first pixel for each disparity value, and aggregating those values to obtain the cost aggregation function corresponding to the first pixel.
The cost function includes, but is not limited to, a cost function based on color, gradient, rank, NCC, cintrium, or mutual information; the disparity value is any value in the range [0, d_max], where d_max denotes the maximum allowed disparity value.
Obtaining, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel includes: computing, under the cost aggregation function, the local minimum at the first pixel for each disparity value, and taking the local minimum as the matching cost corresponding to that disparity value at the first pixel.
Performing the left-right consistency check on the disparity value corresponding to each matching cost in the first matching cost group includes:
for the disparity value corresponding to each matching cost in the first matching cost group, obtaining, according to that disparity value, a second pixel in the other image corresponding to the first pixel;
performing cost aggregation on the second pixel in the other image according to the preset cost function and the plurality of preset disparity values to obtain the cost aggregation function corresponding to the second pixel;
obtaining, from the cost aggregation function corresponding to the second pixel, the matching cost corresponding to each disparity value at the second pixel to form a second matching cost group;
comparing the disparity value corresponding to each matching cost in the first matching cost group in turn with the disparity values corresponding to the matching costs in the second matching cost group, and, if the absolute value of a comparison result is smaller than a preset check threshold ε, determining that the disparity value corresponding to that matching cost in the first matching cost group passes the left-right consistency check.
Several matching costs are selected, according to a preset rule, from the matching costs corresponding to the disparity values at the first pixel to form the first matching cost group, and several matching costs are selected from the matching costs corresponding to the disparity values at the second pixel to form the second matching cost group;
the preset rule includes: arranging the matching costs corresponding to the disparity values at the first pixel or at the second pixel in ascending order, and taking from the arrangement those matching costs that are smaller than or equal to a noise threshold as the selected matching costs, the noise threshold being the sum of the smallest matching cost in the arrangement and a preset noise parameter δ.
For the disparity value min_left_i corresponding to any matching cost min_l_cost_i in the first matching cost group and the disparity value min_right_k corresponding to any matching cost min_r_cost_k in the second matching cost group, the left-right consistency check includes: |min_right_k − min_left_i| < ε, where ε is a preset check threshold and ε ∈ {2, …, 5}.
The binocular vision stereo matching method further includes: if none of the disparity values corresponding to the matching costs in the first matching cost group passes the left-right consistency check, taking the disparity value corresponding to the first matching cost in the first matching cost group as the optimal disparity value.
According to a second aspect, an embodiment provides an image stereo matching method, including:
acquiring images of at least two viewpoints;
performing stereo matching on each pixel in one of the images by the binocular vision stereo matching method of the first aspect, to obtain the optimal disparity value of each pixel.
According to a third aspect, an embodiment provides a binocular vision stereo matching system based on extremum verification, including:
a memory for storing a program;
a processor for executing the program stored in the memory to implement the method of the first aspect or the second aspect.
According to a fourth aspect, an embodiment provides a computer-readable storage medium including a program executable by a processor to implement the method of the first aspect or the second aspect.
The beneficial effects of the present application are as follows:
According to the above embodiments, a binocular vision stereo matching method and system based on extremum verification includes: acquiring images at two viewpoints; performing cost aggregation on a first pixel in one of the images according to a preset cost function and a plurality of preset disparity values to obtain the cost aggregation function corresponding to the first pixel; obtaining, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel to form a first matching cost group; and performing a left-right consistency check in turn on the disparity value corresponding to each matching cost in the first matching cost group. If the disparity value corresponding to a matching cost passes the check, that disparity value is determined to be the optimal disparity value of the first pixel. On the one hand, because the check starts from the smallest matching cost and is applied to each matching cost one by one, the verification result yields a more robust cost aggregation function and a more accurate optimal disparity value for each pixel; on the other hand, because the left-right consistency check is introduced into image stereo matching, the problem of mismatches during stereo matching can be effectively resolved, which helps find matching corresponding points in different viewpoint images accurately and improves the precision of stereo matching.
Brief Description of the Drawings

Fig. 1 is a flowchart of a binocular vision stereo matching method in an embodiment;
Fig. 2 is a flowchart of the left-right consistency check;
Fig. 3 is a flowchart of an image stereo matching method in an embodiment;
Fig. 4 is a structural diagram of a binocular vision stereo matching system in an embodiment.
Detailed Description of the Embodiments

The present invention is described in further detail below through specific embodiments with reference to the drawings, in which similar elements in different embodiments use associated similar reference numerals. In the following embodiments, many details are described so that this application can be better understood. However, those skilled in the art will readily recognize that some of these features may be omitted in different cases or replaced by other elements, materials, or methods. In some cases, operations related to this application are not shown or described in the specification, in order to prevent the core of this application from being obscured by excessive description; for those skilled in the art, a detailed description of these operations is not necessary, as they can be fully understood from the description in the specification together with general technical knowledge in the field.
In addition, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be reordered or adjusted in a manner apparent to those skilled in the art. Therefore, the various orders in the specification and drawings serve only to describe a particular embodiment clearly and do not imply a required order, unless it is otherwise stated that a certain order must be followed.
The ordinal numbers used herein for components, such as "first" and "second", serve only to distinguish the described objects and carry no order or technical meaning. "Connection" and "coupling" in this application include both direct and indirect connection (coupling) unless otherwise specified.
In binocular stereo matching, a key problem is finding the matching points in the left and right images so as to obtain the horizontal position difference of the corresponding pixels in the two images, also called the disparity, from which the depth of the pixel can then be computed.
Pixels that are not at the same depth may well have the same color, texture, gradient, and so on, which often causes mismatches during stereo matching and in turn leads to large errors in the disparity computation, greatly limiting the application of binocular vision in depth measurement. To overcome this, existing stereo matching methods for binocular images generally estimate a pixel from the pixels in its surrounding region; since pixels in the surrounding region may not lie at the same depth as the central pixel, existing methods still lack robustness. Typically, fast stereo matching algorithms are realized through steps such as cost volume computation, cost aggregation, WTA (winner-take-all), and post-processing. Although WTA is a fast and efficient way to obtain the disparity, it is susceptible to noise and other interference, so that at the corresponding point of the true disparity the matching cost may fail to reach the minimum and severe disparity estimation errors occur; this is especially pronounced in outdoor scene videos. To overcome this defect and improve the robustness of the matching cost, this application builds on existing methods and, combining new technical means, uses an extremum verification technique (not WTA) to perform a left-right consistency check on multiple local minima (i.e. matching costs) of any cost function, and decides from the check result which specific disparity value serves as the best disparity estimate for binocular stereo matching. The technical solution provided by this application can start from the smallest matching cost and check each matching cost one by one, so that the verification result yields a more robust cost aggregation function and a more accurate optimal disparity value for each pixel. The technical solution provided by this application can effectively resolve mismatches during stereo matching, helping find matching corresponding points in different viewpoint images accurately and improving the precision of stereo matching.
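For contrast, the WTA step mentioned above, which this application replaces with extremum verification, is simply a global argmin over the cost volume. A minimal sketch, assuming a NumPy cost volume indexed as [y, x, disparity]:

```python
import numpy as np

def wta_disparity(cost_volume):
    """Winner-take-all: pick, for every pixel, the disparity whose
    aggregated cost is globally smallest. Noise can make the wrong
    disparity attain this minimum, which is the failure mode the
    extremum-verification method is designed to avoid."""
    return np.argmin(cost_volume, axis=2)
```

Where WTA commits to the single global minimum, the method of this application keeps all local minima within the noise threshold and verifies them one by one.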
Embodiment 1:

Referring to Fig. 1, this application discloses a binocular vision stereo matching method based on extremum verification, which includes steps S110-S180, described separately below.
Step S110: acquire images at two viewpoints. In one embodiment, the stereo matching object is imaged by a binocular camera; since the binocular camera constitutes two imaging viewpoints, one frame of image is obtained at each of the two viewpoints.
Step S120: perform cost aggregation on a first pixel in one of the images according to a preset cost function and a plurality of preset disparity values to obtain the cost aggregation function corresponding to the first pixel, the first pixel here being any pixel in that image.
In one embodiment, the value of the preset cost function at the first pixel is computed for each disparity value, and the values for the disparity values at the first pixel are aggregated to obtain the cost aggregation function corresponding to the first pixel.
It should be noted that the cost function in this application includes, but is not limited to, a cost function based on color, gradient, rank, NCC, cintrium, or mutual information. For a color-based cost function, see the technical literature "Color-based cost function references [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994, Vol. 16(9), pp. 920-932"; for a gradient-based cost function, see "Yang Xin, A gradient-operator-based image matching algorithm [J]. Acta Electronica Sinica, 1999(10): 30-33"; for a rank-based cost function, see "A constraint to improve the reliability of stereo matching using the rank transform: Acoustics, Speech, and Signal Processing, 1999 IEEE International Conference, 1999 [C]"; for a mutual-information cost function, see "Stereo Processing by Semi-Global Matching and Mutual Information, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, Ju"; for an NCC cost function, see the blog post "Image processing: recognition by NCC template matching" at https://blog.csdn.net/jia20003/article/details/48852549, which treats NCC as a statistical algorithm for computing the correlation between two sets of sample data. Since all of the listed cost functions belong to the prior art, they are not described here one by one. Moreover, those skilled in the art should understand that other kinds of cost functions may appear as technology develops; such future cost functions can still be applied to the technical solution disclosed in this embodiment and do not limit it.
It should be noted that the disparity value in this embodiment is any value in the range [0, d_max], where d_max denotes the maximum allowed disparity value, chosen as set by the user.
For example, using an existing cost function and selecting the first pixel (y, x) in one of the images (e.g. the left image) for cost aggregation, this is expressed as
cost_left(0, …, d_max) = cost_volume_left(y, x, 0, …, d_max)
where cost_left() denotes the cost aggregation function corresponding to the first pixel (y, x) in the left image, and cost_volume_left() denotes the cost function used for the cost aggregation operation on the left image.
Step S130: obtain, from the cost aggregation function, the matching cost corresponding to each disparity value at the first pixel to form the first matching cost group. In one embodiment, the local minima of the cost aggregation function at the first pixel are computed over the disparity values, and each local minimum is taken as the matching cost corresponding to that disparity value at the first pixel.
For example, the local minima of the cost aggregation function cost_left(0, …, d_max) at the first pixel (y, x) are computed and denoted min_l_cost_i; the matching costs corresponding to the disparity values (0, …, d_max) are then arranged in ascending order as min_l_cost_1, min_l_cost_2, …, min_l_cost_i, …, min_l_cost_h, where the subscript h is the total number of matching costs in the ascending sequence.
The first matching cost group can then be expressed as:
(min_left_1, min_l_cost_1), (min_left_2, min_l_cost_2), …, (min_left_i, min_l_cost_i), …, (min_left_m, min_l_cost_m)
where min_left_i is the disparity value corresponding to the matching cost min_l_cost_i, and the subscript m is the total number of matching costs in the first matching cost group, so that i = 1, 2, …, m.
It should be noted that in this embodiment several matching costs are selected, according to a preset rule, from the matching costs corresponding to the disparity values at the first pixel to form the first matching cost group. The preset rule includes: arranging the matching costs corresponding to the disparity values at the first pixel in ascending order, and taking from the arrangement those matching costs that are smaller than or equal to a noise threshold as the selected matching costs, the noise threshold being the sum of the smallest matching cost in the arrangement and a preset noise parameter δ. That is, the value of the subscript m in the first matching cost group is determined by the smallest matching cost min_l_cost_1 and the noise parameter δ, such that min_l_cost_m is smaller than or equal to min_l_cost_1 + δ while min_l_cost_{m+1} is greater than min_l_cost_1 + δ. It should be noted that the noise parameter δ measures the noise level of the image and can be set according to the left image acquired in step S110; it is not limited here.
Step S140: perform the left-right consistency check in turn on the disparity value corresponding to each matching cost in the first matching cost group; if the disparity value corresponding to a matching cost passes the left-right consistency check, determine that disparity value to be the optimal disparity value of the first pixel. In one embodiment, see Fig. 2, step S140 may include steps S141-S147, described below.
Step S141: for the disparity value corresponding to each matching cost in the first matching cost group, obtain, according to that disparity value, the second pixel in the other image corresponding to the first pixel.
For example, when the disparity value min_left_i corresponding to the matching cost min_l_cost_i is to be checked for left-right consistency, the second pixel (y, x − min_left_i) in the right image corresponding to the first pixel (y, x) is obtained from min_left_i.
Step S142: perform cost aggregation on the second pixel in the other image according to the preset cost function and the plurality of preset disparity values to obtain the cost aggregation function corresponding to the second pixel.
For example, using an existing cost function and selecting the second pixel (y, x − min_left_i) in the other image (e.g. the right image) for cost aggregation, this is expressed as
cost_right(0, …, d_max) = cost_volume_right(y, x − min_left_i, 0, …, d_max)
where cost_right() denotes the cost aggregation function corresponding to the second pixel (y, x − min_left_i) in the right image, and cost_volume_right() denotes the cost function used for the cost aggregation operation on the right image.
Step S143: obtain, from the cost-aggregation function corresponding to the second pixel, the matching cost corresponding to each disparity value at the second pixel, yielding a second matching-cost group. In one embodiment, the local minima of the cost-aggregation function at the second pixel are computed over the disparity values, and each local minimum is taken as the matching cost corresponding to that disparity value at the second pixel.
For example, the local minima of the cost-aggregation function cost_right(0, …, d_max) at the second pixel (y, x - min_left_i) are computed and denoted min_r_cost_k; the matching costs corresponding to the disparity values (0, …, d_max) are then arranged in ascending order as min_r_cost_1, min_r_cost_2, …, min_r_cost_k, …, min_r_cost_h, where the subscript h is the total number of matching costs in the ascending sequence.
The second matching-cost group can then be expressed as:

(min_right_1, min_r_cost_1), (min_right_2, min_r_cost_2), …, (min_right_k, min_r_cost_k), …, (min_right_m, min_r_cost_m)

where min_right_k is the disparity value corresponding to the matching cost min_r_cost_k, and the subscript m is the total number of matching costs in the second matching-cost group, so that k = 1, 2, …, m.
It should be noted that the second matching-cost group is obtained in this embodiment by a method similar to that used for the first matching-cost group, specifically: several matching costs are selected according to a preset rule from the matching costs corresponding to the disparity values at the second pixel, to obtain the second matching-cost group. The preset rule is: arrange the matching costs corresponding to the disparity values at the second pixel in ascending order, and from the arranged result take those matching costs that are less than or equal to a noise threshold as the matching costs to be selected, the noise threshold being the sum of the smallest matching cost in the arranged result and the preset noise parameter δ. In other words, the value of the subscript m in the second matching-cost group is determined by the smallest matching cost min_r_cost_1 and the noise parameter δ, such that every min_r_cost_k up to and including min_r_cost_m is less than or equal to min_r_cost_1 + δ, while min_r_cost_(m+1) is greater than min_r_cost_1 + δ.
Step S144: compare the disparity value corresponding to each matching cost in the first matching-cost group, in turn, with the disparity values corresponding to the matching costs in the second matching-cost group.
In a specific embodiment, for the disparity value min_left_i corresponding to any matching cost min_l_cost_i in the first matching-cost group and the disparity value min_right_k corresponding to any matching cost min_r_cost_k in the second matching-cost group, the left-right consistency check comprises:

|min_right_k - min_left_i| < ε         (1)

where ε is a preset check threshold and ε ∈ {2, …, 5}.
It should be noted that comparing "in turn" means the following: the disparity value min_left_1 corresponding to the smallest matching cost min_l_cost_1 is first compared with each of min_right_1, min_right_2, …, min_right_k, …, min_right_m; next, the disparity value min_left_2 corresponding to the second-smallest matching cost min_l_cost_2 is compared with each of min_right_1, min_right_2, …, min_right_k, …, min_right_m; and so on, until finally the disparity value min_left_m corresponding to the largest matching cost min_l_cost_m is compared with each of min_right_1, min_right_2, …, min_right_k, …, min_right_m.
It should be noted that the left-right consistency (LRC) check is a common post-processing step in stereo matching and is often used for occlusion detection. For example, some points appear in only one image and cannot be seen in the other; without special handling of the occluded regions, it is impossible to obtain the correct disparity of an occluded point from the limited information provided by a single image. The usual way of applying LRC is as follows: from the left and right input images, obtain the left and right disparity maps respectively; for a point in the left image, obtain its disparity value, then find the corresponding point in the right image and obtain that point's disparity; if the absolute difference between the two disparities is greater than a threshold, mark the point in the left image as an occluded point.
Step S145: judge whether the absolute value of the comparison result is smaller than the preset check threshold ε; if so, proceed to step S146; otherwise, proceed to step S147. In a specific embodiment, if |min_right_k - min_left_i| < ε, proceed to step S146.
Those skilled in the art should understand that as long as any one of the disparity values min_right_1, min_right_2, …, min_right_k, …, min_right_m makes min_left_i satisfy formula (1), the disparity value min_left_i can be considered to have passed the left-right consistency check.
Step S146: determine that the disparity value min_left_i corresponding to the matching cost min_l_cost_i in the first matching-cost group has passed the left-right consistency check.

Step S147: determine that the disparity value min_left_i corresponding to the matching cost min_l_cost_i in the first matching-cost group has not passed the left-right consistency check.
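Sub-steps S144-S147 amount to checking whether any right-group candidate disparity agrees with the left candidate to within ε. A minimal sketch, assuming the right-group candidates are (disparity, cost) pairs:

```python
def passes_lrc(d_left, right_candidates, epsilon):
    """Left-right consistency check for one left-image candidate:
    d_left passes if any right-image candidate disparity differs
    from it by less than epsilon (formula (1))."""
    return any(abs(d_right - d_left) < epsilon
               for d_right, _cost in right_candidates)
```

A single agreeing right candidate suffices, which is exactly the "any one of min_right_1, …, min_right_m" condition stated above.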
Step S150: judge whether the disparity value corresponding to the matching cost in the first matching-cost group currently undergoing the left-right consistency check has passed the check; if it has, proceed to step S160; otherwise, proceed to step S170.
It should be noted that the process of performing the left-right consistency check on a single matching cost may specifically follow steps S141-S147 above; after step S146 is completed the flow may proceed to step S150, and after step S147 is completed it may proceed to step S170.
For example, in formula (1), when the disparity value min_left_i corresponding to a matching cost in the first matching-cost group is compared with each of min_right_1, min_right_2, …, min_right_k, …, min_right_m, if the absolute value of min_right_k - min_left_i is smaller than the check threshold ε, min_left_i is considered to have passed the left-right consistency check, and the flow proceeds to step S160.
Step S160: determine that the disparity value corresponding to the matching cost in the first matching-cost group currently undergoing the left-right consistency check is the best disparity value of the first pixel.

For example, if min_left_i passes the left-right consistency check, min_left_i is taken as the best disparity value of the first pixel (y, x).

It should be noted that once step S160 is completed the best disparity value of the first pixel is considered found, so the subsequent steps S170-S180 need not be executed and this round of the method flow ends.
Step S170: judge whether every matching cost in the first matching-cost group has undergone the left-right consistency check; if so, proceed to step S180; otherwise, return to step S140 to perform the left-right consistency check on the next matching cost.

Step S180: if none of the disparity values corresponding to the matching costs in the first matching-cost group has passed the left-right consistency check, take the disparity value corresponding to the first matching cost in the first matching-cost group as the best disparity value.

For example, when none of min_left_1, min_left_2, …, min_left_i, …, min_left_m in the first matching-cost group passes the left-right consistency check, the disparity value min_left_1 corresponding to the first matching cost min_l_cost_1 is taken as the best disparity value of the first pixel (y, x).
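Taken together, steps S140-S180 try each left-group candidate in ascending cost order, stop at the first one that passes the consistency check, and fall back to the smallest-cost candidate when none passes. A sketch, assuming both candidate groups are (disparity, cost) pairs sorted by ascending cost:

```python
def best_disparity(left_candidates, right_candidates, epsilon):
    """Return the best disparity for one pixel (steps S140-S180).

    left_candidates / right_candidates: (disparity, cost) pairs in
    ascending cost order.  The first left candidate whose disparity
    agrees with some right candidate to within epsilon wins; if none
    passes, fall back to the smallest-cost left candidate (step S180).
    """
    for d_left, _cost in left_candidates:
        # Left-right consistency check against every right candidate.
        if any(abs(d_right - d_left) < epsilon
               for d_right, _c in right_candidates):
            return d_left  # passed the check: step S160
    return left_candidates[0][0]  # fallback: step S180
```

Because the candidates are visited in ascending cost order, a passing candidate is always the cheapest one that is also consistent between the two views.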
In this embodiment, a binocular-vision stereo-matching system 30 based on extreme-value verification is correspondingly disclosed. Referring to FIG. 4, the system includes a memory 301 and a processor 302, where the memory 301 is configured to store a program and the processor 302 is configured to implement the method described in steps S110-S150 by executing the program stored in the memory 301.
Embodiment 2:

On the basis of the binocular-vision stereo-matching method of Embodiment 1, this embodiment further provides an image stereo-matching method. Referring to FIG. 3, the image stereo-matching method includes steps S210-S220, which are described below.
Step S210: acquire images from at least two viewpoints. In a specific embodiment, the object to be stereo-matched can be imaged by multiple cameras, thereby obtaining images from multiple viewpoints.
Step S220: perform stereo matching on each pixel in one of the images through the binocular-vision stereo-matching method described in Embodiment 1, obtaining the best disparity value of each pixel.
Those skilled in the art can understand that the binocular-vision stereo-matching method of Embodiment 1 obtains the best disparity value of a single pixel in an image, from which the matching corresponding point in another image can be found. The method can therefore be applied repeatedly to compute the best disparity values of all pixels in the image, achieving one-to-one stereo matching of pixels between two or more images and thus the effect of image stereo matching.
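Extending the per-pixel method to a full disparity map is a plain loop over pixels. A sketch, where `best_disparity_at(y, x)` stands in for the per-pixel routine of Embodiment 1 and is passed in as a callable (an assumption of this sketch, not an interface defined by the patent):

```python
import numpy as np

def disparity_map(height, width, best_disparity_at):
    """Apply a per-pixel best-disparity routine to every pixel of an
    image, producing an integer disparity map of shape (H, W)."""
    disp = np.zeros((height, width), dtype=np.int32)
    for y in range(height):
        for x in range(width):
            disp[y, x] = best_disparity_at(y, x)
    return disp
```

In practice the per-pixel routine would encapsulate steps S120-S180 of Embodiment 1 for the pixel (y, x).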
Those skilled in the art can understand that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by a computer program. When all or part of the functions are implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random-access memory, a magnetic disk, an optical disc, a hard disk, and the like; the program is executed by a computer to realize the above functions. For example, the program may be stored in the memory of a device, and the above functions are realized in whole or in part when the processor executes the program in the memory. Alternatively, when all or part of the functions are implemented by a computer program, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash drive, or a removable hard disk, and be downloaded or copied into the memory of a local device, or used to update the version of the local device's system; when the processor executes the program in the memory, all or part of the functions of the above embodiments are realized.
The above describes the invention with specific examples, which are intended only to aid understanding of the invention, not to limit it. For those skilled in the art to which the invention pertains, several simple deductions, variations, or substitutions may also be made according to the ideas of the invention.

Claims (11)

  1. A binocular-vision stereo-matching method based on extreme-value verification, characterized by comprising the following steps:
    acquiring images from two viewpoints;
    performing cost aggregation on a first pixel in one of the images according to a preset cost function and preset disparity values, to obtain a cost-aggregation function corresponding to the first pixel, the first pixel being any pixel in that image;
    obtaining, from the cost-aggregation function, a matching cost corresponding to each disparity value at the first pixel, to obtain a first matching-cost group;
    performing a left-right consistency check, in turn, on the disparity value corresponding to each matching cost in the first matching-cost group, and if the disparity value corresponding to one of said matching costs passes the left-right consistency check, determining that disparity value to be a best disparity value of the first pixel.
  2. The binocular-vision stereo-matching method according to claim 1, wherein said performing cost aggregation on a first pixel in one of the images according to a preset cost function and preset disparity values comprises:
    computing a function value of the cost function at the first pixel for each disparity value, and aggregating the function values of the disparity values at the first pixel to obtain the cost-aggregation function corresponding to the first pixel.
  3. The binocular-vision stereo-matching method according to claim 2, wherein the cost function includes, but is not limited to, a cost function corresponding to color, gradient, rank, NCC, cintrium, or mutual information; and the disparity value is any value in the range [0, d_max], where d_max denotes a maximum allowed value of the disparity value.
  4. The binocular-vision stereo-matching method according to claim 1, wherein said obtaining, from the cost-aggregation function, a matching cost corresponding to each disparity value at the first pixel comprises:
    computing a local minimum of the cost-aggregation function at the first pixel for each disparity value, and taking the local minimum as the matching cost corresponding to that disparity value at the first pixel.
  5. The binocular-vision stereo-matching method according to claim 4, wherein said performing a left-right consistency check on the disparity value corresponding to each matching cost in the first matching-cost group comprises:
    for the disparity value corresponding to each matching cost in the first matching-cost group, obtaining from that disparity value a second pixel in the other image corresponding to the first pixel;
    performing cost aggregation on the second pixel in the other image according to the preset cost function and the preset disparity values, to obtain a cost-aggregation function corresponding to the second pixel;
    obtaining, from the cost-aggregation function corresponding to the second pixel, a matching cost corresponding to each disparity value at the second pixel, to obtain a second matching-cost group;
    comparing the disparity value corresponding to each matching cost in the first matching-cost group, in turn, with the disparity values corresponding to the matching costs in the second matching-cost group, and when the absolute value of a comparison result is judged to be smaller than a preset check threshold ε, determining that the disparity value corresponding to that matching cost in the first matching-cost group passes the left-right consistency check.
  6. The binocular-vision stereo-matching method according to claim 5, wherein
    several matching costs are selected according to a preset rule from the matching costs corresponding to the disparity values at the first pixel to obtain the first matching-cost group, and several matching costs are selected from the matching costs corresponding to the disparity values at the second pixel to obtain the second matching-cost group;
    the preset rule comprises: arranging the matching costs corresponding to the disparity values at the first pixel or at the second pixel in ascending order, and determining from the arranged result those matching costs that are less than or equal to a noise threshold as the matching costs to be selected, the noise threshold being the sum of the smallest matching cost in the arranged result and a preset noise parameter δ.
  7. The binocular-vision stereo-matching method according to claim 6, wherein, for the disparity value min_left_i corresponding to any matching cost min_l_cost_i in the first matching-cost group and the disparity value min_right_k corresponding to any matching cost min_r_cost_k in the second matching-cost group, performing the left-right consistency check comprises:
    |min_right_k - min_left_i| < ε
    where ε is a preset check threshold and ε ∈ {2, …, 5}.
  8. The binocular-vision stereo-matching method according to claim 6, further comprising: if none of the disparity values corresponding to the matching costs in the first matching-cost group passes the left-right consistency check, taking the disparity value corresponding to the first matching cost in the first matching-cost group as the best disparity value.
  9. An image stereo-matching method, characterized by comprising:
    acquiring images from at least two viewpoints;
    performing stereo matching on each pixel in one of the images through the binocular-vision stereo-matching method according to any one of claims 1-8, to obtain the best disparity value of each pixel.
  10. A binocular-vision stereo-matching system based on extreme-value verification, characterized by comprising:
    a memory for storing a program; and
    a processor for implementing the method according to any one of claims 1-9 by executing the program stored in the memory.
  11. A computer-readable storage medium, characterized by comprising a program, the program being executable by a processor to implement the method according to any one of claims 1-9.
PCT/CN2019/076889 2019-03-04 2019-03-04 Binocular-vision stereo matching method and system based on extreme-value verification WO2020177061A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/076889 WO2020177061A1 (zh) 2019-03-04 2019-03-04 Binocular-vision stereo matching method and system based on extreme-value verification

Publications (1)

Publication Number Publication Date
WO2020177061A1 true WO2020177061A1 (zh) 2020-09-10

Family

ID=72337386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076889 WO2020177061A1 (zh) 2019-03-04 2019-03-04 一种基于极值校验的双目视觉立体匹配方法及系统

Country Status (1)

Country Link
WO (1) WO2020177061A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866497A (zh) * 2010-06-18 2010-10-20 Beijing Jiaotong University — Intelligent three-dimensional face reconstruction method and system based on binocular stereo vision
CN103325120A (zh) * 2013-06-30 2013-09-25 Southwest Jiaotong University — Fast adaptive support-weight binocular-vision stereo matching method
CN103440653A (zh) * 2013-08-27 2013-12-11 Beihang University — Binocular-vision stereo matching method
CN104680510A (zh) * 2013-12-18 2015-06-03 Peking University Shenzhen Graduate School — Radar disparity-map optimization method, and stereo-matching disparity-map optimization method and system
CN108629812A (zh) * 2018-04-11 2018-10-09 Shenzhen Douying Technology Co., Ltd. — Distance-measuring method based on a binocular camera
CN108682026A (zh) * 2018-03-22 2018-10-19 Liaoning University of Technology — Binocular-vision stereo matching method based on fusion of multiple matching primitives

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19918174

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19918174

Country of ref document: EP

Kind code of ref document: A1