WO2018098891A1 - Stereo matching method and system (Procédé et système de stéréocorrespondance) - Google Patents

Stereo matching method and system (Procédé et système de stéréocorrespondance)

Info

Publication number
WO2018098891A1
Authority
WO
WIPO (PCT)
Prior art keywords
parallax
point
points
support
pixel
Prior art date
Application number
PCT/CN2017/070639
Other languages
English (en)
Chinese (zh)
Inventor
唐荣富
余勤力
周剑
龙学军
徐一丹
Original Assignee
成都通甲优博科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都通甲优博科技有限责任公司 filed Critical 成都通甲优博科技有限责任公司
Publication of WO2018098891A1 publication Critical patent/WO2018098891A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Definitions

  • the present invention relates to the field of computer vision technology, and in particular, to a stereo matching method and system.
  • Dense stereo matching refers to establishing the correspondence between the same spatial physical point in different images, based on computing selected features and the correspondence between those features.
  • Stereo matching is an important hotspot and difficulty in computer vision research. It is one of the key technologies in many applications such as robotics, medicine and artificial intelligence. In recent years, with the development of mobile platforms, the accuracy and real-time requirements of the stereo matching method have been continuously improved.
  • the stereo matching algorithm is decomposed into four steps: matching cost calculation, matching cost aggregation, disparity calculation, and parallax refinement. According to different constraints, the stereo matching algorithm can be divided into local matching algorithm and global matching algorithm.
  • the global stereo matching algorithm mainly estimates the parallax through the global optimization theory method, establishes the global energy function, and then obtains the optimal disparity value by minimizing the global energy function.
  • the global matching algorithm obtains higher accuracy than the local algorithm, but it is computationally intensive and time consuming, and is not suitable for real-time applications.
  • the main algorithms are graph cuts, belief propagation, semi-global matching, dynamic programming, and so on.
  • a research direction of stereo matching is a machine learning method using a convolutional neural network, which can obtain an accuracy equal to or higher than that of the classical global algorithm.
  • the local matching algorithm mainly uses a local optimization method to estimate the disparity value. Like the global stereo matching algorithm, the disparity is estimated by energy minimization; the difference is that the energy function of the local matching algorithm has only a data term and no smoothing term. Because the local matching algorithm is sensitive to changes in illumination intensity and contrast, the probability of false matching is high when the image has repeated texture features, weak texture, or severe occlusion.
  • Commonly used local matching algorithms mainly include the SAD (sum of absolute differences) algorithm, the CT (census transform) algorithm, the ASW (adaptive support weight) algorithm, the ELAS (efficient large-scale stereo matching) algorithm, the IDR (iterative dense refinement) algorithm, and the like.
  • the SAD algorithm sums the absolute values of the corresponding pixel differences over a local window.
  • the CT algorithm first transforms the window region, and then calculates the matching cost according to the Hamming distance metric.
  • the SAD and CT methods are simple and fast to implement, but with low accuracy.
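As an illustration of the SAD and CT matching costs just described, the following sketch (not taken from the patent; window radius and data types are arbitrary choices) computes a SAD cost and a classic census/Hamming cost for one pixel and one candidate disparity:

```python
# Illustrative sketch (not from the patent): SAD and classic census/Hamming
# matching costs for one pixel and one candidate disparity, NumPy only.
import numpy as np

def sad_cost(left, right, y, x, d, radius=3):
    """Sum of absolute differences over a (2*radius+1)^2 window."""
    wl = left[y - radius:y + radius + 1, x - radius:x + radius + 1]
    wr = right[y - radius:y + radius + 1, x - d - radius:x - d + radius + 1]
    return int(np.abs(wl.astype(np.int32) - wr.astype(np.int32)).sum())

def census_bits(img, y, x, radius=3):
    """Classic census transform: bit = 1 where a neighbour is darker than the centre."""
    window = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return (window < img[y, x]).astype(np.uint8).ravel()

def census_cost(left, right, y, x, d, radius=3):
    """Matching cost = Hamming distance between the two census bit strings."""
    return int(np.count_nonzero(census_bits(left, y, x, radius) !=
                                census_bits(right, y, x - d, radius)))
```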
  • the ASW algorithm changes the single weight of the SAD algorithm, introduces the idea of adaptive weight, and obtains a high matching precision, but the amount of computation caused by the adaptive weight is very large.
  • the IDR algorithm uses a two-pass approach to simplify the implementation of ASW and adds an iterative improvement method to achieve higher accuracy.
  • the IDR algorithm structure is very conducive to parallel processing. It can get high computational efficiency after optimization under CUDA architecture.
  • IDR algorithm also has two main disadvantages: it runs slowly under non-CUDA architecture, and the memory overhead is very large.
  • the ELAS algorithm adopts a completely different idea from the above methods: it first uses the Sobel operator to obtain strong-texture support points of the image; it then performs Delaunay triangulation on the support points to obtain a parallax plane estimate for each pixel; finally, a Sobel-based measure is used to compute the matching cost, and a weighting method yields the optimal disparity estimate.
  • ELAS algorithm is one of the fastest running stereo matching algorithms, and its accuracy is also very high, suitable for real-time applications.
  • the main shortcomings of the ELAS algorithm are: the algorithm structure is not conducive to parallelization, and there are some cases where pixel points cannot be calculated.
  • the object of the present invention is to provide a stereo matching method and system, which realizes fast matching to obtain a high-precision parallax map, and is particularly suitable for a mobile platform or an application field with high real-time requirements.
  • the present invention provides a stereo matching method, including:
  • extracting feature points of the left and right images, and performing feature point matching on the feature points to determine support points;
  • constructing a Delaunay triangle according to the support points, wherein the Delaunay triangle includes the prior probability of the parallax of all pixel points in the triangle and the minimum support distance between each pixel point and the support points;
  • calculating the parallax conditional probability and the parallax confidence level of the pixel points in the left image using a parallax calculation method;
  • calculating the optimal a posteriori parallax by the Bayesian principle according to the Delaunay triangle, the parallax conditional probability and the parallax confidence level.
  • extracting the feature points of the left and right images and performing feature point matching on the feature points to determine the support points includes:
  • extracting the feature points from the left and right images using the FAST operator, and performing the feature description using the BRIEF descriptor;
  • matching the feature points by the epipolar constraint and the Hamming distance between the feature descriptions, and taking the successfully matched feature points as the support points.
  • constructing a Delaunay triangle according to the support point includes:
  • performing Delaunay triangulation on the set of support points of the left image by the divide-and-conquer method, calculating the minimum support distance m between each pixel point and the support points of the left image, and calculating the prior probability of the parallax with a Gaussian model, P(d_n | S) ∝ exp(−(d_n − d_p)² / (2σ_{p,m}²)),
  • where D_{p,i} is the Euclidean distance between the pixel point and the corresponding support point (vertex) of its Delaunay triangle, the minimum support distance m being the smallest of these distances; σ_{p,m} is the variance, which depends on m and a constant parameter set from empirical values; d_p = a·u_p + b·v_p + c is the disparity estimate determined by the support points, the parameters a, b and c being obtained by fitting the plane through the three support points; d_n is the support point disparity value; and u_p and v_p are the abscissa and ordinate of the pixel point, respectively.
  • the parallax calculation method is used to calculate the parallax condition probability and the parallax confidence level of the pixel points in the left image, including:
  • the parallax condition probability and the parallax confidence level of the pixel points in the left picture are calculated by using an improved census transform stereo matching algorithm.
  • calculating the optimal a posteriori parallax by the Bayesian principle according to the Delaunay triangle, the parallax conditional probability and the parallax confidence level includes: calculating the optimal a posteriori parallax as the maximum a posteriori estimate d* = argmax_{d_n} P(d_n | S) · P(O | S, d_n),
  • where f(d_n) is the Hamming distance of the improved census transform stereo matching algorithm; f_m(d) is the confidence level function of the conditional parallax; S is the set of support points; O is the matching cost based on a local stereo matching operator; and β is the weight parameter.
  • the stereo matching method further includes:
  • the left and right consistency detection method is used to detect the mismatched points in the disparity map.
  • the method further includes:
  • the disparity values of the error matching points are replaced by the disparity values on the left and right sides of the mismatching point.
  • the invention also provides a stereo matching system comprising:
  • a support point determining module configured to extract feature points of the left and right images, and perform feature point matching on the feature points to determine a support point
  • a Delaunay triangle building module configured to construct a Delaunay triangle according to the support point; wherein the Delaunay triangle includes a prior probability of parallax of all pixel points in the triangle and a minimum support distance of the pixel point and the support point;
  • a probability calculation module configured to calculate a parallax condition probability and a parallax confidence level of a pixel point in the left image by using a parallax calculation method
  • the a posteriori parallax calculation module is configured to calculate an optimal a posteriori parallax by using a Bayesian principle according to the Delaunay triangle, the parallax condition probability and the parallax confidence level.
  • the Delaunay triangle building module includes:
  • a splitting unit, configured to perform Delaunay triangulation on the set of support points of the left image by the divide-and-conquer method;
  • a distance calculation unit, configured to calculate the minimum support distance m between each pixel point and the support points of the left image;
  • a prior probability calculation unit, configured to calculate the prior probability of the parallax with a Gaussian model, P(d_n | S) ∝ exp(−(d_n − d_p)² / (2σ_{p,m}²)),
  • where D_{p,i} is the Euclidean distance between the pixel point and the corresponding support point of its Delaunay triangle, the minimum support distance m being the smallest of these distances; σ_{p,m} is the variance, which depends on m and a constant parameter set from empirical values; d_p = a·u_p + b·v_p + c is the disparity estimate determined by the support points, the parameters a, b and c being obtained by fitting the plane through the three support points; d_n is the support point disparity value; and u_p and v_p are the abscissa and ordinate of the pixel point, respectively.
  • the a posteriori parallax calculation module is specifically a module that calculates the optimal a posteriori parallax d* as the maximum a posteriori estimate d* = argmax_{d_n} P(d_n | S) · P(O | S, d_n),
  • where f(d_n) is the Hamming distance of the improved census transform stereo matching algorithm; f_m(d) is the confidence level function of the conditional parallax; S is the set of support points; O is the matching cost based on a local stereo matching operator; and β is the weight parameter.
  • the stereo matching method comprises: extracting feature points of the left and right images, and performing feature point matching on the feature points to determine support points; constructing a Delaunay triangle according to the support points, wherein the Delaunay triangle includes the prior probability of the parallax of all the pixels in the triangle and the minimum support distance between each pixel and the support points; calculating the parallax conditional probability and the parallax confidence level of the pixels in the left image using a parallax calculation method; and, according to the Delaunay triangle, the parallax conditional probability and the parallax confidence level, calculating the optimal a posteriori parallax by the Bayesian principle.
  • the method uses fast, high-precision and adaptive stereo matching method to achieve fast matching and obtain high-precision parallax map, which is especially suitable for mobile platforms or applications with high real-time requirements.
  • a stereo matching system is provided, which has the above-mentioned beneficial effects and will not be described herein.
  • FIG. 1 is a flowchart of a stereo matching method according to an embodiment of the present invention
  • FIG. 2 is a structural block diagram of a stereo matching system according to an embodiment of the present invention.
  • the core of the invention is to provide a stereo matching method and system, which realizes fast matching to obtain a high-precision parallax map, and is particularly suitable for a mobile platform or an application field with high real-time requirements.
  • following the usual conventions of stereo matching, the left image is used as the reference image, and the left and right images are assumed to have already been camera-calibrated and stereo-rectified.
  • FIG. 1 is a flowchart of a stereo matching method according to an embodiment of the present invention; the method may include:
  • S100 Extract feature points of the left and right images, and perform feature point matching on the feature points to determine support points;
  • the step is mainly for obtaining a support point, and the embodiment does not limit the extraction and matching algorithm of the specific feature point.
  • the selected algorithm should also have relatively simple computational logic. For example, when feature point matching is performed, it can be carried out quickly by using the epipolar constraint and feature point descriptors, and the matched feature points are called support points.
  • the epipolar constraint reduces the matching search from two dimensions to one, greatly reducing the complexity and increasing the calculation speed, which makes it suitable for mobile platforms or applications with high real-time requirements.
  • extracting the feature points of the left and right images and performing feature point matching on the feature points to determine the support points may include:
  • the feature points are extracted from the left and right images using the FAST (features from accelerated segment test) operator, and the feature description is performed using the BRIEF (binary robust independent elementary features) descriptor.
  • FAST feature point detection is recognized as a fast and effective feature point extraction method.
  • FAST feature extraction mainly includes three steps: a segmentation test on the pixels lying on a circle of fixed radius (usually three pixels) around each candidate, which removes a large number of non-feature candidates through a logical test; corner classification, in which an ID3 classifier decides from the 16 circle pixels whether a candidate point is a corner feature; and non-maximum suppression to verify the corner features.
  • the BRIEF descriptor randomly selects pairs of points around the feature point and compares their gray levels to directly obtain a binary feature description vector.
  • the BRIEF descriptor has two distinct advantages: it requires few bytes, so the memory overhead is small; and the Hamming distance between descriptors can be computed very quickly.
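For concreteness, the following sketch shows one way to extract FAST keypoints and BRIEF descriptors with OpenCV; the choice of library, the opencv-contrib dependency for the BRIEF extractor, and the parameter values are assumptions, not prescribed by the patent:

```python
# Illustrative sketch: FAST keypoints + BRIEF descriptors with OpenCV.
# The BRIEF extractor lives in opencv-contrib-python (a tooling assumption).
import cv2

def fast_brief_features(gray):
    fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()  # 32 bytes = 256 bits by default
    keypoints = fast.detect(gray, None)                       # FAST corner detection
    keypoints, descriptors = brief.compute(gray, keypoints)   # binary descriptors
    return keypoints, descriptors

def hamming(d1, d2):
    """Hamming distance between two BRIEF descriptors (rows of the array above)."""
    return int(cv2.norm(d1, d2, cv2.NORM_HAMMING))
```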
  • the feature points are matched by the epipolar constraint and the Hamming distance between the feature descriptions, and the successfully matched feature points are used as the support points.
  • feature point matching is performed quickly using the epipolar constraint and the feature descriptor distance, and the matched feature points are called support points.
  • the epipolar constraint reduces the matching search from two dimensions to one, greatly reducing the complexity.
  • the matching point in the right image of a feature point in the left image can only lie within a small interval on the corresponding epipolar line. Therefore, in this embodiment, a WTA (winner takes all) strategy can be used to select, in the disparity space, the point with the smallest matching cost as the matching point; the disparity D_L(p) of the point p in the left image is then D_L(p) = argmin_{d ∈ Disp} H(BRIEF_L(p), BRIEF_R(p − d)),
  • where d (d ∈ Disp) denotes a candidate disparity in the disparity space Disp, generally an integer between 0 and the maximum disparity d_max, and H(·) denotes the Hamming distance between the BRIEF descriptors of the corresponding left and right pixels.
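A minimal sketch of this WTA rule, assuming rectified images and precomputed BRIEF bit vectors (the function name and the dictionary layout are illustrative):

```python
# Illustrative sketch of the WTA rule above, assuming rectified images and
# precomputed BRIEF bit vectors for both images.
import numpy as np

def wta_disparity(u, desc_left_p, right_row_descs, d_max):
    """desc_left_p: BRIEF bits (0/1 uint8 array) of the left pixel at column u.
    right_row_descs: {column: BRIEF bits} for candidates on the same image row
    (the epipolar line after rectification). Returns (best disparity, cost)."""
    best_d, best_cost = 0, np.inf
    for d in range(d_max + 1):
        desc_r = right_row_descs.get(u - d)
        if desc_r is None:
            continue
        cost = int(np.count_nonzero(desc_left_p != desc_r))  # Hamming distance
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d, best_cost
```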
  • Delaunay triangulation of the left image divides the image into interconnected triangular meshes that cover the entire image plane and describes the disparity map as a series of triangular regions with the same or similar disparity values; the triangular mesh can reflect the topological connection between a pixel and its neighbouring pixels.
  • the triangle division should be large enough to reduce the ambiguity of the matching while ensuring the edge details.
  • the vertex density and number should be as small as possible to speed up the matching.
  • the number of vertices should be sufficient to better ensure the accuracy of subsequent disparity map matching.
  • Delaunay triangulation has the following advantages: good structure, simple data structure, small data redundancy, high storage efficiency, and consistent with irregular ground features, which can represent linear features and can adapt to data of various distribution densities.
  • the Delaunay triangulation algorithm may include a random increment method, a triangulation method, and a divide and conquer method.
  • the random incremental method is simple to implement and uses little memory, but its time complexity is high.
  • the triangulation growth method is rarely used because of its relatively low efficiency.
  • the divide-and-conquer method is the most efficient, and after triangulation the triangular patches are smoother while preserving the edge features of objects.
  • the present embodiment can perform Delaunay triangulation on the set of support points using the divide and conquer method.
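A minimal sketch of this triangulation step; scipy.spatial.Delaunay is used for brevity (it is built on Qhull rather than an explicit divide-and-conquer implementation), and the sample support points are made up:

```python
# Illustrative sketch of Delaunay triangulation of the support points.
import numpy as np
from scipy.spatial import Delaunay

# support_points: N x 3 array of (u, v, disparity) from the matching step
support_points = np.array([[10, 12, 5], [200, 15, 7], [40, 180, 9],
                           [220, 190, 6], [120, 100, 8]], dtype=float)

tri = Delaunay(support_points[:, :2])        # triangulate on (u, v) only
print(tri.simplices)                         # vertex indices of each triangle
# tri.find_simplex([[50.0, 60.0]]) gives the triangle containing a pixel
```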
  • the Delaunay triangle provides information such as the prior probability of the parallax of all the pixels in the triangle and the minimum support distance of the pixel and the support point; specifically, constructing the Delaunay triangle according to the support point may include :
  • a Gaussian model is used to calculate the prior probability of the parallax, P(d_n | S) ∝ exp(−(d_n − d_p)² / (2σ_{p,m}²)),
  • where D_{p,i} is the Euclidean distance between the pixel point and the corresponding support point (vertex) of its Delaunay triangle, the minimum support distance m being the smallest of these distances; σ_{p,m} is the variance, which depends on m and a constant parameter that can be set and modified according to empirical values; d_p = a·u_p + b·v_p + c is the disparity estimate determined by the support points, the parameters a, b and c being obtained by fitting the plane through the three support points; d_n is the support point disparity value; and u_p and v_p are respectively the abscissa and ordinate of the pixel point.
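A sketch of the plane fit and the Gaussian prior under the assumptions stated above; in particular, the way the variance is scaled by the minimum support distance is an assumption, since the exact form of σ_{p,m} is not reproduced here:

```python
# Sketch: disparity-plane fit from the three triangle vertices and a Gaussian
# prior centred on the plane estimate. The sigma scaling is an assumption.
import numpy as np

def fit_plane(tri_pts):
    """tri_pts: 3 x 3 array of (u, v, d) support points -> plane (a, b, c)."""
    A = np.column_stack([tri_pts[:, 0], tri_pts[:, 1], np.ones(3)])
    a, b, c = np.linalg.solve(A, tri_pts[:, 2])   # exact fit through 3 points
    return a, b, c

def disparity_prior(d_candidates, u_p, v_p, tri_pts, sigma=2.0):
    """Gaussian prior P(d_n | S) over candidate disparities for pixel (u_p, v_p)."""
    a, b, c = fit_plane(tri_pts)
    d_p = a * u_p + b * v_p + c                                     # plane estimate
    m = np.min(np.hypot(tri_pts[:, 0] - u_p, tri_pts[:, 1] - v_p))  # min support distance
    sigma_pm = sigma * max(m, 1.0)                                  # assumed scaling
    prior = np.exp(-0.5 * ((np.asarray(d_candidates, float) - d_p) / sigma_pm) ** 2)
    return prior / prior.sum()
```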
  • this step mainly calculates the parallax conditional probability and the confidence level of any pixel point in the left image, and does not limit the specific parallax calculation method.
  • the improved census transform stereo matching algorithm has a simple structure and a fast calculation speed; therefore, the parallax conditional probability and the parallax confidence level of the pixel points in the left image are preferably calculated with an improved census transform stereo matching algorithm.
  • the census transform is a non-parametric local transform.
  • its downside is that the result depends too heavily on the central pixel; therefore, the present embodiment adopts a census transform modified with neighbourhood information for the stereo matching algorithm.
  • the improved census transform stereo matching algorithm addresses the weaknesses of the traditional census transform in parallax-discontinuous regions and under noise interference.
  • two pieces of information are used: the gray-level difference between each pixel and the central pixel, and the gray-level difference between each pixel and the neighbourhood mean.
  • the census transform is improved and the initial matching cost is obtained by a Hamming distance calculation; parallel layered weighted cost aggregation improves the matching precision and reduces the cost of the aggregation calculations.
  • the improved census transform stereo matching algorithm makes the representation of the central pixel more precise; the information representation of the transformed image in the parallax discontinuous region is more abundant; and the influence of noise on the matching quality is reduced.
  • the test shows that the algorithm is simple in structure, low in complexity, high in robustness, and effectively improves matching accuracy.
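The following sketch shows an improved census transform of the kind described above; the exact bit layout and window size are assumptions. Each neighbour contributes two bits, one from comparison with the centre pixel and one from comparison with the neighbourhood gray-level mean:

```python
# Illustrative sketch of an improved (neighbourhood-mean) census transform.
import numpy as np

def improved_census_bits(img, y, x, radius=3):
    window = img[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(np.float32)
    bits_centre = (window < img[y, x]).astype(np.uint8).ravel()   # vs. centre pixel
    bits_mean = (window < window.mean()).astype(np.uint8).ravel() # vs. neighbourhood mean
    return np.concatenate([bits_centre, bits_mean])

def improved_census_cost(left, right, y, x, d, radius=3):
    """Initial matching cost = Hamming distance between improved census strings."""
    return int(np.count_nonzero(improved_census_bits(left, y, x, radius) !=
                                improved_census_bits(right, y, x - d, radius)))
```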
  • this step uses the Hamming distance to represent the parallax conditional probability P(O | S, d_n), where f(d_n) is the Hamming distance (matching cost) of the improved census transform stereo matching algorithm and f_m(d) is the confidence level function of the conditional parallax. From a statistical point of view, the parallax conditional probability characterizes the confidence level of the disparity d_n.
  • the optimal a posteriori parallax is the optimal posterior estimate of the parallax: using the Bayesian principle, the optimal posterior disparity is obtained, which is the best disparity of the pixel.
  • the Bayesian parameter estimation model is obtained as P(d_n | S, O) ∝ P(d_n | S) · P(O | S, d_n), and the optimal a posteriori parallax is d* = argmax_{d_n} P(d_n | S, O),
  • where f(d_n) is the Hamming distance of the improved census transform stereo matching algorithm; f_m(d) is the confidence level function of the conditional parallax; S is the set of support points; O is the matching cost based on a local stereo matching operator; β is the weight parameter; and d_n is the parallax.
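A sketch of the MAP combination, assuming the conditional probability is taken as an exponential of the weighted Hamming cost; the patent's exact functional form (including the confidence function f_m) is not reproduced here:

```python
# Sketch: maximum a posteriori disparity from a prior and census Hamming costs.
import numpy as np

def map_disparity(prior, hamming_costs, beta=0.1):
    """prior: P(d | S) over the disparity candidates (see the prior sketch).
    hamming_costs: f(d) from the improved census transform, same candidates.
    Returns the index of the candidate maximising P(d | S) * exp(-beta * f(d))."""
    conditional = np.exp(-beta * np.asarray(hamming_costs, dtype=np.float64))
    posterior = np.asarray(prior, dtype=np.float64) * conditional
    return int(np.argmax(posterior))
```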
  • the present invention provides a Bayesian stereo matching method for optimal disparity estimation, in which the algorithm of support point extraction and conditional probability is replaceable, and the corresponding model parameters are adjusted.
  • the stereo matching method provided by the embodiment of the present invention can determine the weight parameters of the prior probability and the conditional probability according to the confidence level of the conditional parallax and the geometric topological relationship between the pixel point and the support points, and thus obtain a more accurate posterior estimate of the parallax.
  • the method makes full use of the information contained in the a priori parallax and the conditional parallax, and the model is more reasonable.
  • the parameters are adaptive.
  • the prior probability of the parameter model and the weight parameter of the conditional probability are adaptively determined according to the confidence level and the geometric topological relationship.
  • the parametric model only requires the empirical parameters σ and β to be determined (which can be done experimentally), while σ_{p,m} and P(O | S, d_n) are determined adaptively.
  • the parameter model is simple in form and efficient in operation. That is to say, the method utilizes a fast, high-precision, adaptive stereo matching method to achieve fast matching to obtain a high-precision parallax map, which is particularly suitable for mobile platforms or applications requiring high real-time performance.
  • the method may further include:
  • the left and right consistency detection method is used to detect the mismatched points in the disparity map.
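A minimal sketch of a left-right consistency check of this kind; the tolerance and the brute-force loops are illustrative, not from the patent:

```python
# Illustrative sketch of a left-right consistency check.
import numpy as np

def lr_mismatch_mask(disp_left, disp_right, tol=1.0):
    """True where the left disparity disagrees with its match in the right map."""
    h, w = disp_left.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disp_left[y, x]))   # matched column in the right image
            if xr < 0 or abs(disp_left[y, x] - disp_right[y, xr]) > tol:
                mask[y, x] = True                  # flag as mismatched point
    return mask
```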
  • the method may further include:
  • the disparity values of the error matching points are replaced by the disparity values on the left and right sides of the mismatching point.
  • according to a WTA strategy, the disparity value of a mismatched point is replaced by one of the disparity values on its left and right sides, which improves the accuracy of stereo matching.
  • the parallax of sub-pixel precision is obtained by interpolation optimization, so that the disparity map is more complete and correct.
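A common way to obtain sub-pixel precision is parabolic interpolation of the matching cost around the integer disparity; the patent does not specify which interpolation it uses, so the following is only an illustrative sketch:

```python
# Illustrative sketch: sub-pixel refinement by fitting a parabola through the
# matching costs at d-1, d and d+1 (a common choice, shown as an example only).
def subpixel_disparity(d, cost_minus, cost_zero, cost_plus):
    """d: integer WTA disparity; cost_*: matching costs at d-1, d, d+1."""
    denom = cost_minus - 2.0 * cost_zero + cost_plus
    if denom == 0.0:
        return float(d)                    # flat cost curve, keep the integer value
    return d + 0.5 * (cost_minus - cost_plus) / denom
```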
  • the stereo matching method uses the FAST feature extraction operator and the BRIEF description operator to construct the Bayesian prior probability model, which improves the efficiency and density of the support point.
  • the improved CT algorithm (i.e., the improved census transform algorithm) is used to compute the conditional probability in the Bayesian parameter estimation model of the method; the geometric topology of the a priori parallax and the confidence level of the conditional probability are fully considered, so the model is parameter-adaptive, simple in form, and highly efficient.
  • the stereo matching system provided by the embodiment of the present invention is described below.
  • the stereo matching system described below and the stereo matching method described above can refer to each other.
  • FIG. 2 is a structural block diagram of a stereo matching system according to an embodiment of the present invention.
  • the system may include:
  • the support point determining module 100 is configured to extract feature points of the left and right images, and perform feature point matching on the feature points to determine a support point;
  • a Delaunay triangle building block 200 configured to construct a Delaunay triangle according to the support point; wherein the Delaunay triangle includes a prior probability of parallax of all pixel points in the triangle and a minimum support distance of the pixel point and the support point;
  • the probability calculation module 300 is configured to calculate a parallax condition probability and a parallax confidence level of the pixel points in the left image by using a parallax calculation method;
  • the a posteriori parallax calculation module 400 is configured to calculate an optimal a posteriori parallax by using a Bayesian principle according to the Delaunay triangle, the parallax condition probability and the parallax confidence level.
  • the Delaunay triangle building module 200 may include:
  • a splitting unit, configured to perform Delaunay triangulation on the set of support points of the left image by the divide-and-conquer method;
  • a distance calculation unit, configured to calculate the minimum support distance m between each pixel point and the support points of the left image;
  • a prior probability calculation unit, configured to calculate the prior probability of the parallax with a Gaussian model, P(d_n | S) ∝ exp(−(d_n − d_p)² / (2σ_{p,m}²)),
  • where D_{p,i} is the Euclidean distance between the pixel point and the corresponding support point of its Delaunay triangle, the minimum support distance m being the smallest of these distances; σ_{p,m} is the variance, which depends on m and a constant parameter set from empirical values; d_p = a·u_p + b·v_p + c is the disparity estimate determined by the support points, the parameters a, b and c being obtained by fitting the plane through the three support points; d_n is the support point disparity value; and u_p and v_p are the abscissa and ordinate of the pixel point, respectively.
  • the a posteriori parallax calculation module 400 is specifically a module that calculates the optimal a posteriori parallax d* as the maximum a posteriori estimate d* = argmax_{d_n} P(d_n | S) · P(O | S, d_n),
  • where f(d_n) is the Hamming distance of the improved census transform stereo matching algorithm; f_m(d) is the confidence level function of the conditional parallax; S is the set of support points; O is the matching cost based on a local stereo matching operator; and β is the weight parameter.
  • system may further include:
  • the consistency detection module is configured to detect the mismatched point in the disparity map by using the left and right consistency detection method.
  • system may further include:
  • a replacement module, configured to replace the disparity value of a mismatched point with one of the disparity values on its left and right sides according to a WTA policy.
  • system may further include:
  • the denoising module is used to filter out noise with fast median filtering; finally, sub-pixel-precision parallax is obtained by interpolation optimization, so that the disparity map is more complete and correct.
  • the stereo matching system obtains the optimal parallax estimate by using the idea of Bayesian maximum a posteriori estimation.
  • the system first uses the support points obtained by fast matching to compute the prior probability of the parallax, where the prior probability is related to the support point parallax, the pixel point geometry and the minimum support distance; the conditional probability is then calculated by the improved census transform stereo matching algorithm, where the conditional probability is related to the matching cost and the confidence level; finally, the posterior probability is obtained from the prior probability and the conditional probability, and the optimal disparity estimate is obtained by maximizing the posterior probability.
  • the parameters in the Bayesian model used by the system are adaptive.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
  • the software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a stereo matching method and system. The method comprises the steps of: extracting feature points of left and right images and performing feature point matching on the feature points to determine support points (S100); constructing a Delaunay triangle according to the support points, the Delaunay triangle including the prior probabilities of the parallaxes of all pixel points inside the triangle and the minimum support distances between the pixel points and the support points (S110); calculating the parallax conditional probabilities and the parallax confidence levels of pixel points in the left image using a parallax calculation method (S120); and calculating an optimal posterior parallax by means of a Bayesian method according to the Delaunay triangle, the parallax conditional probabilities and the parallax confidence levels (S130). The method achieves fast matching to obtain a high-precision parallax map, and is particularly suitable for mobile platforms or application fields with high real-time requirements.
PCT/CN2017/070639 2016-11-30 2017-01-09 Procédé et système de stéréocorrespondance WO2018098891A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611079621.9 2016-11-30
CN201611079621.9A CN106780442B (zh) 2016-11-30 2016-11-30 一种立体匹配方法及系统

Publications (1)

Publication Number Publication Date
WO2018098891A1 true WO2018098891A1 (fr) 2018-06-07

Family

ID=58901184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/070639 WO2018098891A1 (fr) 2016-11-30 2017-01-09 Procédé et système de stéréocorrespondance

Country Status (2)

Country Link
CN (1) CN106780442B (fr)
WO (1) WO2018098891A1 (fr)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887021A (zh) * 2019-01-19 2019-06-14 天津大学 基于跨尺度的随机游走立体匹配方法
CN109961092A (zh) * 2019-03-04 2019-07-02 北京大学深圳研究生院 一种基于视差锚点的双目视觉立体匹配方法及系统
CN109978934A (zh) * 2019-03-04 2019-07-05 北京大学深圳研究生院 一种基于匹配代价加权的双目视觉立体匹配方法及系统
CN110533711A (zh) * 2019-09-04 2019-12-03 云南电网有限责任公司带电作业分公司 一种基于加速稳健特征的高效大尺度立体匹配算法
CN110599478A (zh) * 2019-09-16 2019-12-20 中山大学 一种图像区域复制粘贴篡改检测方法
CN111439889A (zh) * 2020-04-01 2020-07-24 吉林建筑大学 一种协同去除地下水中铁锰、氨氮和有机物的水循环处理方法
CN111476837A (zh) * 2019-01-23 2020-07-31 上海科技大学 自适应立体匹配优化方法及其装置、设备和存储介质
CN111784753A (zh) * 2020-07-03 2020-10-16 江苏科技大学 自主水下机器人回收对接前景视场三维重建立体匹配方法
CN111833393A (zh) * 2020-07-05 2020-10-27 桂林电子科技大学 一种基于边缘信息的双目立体匹配方法
CN112116640A (zh) * 2020-09-11 2020-12-22 南京理工大学智能计算成像研究院有限公司 一种基于OpenCL的双目立体匹配方法
CN112163622A (zh) * 2020-09-30 2021-01-01 山东建筑大学 全局与局部融合约束的航空宽基线立体像对线段特征匹配方法
CN112308897A (zh) * 2020-10-30 2021-02-02 江苏大学 一种基于邻域信息约束与自适应窗口的立体匹配方法
CN112435282A (zh) * 2020-10-28 2021-03-02 西安交通大学 一种基于自适应候选视差预测网络的实时双目立体匹配方法
CN113129313A (zh) * 2021-03-22 2021-07-16 北京中科慧眼科技有限公司 基于超像素的稠密匹配算法、系统和智能终端
CN114299132A (zh) * 2022-01-04 2022-04-08 重庆邮电大学 一种采用代价融合与分层匹配策略的半全局立体匹配方法
CN115018934A (zh) * 2022-07-05 2022-09-06 浙江大学 结合十字骨架窗口和图像金字塔的立体图像深度检测方法
CN117834844A (zh) * 2024-01-09 2024-04-05 国网湖北省电力有限公司荆门供电公司 基于特征对应的双目立体匹配方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248179A (zh) * 2017-06-08 2017-10-13 爱佩仪中测(成都)精密仪器有限公司 用于视差计算的三维匹配建立方法
CN108876841B (zh) * 2017-07-25 2023-04-28 成都通甲优博科技有限责任公司 一种视差图视差求精中插值的方法及系统
CN107730543B (zh) * 2017-09-08 2021-05-14 成都通甲优博科技有限责任公司 一种半稠密立体匹配的快速迭代计算方法
CN107945217B (zh) * 2017-11-20 2020-07-14 北京宇航系统工程研究所 一种适用于自动装配的图像特征点对快速筛选方法及系统
CN109816710B (zh) * 2018-12-13 2023-08-29 中山大学 一种双目视觉系统高精度且无拖影的视差计算方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110007948A1 (en) * 2004-04-02 2011-01-13 The Boeing Company System and method for automatic stereo measurement of a point of interest in a scene
CN102609936A (zh) * 2012-01-10 2012-07-25 四川长虹电器股份有限公司 基于置信度传播的图像立体匹配方法
CN103440653A (zh) * 2013-08-27 2013-12-11 北京航空航天大学 双目视觉立体匹配方法
CN104091339A (zh) * 2014-07-17 2014-10-08 清华大学深圳研究生院 一种图像快速立体匹配方法及装置
CN106097336A (zh) * 2016-06-07 2016-11-09 重庆科技学院 基于置信传播和自相似差异测度的前后景立体匹配方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110007948A1 (en) * 2004-04-02 2011-01-13 The Boeing Company System and method for automatic stereo measurement of a point of interest in a scene
CN102609936A (zh) * 2012-01-10 2012-07-25 四川长虹电器股份有限公司 基于置信度传播的图像立体匹配方法
CN103440653A (zh) * 2013-08-27 2013-12-11 北京航空航天大学 双目视觉立体匹配方法
CN104091339A (zh) * 2014-07-17 2014-10-08 清华大学深圳研究生院 一种图像快速立体匹配方法及装置
CN106097336A (zh) * 2016-06-07 2016-11-09 重庆科技学院 基于置信传播和自相似差异测度的前后景立体匹配方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIANG, FENG ET AL.: "Fast Stereo Matching Algorithm Based on Bayesian Model", COMPUTER ENGINEERING AND DESIGN, vol. 36, no. 4, 16 April 2015 (2015-04-16), pages 956 - 961, ISSN: 1000-7024 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887021B (zh) * 2019-01-19 2023-06-06 天津大学 基于跨尺度的随机游走立体匹配方法
CN109887021A (zh) * 2019-01-19 2019-06-14 天津大学 基于跨尺度的随机游走立体匹配方法
CN111476837B (zh) * 2019-01-23 2023-02-24 上海科技大学 自适应立体匹配优化方法及其装置、设备和存储介质
CN111476837A (zh) * 2019-01-23 2020-07-31 上海科技大学 自适应立体匹配优化方法及其装置、设备和存储介质
CN109961092B (zh) * 2019-03-04 2022-11-01 北京大学深圳研究生院 一种基于视差锚点的双目视觉立体匹配方法及系统
CN109961092A (zh) * 2019-03-04 2019-07-02 北京大学深圳研究生院 一种基于视差锚点的双目视觉立体匹配方法及系统
CN109978934A (zh) * 2019-03-04 2019-07-05 北京大学深圳研究生院 一种基于匹配代价加权的双目视觉立体匹配方法及系统
CN109978934B (zh) * 2019-03-04 2023-01-10 北京大学深圳研究生院 一种基于匹配代价加权的双目视觉立体匹配方法及系统
CN110533711A (zh) * 2019-09-04 2019-12-03 云南电网有限责任公司带电作业分公司 一种基于加速稳健特征的高效大尺度立体匹配算法
CN110599478B (zh) * 2019-09-16 2023-02-03 中山大学 一种图像区域复制粘贴篡改检测方法
CN110599478A (zh) * 2019-09-16 2019-12-20 中山大学 一种图像区域复制粘贴篡改检测方法
CN111439889A (zh) * 2020-04-01 2020-07-24 吉林建筑大学 一种协同去除地下水中铁锰、氨氮和有机物的水循环处理方法
CN111784753B (zh) * 2020-07-03 2023-12-05 江苏科技大学 自主水下机器人回收对接前景视场三维重建立体匹配方法
CN111784753A (zh) * 2020-07-03 2020-10-16 江苏科技大学 自主水下机器人回收对接前景视场三维重建立体匹配方法
CN111833393A (zh) * 2020-07-05 2020-10-27 桂林电子科技大学 一种基于边缘信息的双目立体匹配方法
CN112116640B (zh) * 2020-09-11 2024-02-23 南京理工大学智能计算成像研究院有限公司 一种基于OpenCL的双目立体匹配方法
CN112116640A (zh) * 2020-09-11 2020-12-22 南京理工大学智能计算成像研究院有限公司 一种基于OpenCL的双目立体匹配方法
CN112163622A (zh) * 2020-09-30 2021-01-01 山东建筑大学 全局与局部融合约束的航空宽基线立体像对线段特征匹配方法
CN112435282A (zh) * 2020-10-28 2021-03-02 西安交通大学 一种基于自适应候选视差预测网络的实时双目立体匹配方法
CN112435282B (zh) * 2020-10-28 2023-09-12 西安交通大学 一种基于自适应候选视差预测网络的实时双目立体匹配方法
CN112308897A (zh) * 2020-10-30 2021-02-02 江苏大学 一种基于邻域信息约束与自适应窗口的立体匹配方法
CN113129313A (zh) * 2021-03-22 2021-07-16 北京中科慧眼科技有限公司 基于超像素的稠密匹配算法、系统和智能终端
CN114299132A (zh) * 2022-01-04 2022-04-08 重庆邮电大学 一种采用代价融合与分层匹配策略的半全局立体匹配方法
CN115018934A (zh) * 2022-07-05 2022-09-06 浙江大学 结合十字骨架窗口和图像金字塔的立体图像深度检测方法
CN115018934B (zh) * 2022-07-05 2024-05-31 浙江大学 结合十字骨架窗口和图像金字塔的立体图像深度检测方法
CN117834844A (zh) * 2024-01-09 2024-04-05 国网湖北省电力有限公司荆门供电公司 基于特征对应的双目立体匹配方法

Also Published As

Publication number Publication date
CN106780442B (zh) 2019-12-24
CN106780442A (zh) 2017-05-31

Similar Documents

Publication Publication Date Title
WO2018098891A1 (fr) Procédé et système de stéréocorrespondance
US9754377B2 (en) Multi-resolution depth estimation using modified census transform for advanced driver assistance systems
CN107220997B (zh) 一种立体匹配方法及系统
US8199977B2 (en) System and method for extraction of features from a 3-D point cloud
CN107369131B (zh) 图像的显著性检测方法、装置、存储介质和处理器
CN110147815A (zh) 基于k均值聚类的多帧点云融合方法及装置
CN109658378B (zh) 基于土壤ct图像的孔隙辨识方法及系统
CN110009663B (zh) 一种目标跟踪方法、装置、设备及计算机可读存储介质
Hirner et al. FC-DCNN: A densely connected neural network for stereo estimation
Mordohai The self-aware matching measure for stereo
Geetha et al. An improved method for segmentation of point cloud using minimum spanning tree
Psota et al. A local iterative refinement method for adaptive support-weight stereo matching
CN107122782B (zh) 一种均衡的半密集立体匹配方法
CN111179327B (zh) 一种深度图的计算方法
Oh et al. Probabilistic Correspondence Matching using Random Walk with Restart.
Morreale et al. Dense 3D visual mapping via semantic simplification
CN113344989B (zh) 一种NCC和Census的最小生成树航拍图像双目立体匹配方法
Saygili et al. Stereo similarity metric fusion using stereo confidence
Lefebvre et al. A 1D approach to correlation-based stereo matching
Sheng et al. Depth enhancement based on hybrid geometric hole filling strategy
CN106652048B (zh) 基于3d-susan算子的三维模型感兴趣点提取方法
Zhang et al. Multi-view depth estimation with color-aware propagation and texture-aware triangulation
Lian et al. 3D-SIFT Point Cloud Registration Method Integrating Curvature Information
Navarro et al. Filtering and interpolation of inaccurate and incomplete depth maps
Xia et al. Efficient Large Scale Stereo Matching based on Cross-Scale

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17875104

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17875104

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17875104

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.02.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17875104

Country of ref document: EP

Kind code of ref document: A1