WO2020177060A1 - A binocular vision stereo matching method based on extremum checking and weighted voting


Info

Publication number
WO2020177060A1
Authority
WO
WIPO (PCT)
Prior art keywords
cost
matching
disparity value
pixel
disparity
Application number
PCT/CN2019/076888
Other languages
English (en)
French (fr)
Inventor
赵勇
张丽
陈天健
桑海伟
卢昌义
谢旺多
Original Assignee
北京大学深圳研究生院
Application filed by 北京大学深圳研究生院
Priority to PCT/CN2019/076888
Publication of WO2020177060A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images

Definitions

  • the invention relates to the technical field of binocular vision, in particular to a stereo matching method for binocular vision based on extreme value checking and weighted voting.
  • Binocular stereo vision is an important form of computer vision. Based on the parallax principle, it uses imaging equipment to obtain two images of a measured object from different positions and computes the positional deviation between corresponding image points to recover the object's three-dimensional geometric information. In other words, it processes the real world by simulating the human visual system. Research on stereo vision matching can greatly enhance a computer's or robot's ability to perceive the environment, so that robots can better adapt to their surroundings, become more intelligent, and serve people better. After years of technological development, binocular stereo vision has been applied in the fields of robot vision, aerial surveying and mapping, reverse engineering, military applications, medical imaging, and industrial inspection.
  • binocular stereo vision combines the images obtained by two imaging devices and observes the differences between them, so that a computer can obtain accurate depth information, establish correspondences between features, and map the same physical point across different images.
  • the positional difference between corresponding points is usually called the disparity.
  • the most important yet very difficult problem in binocular stereo vision is stereo vision matching, that is, finding matching corresponding points in images from different viewpoints.
  • the main technical problem solved by the present invention is how to find matching corresponding points in images from different viewpoints, so as to improve the accuracy and robustness of binocular vision stereo matching.
  • this application provides a binocular vision stereo matching method based on extreme value checking and weighted voting.
  • an embodiment provides a weighted-voting-based binocular vision stereo matching method, which includes the following steps:
  • Acquisition step: acquire images from two viewpoints;
  • Aggregation step: perform cost aggregation on a first pixel in one of the images according to multiple preset cost functions and multiple preset disparity values, to obtain a cost aggregation function corresponding to each cost function,
  • where the first pixel is any pixel in the image;
  • Calculation step: obtain the matching cost corresponding to each disparity value at the first pixel according to each cost aggregation function;
  • Verification step: perform a left-right consistency check on the disparity value corresponding to each matching cost, and if the disparity value corresponding to a matching cost passes the left-right consistency check, give that disparity value a special weight;
  • Weighted voting step: perform weighted voting according to the special weighting result of the disparity value corresponding to each matching cost and the matching cost corresponding to each disparity value at the first pixel, calculate the weighted voting value corresponding to each disparity value, and obtain the best disparity value of the first pixel from the weighted voting values corresponding to the disparity values.
  • obtaining the matching cost corresponding to each disparity value at the first pixel according to each cost aggregation function includes: for each cost aggregation function, calculating the minimum values of the cost aggregation function at the first pixel and using each as the matching cost corresponding to its disparity value at the first pixel; the matching costs corresponding to the disparity values at the first pixel then form the first matching cost group.
  • performing a left-right consistency check on the disparity value corresponding to each matching cost includes: for the disparity value corresponding to each matching cost in the first matching cost group, obtaining the second pixel corresponding to the first pixel in the other image according to that disparity value; performing cost aggregation on the second pixel according to the cost function corresponding to the matching cost and the preset multiple disparity values, to obtain the cost aggregation function corresponding to the second pixel; obtaining the matching cost corresponding to each disparity value at the second pixel according to that cost aggregation function, to obtain the second matching cost group; and sequentially comparing the disparity value corresponding to each matching cost in the first matching cost group with the disparity value corresponding to each matching cost in the second matching cost group; if the absolute value of a comparison result is less than the preset check threshold, it is determined that the corresponding disparity value in the first matching cost group passes the left-right consistency check.
  • performing weighted voting according to the special weighting result of the disparity value corresponding to each matching cost and the matching cost corresponding to each disparity value at the first pixel, and calculating the weighted voting value corresponding to each disparity value, includes: obtaining each matching cost in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of the disparity value corresponding to each matching cost; and performing weighted voting according to these matching costs, disparity values, and special weighting results, to calculate the weighted voting value corresponding to each disparity value.
  • the statistical learning step includes: obtaining each matching cost in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of the disparity value corresponding to each matching cost; inputting each matching cost in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of the disparity value corresponding to each matching cost into an
  • optimal disparity statistical learning model; and obtaining the optimal disparity value of the first pixel according to the statistical learning value obtained by the calculation.
  • an embodiment provides an image visual stereo matching method, including:
  • the stereo matching method for binocular vision described in the first aspect is used to perform stereo matching on each pixel in one of the images to obtain the optimal disparity value of each pixel.
  • an embodiment provides a binocular vision stereo matching system based on extreme value checking and weighted voting, including:
  • a memory, used to store a program;
  • a processor, configured to execute the program stored in the memory to implement the method described in the first aspect or the second aspect.
  • an embodiment provides a computer-readable storage medium including a program that can be executed by a processor to implement the method described in the first or second aspect above.
  • a binocular vision stereo matching method based on extremum checking and weighted voting includes: acquiring images at two viewpoints; performing cost aggregation on a first pixel in one of the images according to multiple preset cost functions and multiple preset disparity values, to obtain the cost aggregation function corresponding to each cost function; obtaining the matching cost corresponding to each disparity value at the first pixel according to each cost aggregation function; and performing a left-right consistency check on the disparity value corresponding to each matching cost.
  • if the disparity value corresponding to a matching cost passes the left-right consistency check, that disparity value is specially weighted; then, according to the special
  • weighting result of the disparity value corresponding to each matching cost and the matching cost corresponding to each disparity value at the first pixel, weighted voting is performed and the weighted voting value corresponding to each disparity value is calculated; the best disparity value of the first pixel is obtained from the weighted voting values.
  • this method calculates the matching cost corresponding to each disparity value based on the cost aggregation functions.
  • left-right consistency checks are performed on the matching costs one by one, so that each disparity value that passes the check
  • is specially weighted. This helps obtain accurate weighted voting results for each disparity value, and through these results a more robust cost aggregation and a best disparity value of higher accuracy for each pixel.
  • the technical solution provided by the present application can effectively solve the problem of mismatching during stereo matching, which is conducive to accurately finding matching corresponding points in different viewpoint images, and improving the accuracy of stereo matching.
  • Figure 1 is a flowchart of a binocular vision stereo matching method based on extreme value checking and weighted voting in an embodiment;
  • Figure 2 is a flowchart of the left-right consistency check;
  • Figure 3 is a flowchart of a binocular vision stereo matching method in another embodiment;
  • Figure 4 is a flowchart of an image visual stereo matching method in an embodiment;
  • Figure 5 is a structural diagram of a binocular vision stereo matching system based on extreme value checking and weighted voting in an embodiment.
  • the terms "connected" and "connection" mentioned in this application include both direct and indirect connection unless otherwise specified.
  • a key problem is to find the matching points in the left and right images so as to obtain the horizontal position difference of the corresponding pixels in the two images, also called the parallax, from which the depth of each pixel can be further calculated.
  • pixels that are not at the same depth may have the same color, texture, gradient, and so on, which often leads to mismatches during stereo matching; this further leads to larger errors in the disparity calculation, which greatly affects the application of binocular vision in depth measurement.
  • the pixels in the area surrounding a pixel are generally used to estimate that pixel. Because the surrounding pixels may not lie at the same depth as the central pixel, the existing methods are still not very robust.
  • the fast stereo matching algorithm is mainly realized through cost matrix calculation, cost aggregation, WTA (winner-take-all) selection, post-processing, and other steps.
  • on the basis of existing methods, this application applies extreme value checking and weighted voting (a non-WTA technique): weighted voting is performed over the multiple minima of each cost function, and the voting result determines which disparity value is taken as the best disparity estimate for binocular stereo matching.
  • the technical solution provided by this application performs a cost aggregation operation with each cost function and obtains the matching cost corresponding to each disparity value from the cost aggregation function. Starting from the smallest matching cost, a left-right consistency check is performed on each matching cost one by one, and the disparity values that pass the check are specially weighted. This helps obtain accurate weighted voting results for each disparity value, and through these results a more robust cost aggregation and a best disparity value of higher accuracy for each pixel.
  • the technical solution provided by the present application can effectively solve the problem of mismatching during stereo matching, which is conducive to accurately finding matching corresponding points in different viewpoint images, and improving the accuracy of stereo matching.
  • This application discloses a binocular vision stereo matching method based on extreme value checking and weighted voting, which mainly includes steps S110-S150, which are described as follows.
  • Step S110 obtaining step: obtaining images under two viewpoints.
  • the stereo matching object is captured by a binocular camera; since the binocular camera constitutes two capture viewpoints, one frame of image is obtained at each of the two viewpoints.
  • Step S120 aggregating step: performing cost aggregation on the first pixel in one of the images according to multiple preset cost functions and multiple preset disparity values to obtain cost aggregation functions corresponding to each cost function.
  • the first pixel is any pixel in the image.
  • each cost function computes, for each disparity value, a function value at the first pixel; the function values of the disparity values at the first pixel are aggregated to obtain the cost aggregation function corresponding to that cost function.
  • the cost functions in this application include, but are not limited to, cost functions based on color, gradient, rank, NCC, or mutual information. For the color-based cost function, refer to the technical literature "Color-based cost function reference [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994, Vol. 16(9), pp. 920-932"; for the gradient-based cost function, refer to the technical document "Yang Xin, An image matching algorithm based on the gradient operator [J]."
  • the disparity value in this embodiment is any value in the range [0, d_max], where d_max represents the maximum allowable disparity value, and its selection is set by the user.
  • cost_left(0,...,d_max) = cost_volume_left(y, x, 0,...,d_max)
  • where cost_left() represents the cost aggregation function corresponding to the first pixel (y, x) in the left image, and
  • cost_volume_left() represents the cost function that performs the cost aggregation operation on the left image
  • the first pixel (y, x) in one of the images (such as the left image) is selected for cost aggregation, expressed as
  • cost_left_1(0,...,d_max) = cost_volume_left_1(y, x, 0,...,d_max)
  • cost_left_2(0,...,d_max) = cost_volume_left_2(y, x, 0,...,d_max)
  • cost_left_i(0,...,d_max) = cost_volume_left_i(y, x, 0,...,d_max)
  • cost_left_n(0,...,d_max) = cost_volume_left_n(y, x, 0,...,d_max)
  • cost_left_i() represents the i-th cost aggregation function corresponding to the pixel (y, x) in the left image, and
  • cost_volume_left_i() represents the i-th cost function used for the cost aggregation operation on the left image. Since each cost function corresponds to one cost aggregation function, n cost aggregation functions are obtained in this way.
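As an illustration of the aggregation step S120, the following minimal Python sketch builds one cost aggregation curve per cost function for a single left-image pixel (y, x). The two cost functions used here (absolute intensity difference and horizontal gradient difference) and the square aggregation window are illustrative assumptions, not the specific cost functions prescribed by this application:

```python
def intensity_cost(left, right, y, x, d):
    # Illustrative color/intensity cost: match left (y, x) against right (y, x - d).
    if x - d < 0:
        return 255.0  # out of range -> maximal cost
    return abs(left[y][x] - right[y][x - d])

def gradient_cost(left, right, y, x, d):
    # Illustrative gradient cost: compare horizontal gradients of the two pixels.
    if x < 1 or x - d < 1:
        return 255.0
    gl = left[y][x] - left[y][x - 1]
    gr = right[y][x - d] - right[y][x - d - 1]
    return abs(gl - gr)

def aggregate(cost_fn, left, right, y, x, d_max, radius=1):
    """One cost aggregation curve cost_left_i(0..d_max) at pixel (y, x):
    for each disparity, sum the per-pixel cost over a square window."""
    h, w = len(left), len(left[0])
    curve = []
    for d in range(d_max + 1):
        total = 0.0
        for yy in range(max(0, y - radius), min(h, y + radius + 1)):
            for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                total += cost_fn(left, right, yy, xx, d)
        curve.append(total)
    return curve
```

Running `aggregate` once per cost function yields the n cost aggregation curves cost_left_1 through cost_left_n for the pixel.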
  • Step S130 a calculation step: obtain the matching cost corresponding to each disparity value at the first pixel according to each cost aggregation function.
  • this step S130 may include steps S131-S132, which are respectively described as follows.
  • Step S131: For each cost aggregation function, calculate the minimum values of the cost aggregation function over the disparity values at the first pixel, and use each minimum value as the matching cost corresponding to its disparity value at the first pixel.
  • the matching costs corresponding to the disparity values are arranged in ascending order: min_l_cost_1, min_l_cost_2, ..., min_l_cost_j, ..., min_l_cost_h.
  • each matching cost and its corresponding disparity value in the ascending sequence can be matched one by one, respectively expressed as:
  • the subscript j represents the sequence number of each matching cost of the matching cost group in the ascending sequence, j ∈ (1, 2, ..., h), where h represents the number of matching costs at the first pixel (y, x) under any cost function
  • Step S132 Obtain the first matching cost group according to the matching cost corresponding to each disparity value at the first pixel under each cost aggregation function.
  • a plurality of matching costs are selected, according to a preset rule, from the matching costs corresponding to the disparity values at the first pixel under each cost aggregation function, to obtain the first matching cost group.
  • the preset rule here includes: arranging the matching costs corresponding to the disparity values at the first pixel in ascending order, and determining from the arrangement the matching costs less than or equal to a noise threshold as the selected matching costs,
  • where the noise threshold is the sum of the smallest matching cost in the arrangement and a preset noise parameter ε.
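The selection rule just described (ascending sort, keep every cost within the noise parameter ε of the minimum) can be sketched as a small hypothetical helper; the name `candidate_group` and the `eps` parameter are illustrative:

```python
def candidate_group(curve, eps):
    """Arrange the matching costs of one cost aggregation curve in ascending
    order and keep every disparity whose cost is at most the noise threshold
    (smallest cost + eps).  Returns (disparity, cost) pairs in ascending cost
    order -- one first matching cost group."""
    ranked = sorted(range(len(curve)), key=lambda d: curve[d])
    threshold = curve[ranked[0]] + eps
    return [(d, curve[d]) for d in ranked if curve[d] <= threshold]
```

For example, `candidate_group([9.0, 2.0, 2.5, 8.0, 2.2], 0.5)` keeps the disparities 1, 4, and 2, whose costs lie within 0.5 of the minimum cost 2.0.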
  • the matching costs of the disparity values at the pixel (y, x) of one of the images (such as the left image) are calculated under the n cost aggregation functions, and each can be obtained according to step S131.
  • the matching costs obtained under each cost aggregation function are arranged in ascending order, and several matching costs and their corresponding disparity values are taken from each ascending sequence according to the preset rule; each resulting first
  • matching cost group can be expressed as
  • the 1st first matching cost group: (min_left_1,1, min_l_cost_1,1), (min_left_1,2, min_l_cost_1,2), ..., (min_left_1,j, min_l_cost_1,j), ..., (min_left_1,m1, min_l_cost_1,m1);
  • the 2nd first matching cost group: (min_left_2,1, min_l_cost_2,1), (min_left_2,2, min_l_cost_2,2), ..., (min_left_2,j, min_l_cost_2,j), ..., (min_left_2,m2, min_l_cost_2,m2);
  • the i-th first matching cost group: (min_left_i,1, min_l_cost_i,1), (min_left_i,2, min_l_cost_i,2), ..., (min_left_i,j, min_l_cost_i,j), ..., (min_left_i,mi, min_l_cost_i,mi);
  • the n-th first matching cost group: (min_left_n,1, min_l_cost_n,1), (min_left_n,2, min_l_cost_n,2), ..., (min_left_n,j, min_l_cost_n,j), ..., (min_left_n,mn, min_l_cost_n,mn);
  • the value of the subscript m1 in the first matching cost group is determined by the minimum matching cost min_l_cost_1,1 and the noise parameter ε, so that every min_l_cost_1,j up to min_l_cost_1,m1 is less than or equal to min_l_cost_1,1 + ε, while min_l_cost_1,m1+1 is greater than min_l_cost_1,1 + ε; similarly, the value of the subscript mi in the i-th matching cost group is determined by the minimum matching cost min_l_cost_i,1 and the noise parameter ε, so that every min_l_cost_i,j up to min_l_cost_i,mi is less than or equal to min_l_cost_i,1 + ε, while min_l_cost_i,mi+1 is greater than min_l_cost_i,1 + ε.
  • the noise parameter ⁇ is a parameter that measures the level of image noise, and can be specifically set according to the left image obtained in step S110, and there is no limitation here.
  • Step S140, the verification step: perform a left-right consistency check on the disparity value corresponding to each matching cost, and if the disparity value corresponding to a matching cost passes the left-right consistency check, give that disparity value a special weight.
  • this step S140 may include steps S141-S148, which are described as follows.
  • Step S141 For the disparity value corresponding to each matching cost in the first matching cost group, obtain a second pixel point corresponding to the first pixel point in another image according to the disparity value corresponding to the matching cost.
  • when the disparity value min_left_i,j corresponding to the matching cost min_l_cost_i,j needs to undergo the left-right consistency check, the second pixel (y, x - min_left_i,j) corresponding to the first pixel (y, x) is obtained in the right image according to min_left_i,j.
  • Step S142 Perform cost aggregation on the second pixel according to the cost function corresponding to the matching cost in step S141 and multiple preset disparity values to obtain a cost aggregation function corresponding to the second pixel.
  • cost_right(0,...,d_max) = cost_volume_right(y, x - min_left_i,j, 0,...,d_max)
  • where cost_right() represents the cost aggregation function corresponding to the second pixel (y, x - min_left_i,j) in the right image, and
  • cost_volume_right() represents the cost function that performs the cost aggregation operation on the right image
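Steps S141-S142 can be sketched as follows. The single-pixel intensity cost used here stands in for the full cost aggregation at the second pixel and is an illustrative assumption, not the application's prescribed cost function:

```python
def intensity_cost_right(left, right, y, xr, d):
    # Illustrative right-to-left cost: match right (y, xr) against left (y, xr + d).
    if xr + d >= len(left[0]):
        return 255.0  # out of range -> maximal cost
    return abs(right[y][xr] - left[y][xr + d])

def right_cost_curve(left, right, y, x_left, d_checked, d_max):
    """Locate the second pixel (y, x_left - d_checked) implied by the disparity
    under check, then score every disparity 0..d_max at that pixel."""
    xr = x_left - d_checked
    return [intensity_cost_right(left, right, y, xr, d) for d in range(d_max + 1)]
```

If the disparity under check is correct, the minimum of this right-image curve should fall at (or near) the same disparity, which is exactly what the left-right comparison of step S144 tests.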
  • Step S143 Obtain the matching cost corresponding to each disparity value at the second pixel according to the cost aggregation function corresponding to the second pixel, and obtain a second matching cost group.
  • the matching cost of each disparity value at the second pixel (y, x - min_left_i,j) under the cost aggregation function cost_right(0,...,d_max) is calculated and denoted min_r_cost_i,k; then,
  • the matching costs corresponding to the disparity values (0,...,d_max) are arranged in ascending order: min_r_cost_i,1, min_r_cost_i,2, ..., min_r_cost_i,k, ..., min_r_cost_i,h.
  • the second matching cost group can be expressed as: (min_right_i,1, min_r_cost_i,1), ..., (min_right_i,k, min_r_cost_i,k), ..., (min_right_i,mi, min_r_cost_i,mi),
  • where the disparity value corresponding to the matching cost min_r_cost_i,k is min_right_i,k.
  • a method similar to that for the first matching cost group is adopted to obtain the second matching cost group, specifically: according to a preset rule, several matching costs are selected from the matching costs corresponding to the disparity values at the second pixel to obtain the second matching cost group. The preset rule here includes: arranging the matching costs corresponding to the disparity values at the second pixel in ascending order, and determining from the arrangement the matching costs less than or equal to a noise threshold as the selected matching costs,
  • where the noise threshold is the sum of the smallest matching cost in the arrangement and the preset noise parameter ε.
  • the value of the subscript mi in the second matching cost group is determined by the minimum matching cost min_r_cost_i,1 and the noise parameter ε, so that every min_r_cost_i,k up to min_r_cost_i,mi is less than or equal to min_r_cost_i,1 + ε, while min_r_cost_i,mi+1 is greater than min_r_cost_i,1 + ε.
  • the noise parameter ⁇ is a parameter that measures the level of image noise, and can be specifically set according to the right image obtained in step S110, and there is no limitation here.
  • Step S144 The disparity value corresponding to each matching cost in the first matching cost group is sequentially compared with the disparity value corresponding to each matching cost in the second matching cost group.
  • the left-right consistency check can be written as |min_left_i,j - min_right_i,k| < δ, where
  • i is the sequence number of the cost aggregation function,
  • j is the sequence number of each matching cost in the first matching cost group in ascending order,
  • k is the sequence number of each matching cost in the second matching cost group in ascending order, and
  • δ ∈ {2,...,5} is the preset check threshold
  • the meaning of sequential comparison is: first, the disparity value min_left_i,1 corresponding to the smallest matching cost is compared with min_right_i,1, min_right_i,2, ..., min_right_i,k, ..., min_right_i,mi in turn; then the disparity value min_left_i,2 corresponding to the next-smallest matching cost is compared with min_right_i,1, min_right_i,2, ..., min_right_i,k, ..., min_right_i,mi in turn; and so on, until finally min_left_i,mi is compared with min_right_i,1, min_right_i,2, ..., min_right_i,k, ..., min_right_i,mi.
  • LRC: left-right consistency detection.
  • the specific method of using LRC is: obtain the left and right disparity maps from the left and right input images respectively; take the disparity value of a point in the left image, find the corresponding point in the right image, and obtain the disparity of that corresponding point; if the absolute difference between the two disparities is greater than the threshold, the identified point in the left image is marked as an occlusion point.
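A minimal sketch of the check over two candidate groups. The pass/fail flag recorded here plays the role of the special-weight marker used later in the voting step; the data layout (lists of (disparity, cost) pairs) is an assumption:

```python
def lrc_pass(left_candidates, right_candidates, delta):
    """Sequentially compare each candidate left disparity (step S144) with
    every candidate right disparity; a left disparity passes the left-right
    consistency check when some right disparity differs from it by less than
    the check threshold delta.  Returns (disparity, cost, passed) triples."""
    passed = []
    for dl, cost in left_candidates:
        ok = any(abs(dl - dr) < delta for dr, _ in right_candidates)
        passed.append((dl, cost, ok))
    return passed
```

A disparity such as 2 in the left group, matched by a right candidate at 2 or 3, passes with δ = 2, while an isolated candidate far from every right disparity fails.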
  • step S145: it is judged whether the absolute value of the comparison result is less than the preset check threshold; if yes, go to step S146, otherwise go to step S148.
  • Step S146: determine that the disparity value min_left_i,j corresponding to the matching cost min_l_cost_i,j in the first matching cost group passes the left-right consistency check.
  • Step S147: if the disparity value min_left_i,j corresponding to the matching cost min_l_cost_i,j in the first matching cost group passes the left-right consistency check, give min_left_i,j a special weight.
  • Step S148: determine that the disparity value min_left_i,j corresponding to the matching cost min_l_cost_i,j in the first matching cost group does not pass the left-right consistency check.
  • Step S150 weighted voting step: weighted voting is performed according to the special weighted result of the disparity value corresponding to each matching cost and the matching cost corresponding to each disparity value at the first pixel point, and the weighted vote corresponding to each disparity value is calculated Value; obtain the best disparity value of the first pixel from the weighted voting value corresponding to each disparity value.
  • step S150 may include steps S151-S153, which are described as follows.
  • Step S151 Obtain each matching cost in the first matching cost group, a disparity value corresponding to each matching cost, and a special weighted result of the disparity value corresponding to each matching cost.
  • Step S152 Perform weighted voting according to each matching cost in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighted result of the disparity value corresponding to each matching cost, and calculate each disparity value The corresponding weighted vote value.
  • weighted voting is performed over each matching cost min_l_cost_i,j and its corresponding disparity value min_left_i,j in the first matching cost groups, and the weighted voting value corresponding to a disparity value d is expressed as
  • i is the sequence number of the cost aggregation function
  • j is the sequence number of each matching cost in the first matching cost group in ascending order
  • min_l_cost_i,j is the j-th matching cost in the ascending sequence of the i-th cost aggregation function, min_left_i,j is
  • the disparity value corresponding to that matching cost, ex_left_right_i,j is the special weighting result of the disparity value min_left_i,j, w_1() and w_2() are curve functions for modifying the weighted cost, d ∈ (0,...,d_max), and λ is the preset interval threshold, λ ∈ {3,...,10}.
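The text names the ingredients (w_1, w_2, the special weighting result, and the interval threshold λ) without spelling out the exact voting formula, so the sketch below is one plausible reading: each candidate votes for every disparity within λ of its own disparity, weighted by a decreasing function of its matching cost, a boost for passing the LRC check, and a triangular kernel favoring the candidate's own disparity. The specific choices of w1, w2, and the kernel are illustrative assumptions:

```python
import math

def w1(cost):
    # Illustrative curve function: smaller matching cost -> larger weight.
    return math.exp(-cost)

def w2(passed_lrc):
    # Illustrative special weight for disparities that passed the LRC check.
    return 2.0 if passed_lrc else 1.0

def weighted_vote(candidates, d_max, lam):
    """candidates: (disparity, cost, passed_lrc) triples pooled over all first
    matching cost groups.  Returns the best disparity (argmax of the vote
    curve, per step S153) and the vote curve itself."""
    votes = [0.0] * (d_max + 1)
    for dl, cost, ok in candidates:
        for d in range(max(0, dl - lam), min(d_max, dl + lam) + 1):
            kernel = 1.0 - abs(d - dl) / (lam + 1)  # triangular falloff
            votes[d] += w1(cost) * w2(ok) * kernel
    best = max(range(d_max + 1), key=lambda d: votes[d])
    return best, votes
```

A low-cost candidate that passed the check (e.g. disparity 3) then outvotes an isolated unchecked candidate, illustrating how the special weighting steers the result.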
  • Step S153 comparing the weighted voting values corresponding to the respective disparity values, and determining the disparity value corresponding to the largest weighted voting value in the comparison result as the optimal disparity value of the first pixel.
  • the binocular vision stereo matching method disclosed in the present application further includes a statistical learning step S160.
  • after the verification step S140, either the weighted voting step S150 or the statistical learning step S160 is entered according to the user's input instruction.
  • the statistical learning step will be described below.
  • Step S160 statistical learning step: obtain the best disparity value of the first pixel through a statistical learning model.
  • step S160 may include steps S161-S163, which are described as follows.
  • Step S161 Obtain each matching cost in the first matching cost group, a disparity value corresponding to each matching cost, and a special weighted result of the disparity value corresponding to each matching cost.
  • Step S162 Input each matching cost in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighted result of the disparity value corresponding to each matching cost into an optimal disparity statistical learning model, and calculate Obtain statistical learning values;
  • the statistical learning value is calculated and expressed as
  • t(d) = f_model(min_left_i,j, min_l_cost_i,j, ex_left_right_i,j)
  • ex_left_right_i,j is the special weighting result of the disparity value min_left_i,j, and
  • f_model() is the best-disparity statistical learning model, including but not limited to an SVM, ANN, or CNN statistical learning model; d ∈ [0, d_max].
  • f_model() in formula (3) represents a machine learning model, which is trained from the training set.
  • Each element in the training set is a triple (min_left_i,j, min_l_cost_i,j, ex_left_right_i,j),
  • formally expressed as: {(min_left_i,j, min_l_cost_i,j, ex_left_right_i,j)}.
  • when training is completed, the statistical learning model is fixed and becomes a function expression with fixed parameters, where the parameters are trained from the data in the training set. At this point, given any pair of left and right images, the statistical learning model can be applied to the input data (min_left_i,j, min_l_cost_i,j, ex_left_right_i,j) of any pixel of the left image to obtain
  • the disparity value d that is the best disparity value for that pixel.
  • Step S163 Obtain the best disparity value of the first pixel according to the statistical learning value.
  • the statistical learning value t(d) is obtained according to step S162, and the disparity value d therein is obtained as the best disparity value of the first pixel (y, x).
  • the statistical learning step S160 provided in this application is another method for obtaining the optimal disparity value of the first pixel. It differs from the weighted voting method of step S150 but can achieve the same technical effect, and this statistical learning step helps enrich the technical solutions of this application.
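A sketch of how step S160 plugs in a trained model. Here `f_model` is any callable standing in for the SVM/ANN/CNN mentioned above, and taking the highest-scoring candidate is an assumption about how the statistical learning value t(d) is turned into a single disparity:

```python
def best_disparity_by_model(f_model, candidates):
    """Feed each (disparity, matching cost, special weight) triple of the
    first matching cost group to f_model and keep the disparity with the
    highest statistical learning value t(d)."""
    scored = [(f_model(d, cost, ex), d) for d, cost, ex in candidates]
    return max(scored)[1]
```

For instance, with a toy stand-in model `lambda d, cost, ex: ex - cost`, the candidate with a high special weight and a low matching cost wins.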
  • In this embodiment, a binocular vision stereo matching system 30 based on extreme value checking and weighted voting is correspondingly disclosed.
  • The system includes a memory 301 and a processor 302, where the memory 301 is used to store a program, and the processor 302 is used to execute the program stored in the memory 301 to implement the methods described in steps S110-S160.
  • On the basis of the method of Embodiment 1, this embodiment also provides an image vision stereo matching method. Please refer to FIG. 4.
  • The image vision stereo matching method includes steps S210-S220, described below.
  • Step S210: Acquire images of at least two viewpoints.
  • Multiple cameras can be used to capture images of the object to be matched, so that images from multiple viewpoints are obtained.
  • Step S220: Perform stereo matching on each pixel in one of the images using the binocular vision stereo matching method described in Embodiment 1, obtaining the best disparity value of each pixel.
  • The binocular vision stereo matching method of Embodiment 1 obtains the best disparity value of one pixel in the image, and the matching corresponding point in the other image can be found from that best disparity value. The method can then be applied to compute the best disparity value of every pixel in the image, realizing one-to-one stereo matching of pixels between two or more images and thereby achieving image stereo matching.
  • The program can be stored in a computer-readable storage medium.
  • The storage medium may include: read-only memory, random access memory, a magnetic disk, an optical disc, a hard disk, etc.
  • The computer executes the program to realize the above-mentioned functions.
  • For example, the program is stored in the memory of the device, and when the program in the memory is executed by the processor, all or part of the above functions can be realized.
  • The program can also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash drive, or a removable hard disk, and saved to the local device by downloading or copying.

Abstract

A binocular vision stereo matching method based on extreme value checking and weighted voting. The method includes: acquiring images from two viewpoints; performing cost aggregation on a first pixel in one of the images according to a plurality of preset cost functions and a plurality of preset disparity values, obtaining a cost aggregation function corresponding to each cost function; obtaining, from each cost aggregation function, the matching cost corresponding to each disparity value at the first pixel; performing a left-right consistency check on the disparity value corresponding to each matching cost, and if that disparity value passes the check, applying a special weight to it; performing weighted voting according to the special weighting result of each disparity value and the matching costs of the disparity values at the first pixel, computing a weighted voting value for each disparity value; and obtaining the best disparity value of the first pixel from the weighted voting values of the disparity values.

Description

A binocular vision stereo matching method based on extreme value checking and weighted voting. Technical Field
The present invention relates to the technical field of binocular vision, and in particular to a binocular vision stereo matching method based on extreme value checking and weighted voting.
Background Art
As is well known, light from a scene is collected by the precise imaging system of the human eye and sent through the nervous system to the brain, where billions of neurons process it in parallel to produce real-time, high-resolution, accurate depth perception. This greatly improves human adaptability to the environment and enables many complex activities, such as walking, sports, driving vehicles, and conducting scientific experiments. Computer vision is the discipline that uses computers to simulate the human visual system, with the goal of recovering a 3D image from two acquired planar images. At present, computer stereo vision still falls far short of human binocular vision, so its study remains a very active field.
Binocular stereo vision is an important form of computer vision. Based on the parallax principle, it uses imaging devices to acquire two images of a measured object from different positions and obtains the object's three-dimensional geometric information by computing the positional offset between corresponding image points. It thus processes the real world by simulating the human visual system. Research on stereo matching can greatly enhance the ability of computers or robots to perceive their environment, allowing robots to adapt better, become more intelligent, and serve people better. After years of development, binocular stereo vision has been applied in fields such as robot vision, aerial surveying and mapping, reverse engineering, military applications, medical imaging, and industrial inspection.
At present, binocular stereo vision fuses the images obtained by two imaging devices and observes the differences between them, allowing a computer to obtain accurate depth information, establish correspondences between features, and associate the projections of the same physical point in space across different images; this difference is usually called disparity. However, the most important yet very difficult problem in binocular stereo vision is the stereo matching problem, i.e. finding matching corresponding points in images from different viewpoints.
To find matching corresponding points across viewpoint images, a method constrained by minimal global matching error and edge smoothness can be used, but its computational cost is enormous, and real-time computation on existing processors is nearly impossible. Another approach estimates a pixel from the pixels in its surrounding region, e.g. a rectangular window, an adaptively grown region, or a minimum spanning tree; however, within such a region, the weighting of pixel matching costs can still only be computed from features such as color (intensity), texture, and gradient, which have no direct relationship to disparity. In practice, therefore, these methods still lack robustness.
Summary of the Invention
The technical problem mainly solved by the present invention is how to find matching corresponding points in images from different viewpoints, so as to improve the accuracy and robustness of binocular stereo matching. To solve this technical problem, the present application provides a binocular vision stereo matching method based on extreme value checking and weighted voting.
According to a first aspect, an embodiment provides a binocular vision stereo matching method based on weighted voting, comprising the following steps:
an acquisition step: acquiring images from two viewpoints;
an aggregation step: performing cost aggregation on a first pixel in one of the images according to a plurality of preset cost functions and a plurality of preset disparity values, to obtain a cost aggregation function corresponding to each of the cost functions, the first pixel being any pixel in that image;
a calculation step: obtaining, from each of the cost aggregation functions, the matching cost corresponding to each disparity value at the first pixel;
a checking step: performing a left-right consistency check on the disparity value corresponding to each of the matching costs, and if the disparity value corresponding to a matching cost passes the left-right consistency check, applying a special weight to that disparity value;
a weighted voting step: performing weighted voting according to the special weighting result of the disparity value corresponding to each of the matching costs and the matching costs corresponding to the disparity values at the first pixel, computing a weighted voting value for each disparity value; and obtaining the best disparity value of the first pixel from the weighted voting values corresponding to the disparity values.
In the calculation step, obtaining from each cost aggregation function the matching cost corresponding to each disparity value at the first pixel includes: for each cost aggregation function, computing the local minima of that cost aggregation function over the disparity values at the first pixel, taking each local minimum as the matching cost of the corresponding disparity value at the first pixel; and obtaining a first matching cost group from the matching costs corresponding to the disparity values at the first pixel under each cost aggregation function.
In the checking step, performing the left-right consistency check on the disparity value corresponding to each matching cost includes: for the disparity value corresponding to each matching cost in the first matching cost group, obtaining from that disparity value the second pixel in the other image corresponding to the first pixel; performing cost aggregation on the second pixel according to the cost function corresponding to that matching cost and the plurality of preset disparity values, obtaining the cost aggregation function corresponding to the second pixel; obtaining from that cost aggregation function the matching costs corresponding to the disparity values at the second pixel, yielding a second matching cost group; comparing the disparity value corresponding to each matching cost in the first matching cost group in turn with the disparity values corresponding to the matching costs in the second matching cost group, and, if the absolute value of the comparison result is smaller than a preset check threshold, determining that the disparity value corresponding to that matching cost in the first matching cost group passes the left-right consistency check.
In the weighted voting step, computing the weighted voting value of each disparity value includes: obtaining the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value; and performing weighted voting according to these quantities to compute the weighted voting value of each disparity value.
A statistical learning step is further included after the checking step, the statistical learning step comprising: obtaining the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value; inputting them into an optimal-disparity statistical learning model; and obtaining the best disparity value of the first pixel from the computed statistical learning value.
According to a second aspect, an embodiment provides an image vision stereo matching method, comprising:
acquiring images from at least two viewpoints;
performing stereo matching on each pixel in one of the images by the binocular vision stereo matching method of the first aspect, obtaining the best disparity value of each pixel.
According to a third aspect, an embodiment provides a binocular vision stereo matching system based on extreme value checking and weighted voting, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to implement the method of the first or second aspect.
According to a fourth aspect, an embodiment provides a computer-readable storage medium comprising a program executable by a processor to implement the method of the first or second aspect.
The beneficial effects of the present application are:
According to the binocular vision stereo matching method based on extreme value checking and weighted voting of the above embodiments, the method includes: acquiring images from two viewpoints; performing cost aggregation on a first pixel in one of the images according to a plurality of preset cost functions and preset disparity values, obtaining the cost aggregation function of each cost function; obtaining from each cost aggregation function the matching cost of each disparity value at the first pixel; performing a left-right consistency check on the disparity value of each matching cost, applying a special weight to those that pass; performing weighted voting according to the special weighting results and the matching costs at the first pixel, computing a weighted voting value for each disparity value; and obtaining the best disparity value of the first pixel from the weighted voting values. On the one hand, starting from the smallest matching cost obtained from the cost aggregation functions, the method checks each matching cost for left-right consistency one by one, so that disparity values passing the check receive a special weight; this helps obtain accurate weighted voting results for the disparity values, yielding highly robust cost aggregation and a highly accurate best disparity value for each pixel. On the other hand, the technical solution provided by this application effectively solves the problem of mismatches during stereo matching, helps find matching corresponding points accurately in images from different viewpoints, and improves the precision of stereo matching.
Brief Description of the Drawings
FIG. 1 is a flowchart of a binocular vision stereo matching method based on extreme value checking and weighted voting in an embodiment;
FIG. 2 is a flowchart of the left-right consistency check;
FIG. 3 is a flowchart of a binocular vision stereo matching method in another embodiment;
FIG. 4 is a flowchart of an image vision stereo matching method in an embodiment;
FIG. 5 is a structural diagram of a binocular vision stereo matching system based on extreme value checking and weighted voting in an embodiment.
Detailed Description of the Embodiments
The present invention is further described in detail below through specific embodiments in conjunction with the drawings, in which similar elements in different embodiments use associated similar reference numerals. In the following embodiments, many details are described so that the present application can be better understood. However, those skilled in the art will readily recognize that some of these features may be omitted in different cases, or may be replaced by other elements, materials, or methods. In some cases, certain operations related to the present application are not shown or described in the specification, to avoid the core of the application being overwhelmed by excessive description; for those skilled in the art, a detailed description of these operations is not necessary, as they can be fully understood from the description in the specification and general technical knowledge in the field.
In addition, the features, operations, or characteristics described in the specification may be combined in any appropriate manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be reordered or adjusted in ways obvious to those skilled in the art. Therefore, the various orders in the specification and drawings are only for clearly describing a particular embodiment and do not imply a required order, unless it is otherwise stated that a certain order must be followed.
The ordinal numbers assigned to components herein, such as "first" and "second", are used only to distinguish the described objects and have no sequential or technical meaning. "Connection" and "coupling" in this application, unless otherwise specified, include both direct and indirect connections (couplings).
In stereo matching for binocular vision, a key problem is finding matching points in the left and right images to obtain the horizontal position difference of corresponding pixels in the two images, also called disparity, from which the depth of the pixel can further be computed.
Pixels not at the same depth may well have the same color, texture, and gradient, which often causes mismatches during stereo matching, in turn producing large errors in the computed disparity and greatly limiting the use of binocular vision for depth measurement. To overcome this, existing stereo matching methods for binocular images generally estimate a pixel from the pixels in its surrounding region; since those surrounding pixels may not lie at the same depth as the central pixel, existing methods remain rather non-robust. Typically, fast stereo matching algorithms proceed through cost volume computation, cost aggregation, WTA (winner-take-all), and post-processing. Although WTA is a fast and efficient way to obtain disparity, it is susceptible to noise and other interference: at the point corresponding to the true disparity, the matching cost may fail to reach the minimum, causing severe disparity estimation errors, a situation especially prominent in outdoor scene video. To overcome this defect and improve the robustness of the matching cost, the present application builds on existing methods and uses extreme value checking and weighted voting (rather than WTA) to cast weighted votes over the multiple local minima of each cost function, deciding from the voting result which specific disparity value serves as the best disparity estimate for binocular stereo matching. The technical solution provided by this application performs cost aggregation with each cost function and, starting from the smallest matching cost obtained from each cost aggregation function, performs a left-right consistency check on each matching cost one by one, so that disparity values passing the check receive a special weight. This helps obtain accurate weighted voting results for the disparity values, yielding highly robust cost aggregation and a highly accurate best disparity value for each pixel. Furthermore, the technical solution provided by this application effectively solves the problem of mismatches during stereo matching, helps find matching corresponding points accurately in images from different viewpoints, and improves the precision of stereo matching.
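For contrast with the voting scheme, the conventional WTA step mentioned above reduces to a plain argmin over the aggregated cost curve. A minimal sketch (the cost values below are made up for illustration) shows how a single noisy minimum corrupts its result:

```python
def wta(cost_curve):
    """Winner-take-all: pick the disparity with the globally smallest
    aggregated cost. Fast, but a single noisy minimum corrupts the result."""
    best_d, best_c = 0, cost_curve[0]
    for d, c in enumerate(cost_curve):
        if c < best_c:
            best_d, best_c = d, c
    return best_d

clean = [5.0, 2.0, 4.0, 1.5, 3.0]
noisy = clean[:]
noisy[4] = 0.9          # noise creates a spurious global minimum at d = 4
d_clean = wta(clean)    # true minimum at d = 3
d_noisy = wta(noisy)    # WTA is fooled by the noise spike
```

This fragility at a single minimum is what motivates voting over several local minima instead.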
Embodiment 1:
Referring to FIG. 1, the present application discloses a binocular vision stereo matching method based on extreme value checking and weighted voting, which mainly includes steps S110-S150, described below.
Step S110, the acquisition step: acquire images from two viewpoints. In an embodiment, a binocular camera images the object to be matched; since the binocular camera forms two imaging viewpoints, one frame of image is obtained at each of the two viewpoints.
Step S120, the aggregation step: perform cost aggregation on a first pixel in one of the images according to a plurality of preset cost functions and a plurality of preset disparity values, obtaining the cost aggregation function corresponding to each cost function; the first pixel is any pixel in that image.
In an embodiment, for each cost function, the function value of each disparity value at the first pixel is computed under that cost function, and the function values of the disparity values at the first pixel are aggregated to obtain the cost aggregation function corresponding to that cost function.
It should be noted that the cost functions in this application include but are not limited to cost functions based on color, gradient, rank, NCC, or mutual information. For a color-based cost function, see IEEE Transactions on Pattern Analysis and Machine Intelligence, 1994, Vol. 16(9), pp. 920-932; for a gradient-based cost function, see Yang Xin, "An image matching algorithm based on gradient operators", Acta Electronica Sinica, 1999(10): 30-33; for a rank-based cost function, see "A constraint to improve the reliability of stereo matching using the rank transform", 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing; for an NCC cost function, see the blog post "Image processing: template matching recognition based on NCC" at https://blog.csdn.net/jia20003/article/details/48852549, which treats NCC as a statistical algorithm for computing the correlation of two sets of sample data. Since all of the listed cost functions belong to the prior art, they are not described one by one here. Moreover, those skilled in the art should understand that other kinds of cost functions may appear as technology develops; such future cost functions can still be applied to the technical solution disclosed in this embodiment without limiting it.
It should be noted that the disparity value in this embodiment is any value in the range [0, d max], where d max denotes the maximum allowed disparity value, set by the user.
For example, according to one existing cost function and a plurality of preset disparity values (e.g. 0, …, d max), cost aggregation is performed for the first pixel (y, x) in one of the images (e.g. the left image), expressed as
cost_left(0,…,d max)=cost_volume_left(y,x,0,…,d max)
where cost_left() denotes the cost aggregation function of the first pixel (y, x) in the left image, and cost_volume_left() denotes the cost function used for cost aggregation on the left image.
By analogy, in this embodiment, cost aggregation is performed for the first pixel (y, x) in one of the images (e.g. the left image) according to the n cost functions and the plurality of preset disparity values (e.g. 0, …, d max), expressed respectively as
cost_left 1(0,…,d max)=cost_volume_left 1(y,x,0,…,d max)
cost_left 2(0,…,d max)=cost_volume_left 2(y,x,0,…,d max)
...
cost_left i(0,…,d max)=cost_volume_left i(y,x,0,…,d max)
...
cost_left n(0,…,d max)=cost_volume_left n(y,x,0,…,d max)
where the subscript i denotes the index of each cost function, i ∈ {1,2,…,n}; cost_left() denotes the cost aggregation function of pixel (y, x) in the left image, and cost_volume_left() denotes the cost function used for cost aggregation on the left image. Since each cost function corresponds to one cost aggregation function, n cost aggregation functions are obtained.
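The per-pixel aggregation above can be sketched as follows. This is a hedged illustration, not the patent's exact cost functions: `abs_diff` and `sq_diff` stand in for the color/gradient/rank/NCC costs, and `cost_volume_left` is an assumed helper name.

```python
import numpy as np

def cost_volume_left(left, right, d_max, cost_fn):
    """Cost volume for the left image: entry (y, x, d) holds
    cost_fn(left[y, x], right[y, x - d]); out-of-range x < d entries get +inf."""
    H, W = left.shape
    vol = np.full((H, W, d_max + 1), np.inf)
    for d in range(d_max + 1):
        vol[:, d:, d] = cost_fn(left[:, d:], right[:, :W - d])
    return vol

# Two illustrative per-pixel cost functions (stand-ins for the patent's
# n cost functions: color, gradient, rank, NCC, ...).
abs_diff = lambda a, b: np.abs(a - b)
sq_diff = lambda a, b: (a - b) ** 2

np.random.seed(0)
left = np.random.rand(8, 16)
right = np.roll(left, -2, axis=1)   # synthetic pair whose true disparity is 2
d_max = 4
y, x = 4, 10                        # the "first pixel" (y, x)
# One aggregated cost curve cost_left_i(0..d_max) per cost function:
cost_left = [cost_volume_left(left, right, d_max, f)[y, x]
             for f in (abs_diff, sq_diff)]
```

Each entry of `cost_left` plays the role of one cost_left i(0,…,d max) curve; both curves dip to zero at the true disparity of the synthetic pair.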
Step S130, the calculation step: obtain, from each cost aggregation function, the matching cost corresponding to each disparity value at the first pixel. In an embodiment, see FIG. 2, step S130 may include steps S131-S132, described below.
Step S131: for each cost aggregation function, compute the local minima of that cost aggregation function over the disparity values at the first pixel, and take each local minimum as the matching cost of the corresponding disparity value at the first pixel.
For example, the local minima of one cost aggregation function cost_left(0,…,d max) at the first pixel (y, x) are computed and denoted min_l_cost j; the matching costs of the disparity values (e.g. 0, …, d max) are then arranged in ascending order as: min_l_cost 1, min_l_cost 2, …, min_l_cost j, …, min_l_cost h. Each matching cost in this ascending sequence is paired one-to-one with its corresponding disparity value:
(min_left 1,min_l_cost 1),(min_left 2,min_l_cost 2),...,(min_left j,min_l_cost j),…,(min_left h,min_l_cost h)
where the subscript j denotes the rank of each matching cost in the ascending sequence, j ∈ {1,2,…,h}; h denotes the total number of matching costs corresponding to the disparity values at the first pixel (y, x) under any one cost function; the disparity value corresponding to matching cost min_l_cost j is min_left j.
Step S132: obtain a first matching cost group from the matching costs corresponding to the disparity values at the first pixel under each cost aggregation function.
In an embodiment, several matching costs are selected according to a preset rule from the matching costs corresponding to the disparity values at the first pixel under each cost aggregation function, to obtain the first matching cost group. The preset rule is: arrange the matching costs corresponding to the disparity values at the first pixel in ascending order, and select from the sorted result the matching costs that are smaller than or equal to a noise threshold, the noise threshold being the sum of the smallest matching cost in the sorted result and a preset noise parameter δ.
For example, in this embodiment the local minima of the n cost aggregation functions at pixel (y, x) of one of the images (e.g. the left image) are computed; following step S131, the matching costs obtained under each cost aggregation function are arranged in ascending order, and several matching costs and their corresponding disparity values are taken from each ascending sequence according to the preset rule. After one-to-one pairing, each first matching cost group can be expressed as
1st first matching cost group: (min_left 1,1,min_l_cost 1,1),(min_left 1,2,min_l_cost 1,2),…,(min_left 1,j,min_l_cost 1,j),...,(min_left 1,m1,min_l_cost 1,m1);
2nd first matching cost group: (min_left 2,1,min_l_cost 2,1),(min_left 2,2,min_l_cost 2,2),…,(min_left 2,j,min_l_cost 2,j),...,(min_left 2,m2,min_l_cost 2,m2);
...
i-th first matching cost group: (min_left i,1,min_l_cost i,1),(min_left i,2,min_l_cost i,2),…,(min_left i,j,min_l_cost i,j),...,(min_left i,mi,min_l_cost i,mi);
...
n-th first matching cost group: (min_left n,1,min_l_cost n,1),(min_left n,2,min_l_cost n,2),…,(min_left n,j,min_l_cost n,j),…,(min_left n,mn,min_l_cost n,mn).
where the subscript i denotes the index of each cost function (or of each cost aggregation function), i ∈ {1,2,…,n}, n being the total number of cost aggregation functions; the subscript j denotes the rank of each matching cost of the first matching cost group in the ascending order, j ∈ {1,2,…,mi}, mi being the total number of matching costs in the corresponding first matching cost group; the disparity value corresponding to matching cost min_l_cost i,j is min_left i,j.
That is, the value of the subscript m1 in the first matching cost group is determined by the smallest matching cost min_l_cost 1,1 and the noise parameter δ, such that min_l_cost 1,j up to min_l_cost 1,m1 are smaller than or equal to min_l_cost 1,1+δ while min_l_cost 1,m1+1 is greater than min_l_cost 1,1+δ; likewise, the value of mi in the i-th matching cost group is determined by the smallest matching cost min_l_cost i,1 and the noise parameter δ, such that min_l_cost i,j up to min_l_cost i,mi are smaller than or equal to min_l_cost i,1+δ while min_l_cost i,mi+1 is greater than min_l_cost i,1+δ. It should be noted that the noise parameter δ measures the noise level of the image and can be set specifically according to the left image acquired in step S110; no limitation is imposed here.
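The extraction of local minima (step S131) and the noise-threshold selection rule (step S132) can be sketched on a single 1-D cost curve; `local_minima`, `first_cost_group`, and `delta` are illustrative names, not from the patent:

```python
import numpy as np

def local_minima(cost):
    """Indices d whose cost is a local minimum of the 1-D cost curve
    (endpoints count if lower than their single neighbour)."""
    c = np.asarray(cost, dtype=float)
    pad = np.concatenate(([np.inf], c, [np.inf]))
    return [d for d in range(len(c)) if pad[d + 1] < pad[d] and pad[d + 1] <= pad[d + 2]]

def first_cost_group(cost, delta):
    """Keep the local minima whose cost <= (smallest local minimum + delta),
    returned as (disparity, cost) pairs in ascending cost order."""
    mins = sorted((cost[d], d) for d in local_minima(cost))
    threshold = mins[0][0] + delta   # noise threshold: min cost + delta
    return [(d, c) for c, d in mins if c <= threshold]

cost = [5.0, 2.0, 4.0, 1.5, 3.0, 6.0, 2.2]   # made-up aggregated costs over d
group = first_cost_group(cost, delta=1.0)     # candidates within delta of best
```

A larger `delta` keeps more candidate minima for the later check and vote; a tiny `delta` collapses the group to the single best minimum.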
Step S140, the checking step: perform a left-right consistency check on the disparity value corresponding to each matching cost; if the disparity value corresponding to a matching cost passes the check, apply a special weight to that disparity value. In an embodiment, see FIG. 2, step S140 may include steps S141-S148, described below.
Step S141: for the disparity value corresponding to each matching cost in the first matching cost group, obtain from that disparity value the second pixel in the other image corresponding to the first pixel.
For example, when the disparity value min_left i,j corresponding to matching cost min_l_cost i,j requires a left-right consistency check, the second pixel (y, x−min_left i,j) in the right image corresponding to the first pixel (y, x) is obtained from min_left i,j.
Step S142: perform cost aggregation on the second pixel according to the cost function corresponding to that matching cost in step S141 and the plurality of preset disparity values, obtaining the cost aggregation function corresponding to the second pixel.
For example, cost aggregation is performed on the second pixel (y, x−min_left i,j) of the other image (e.g. the right image) with the cost function corresponding to matching cost min_l_cost i,j, expressed as
cost_right(0,…,d max)=cost_volume_right(y,x-min_left i,j,0,…,d max)
where cost_right() denotes the cost aggregation function of the second pixel (y, x−min_left i,j) in the right image, and cost_volume_right() denotes the cost function used for cost aggregation on the right image.
Step S143: obtain, from the cost aggregation function of the second pixel, the matching costs corresponding to the disparity values at the second pixel, yielding a second matching cost group.
For example, the local minima of the cost aggregation function cost_right(0,…,d max) at the second pixel (y, x−min_left i,j) are computed and denoted min_r_cost i,k; the matching costs of the disparity values (e.g. 0, …, d max) in ascending order are: min_r_cost i,1, min_r_cost i,2, …, min_r_cost i,k, …, min_r_cost i,h.
The second matching cost group can then be expressed as:
(min_right i,1,min_r_cost i,1),(min_right i,2,min_r_cost i,2),...,(min_right i,k,min_r_cost i,k),…,(min_right i,mi,min_r_cost i,mi)
where the disparity value corresponding to matching cost min_r_cost i,k is min_right i,k.
It should be noted that the second matching cost group is obtained in a manner similar to the first matching cost group, specifically: several matching costs are selected according to a preset rule from the matching costs corresponding to the disparity values at the second pixel. The preset rule is: arrange the matching costs at the second pixel in ascending order, and select from the sorted result the matching costs that are smaller than or equal to a noise threshold, the noise threshold being the sum of the smallest matching cost in the sorted result and a preset noise parameter δ. That is, the value of mi in the second matching cost group is determined by the smallest matching cost min_r_cost i,1 and the noise parameter δ, such that min_r_cost i,k up to min_r_cost i,mi are smaller than or equal to min_r_cost i,1+δ while min_r_cost i,mi+1 is greater than min_r_cost i,1+δ. The noise parameter δ measures the noise level of the image and can be set specifically according to the right image acquired in step S110; no limitation is imposed here.
Step S144: compare the disparity value corresponding to each matching cost in the first matching cost group in turn with the disparity values corresponding to the matching costs in the second matching cost group.
In a specific embodiment, for the disparity value min_left i,j corresponding to any matching cost min_l_cost i,j in the first matching cost group and the disparity value min_right i,k corresponding to any matching cost min_r_cost i,k in the second matching cost group, the left-right consistency check is:
|min_right i,k − min_left i,j| < ε    (1)
where i is the index of the cost aggregation function, j is the rank of each matching cost of the first matching cost group in the ascending order, k is the rank of each matching cost of the second matching cost group in the ascending order, and ε is a preset check threshold with ε ∈ {2,…,5}.
It should be noted that "comparing in turn" means: first compare the disparity value min_left 1,1 of the smallest matching cost with min_right i,1, min_right i,2, …, min_right i,k, …, min_right i,mi; then compare the disparity value min_left 1,2 of the second smallest matching cost with each of them; and so on until finally min_left 1,m1 is compared with each of them. Those skilled in the art should understand that as long as any one of the disparity values min_right i,1, min_right i,2, …, min_right i,k, …, min_right i,mi makes min_left 1,1 satisfy formula (1), the disparity value min_left 1,1 passes the left-right consistency check.
It should be noted that left-right consistency checking (LRC) is a common post-processing technique in stereo matching, often used for occlusion detection. For example, some points appear in only one image and cannot be seen in the other; without special handling of occluded regions, it is impossible to obtain the correct disparity of an occluded point from the limited information of a single image. LRC proceeds as follows: from the left and right input images, obtain the left and right disparity maps; for a point in the left image, find its disparity, then find the corresponding point in the right image and its disparity; if the absolute difference of the two disparities exceeds a threshold, mark the point in the left image as occluded.
Step S145: determine whether the absolute value of the comparison result is smaller than the preset check threshold; if so, go to step S146, otherwise go to step S148. In a specific embodiment, if |min_right i,k − min_left i,j| < ε, step S146 is entered.
Those skilled in the art should understand that as long as one of the disparity values min_right i,1, min_right i,2, …, min_right i,k, …, min_right i,mi makes min_left i,j satisfy formula (1), the disparity value min_left i,j can be considered to pass the left-right consistency check.
Step S146: determine that the disparity value min_left i,j corresponding to matching cost min_l_cost i,j in the first matching cost group passes the left-right consistency check.
Step S147: if the disparity value min_left i,j corresponding to matching cost min_l_cost i,j in the first matching cost group passes the left-right consistency check, apply a special weight to min_left i,j.
The special weighting may be: ex_left_right i,j=ex_left_right(min_left i,j)=v1, where v1 is a user-preset weight value, e.g. v1 > 1.
Step S148: determine that the disparity value min_left i,j corresponding to matching cost min_l_cost i,j in the first matching cost group did not pass the left-right consistency check. In this case, set ex_left_right i,j=ex_left_right(min_left i,j)=v0, e.g. v0 = 1, handled as the ordinary case.
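Steps S141-S148 reduce, for one left-image candidate, to the check of formula (1) plus the v0/v1 weighting. A minimal sketch follows; the helper name `lrc_weight` and the candidate lists are mine, while `epsilon`, `v0`, and `v1` follow the description above:

```python
def lrc_weight(min_left_ij, right_disparities, epsilon=2, v0=1.0, v1=2.0):
    """Left-right consistency check of formula (1): the left candidate passes
    if ANY candidate disparity of the corresponding right pixel is within
    epsilon of it. Passing candidates get the special weight v1, others v0."""
    passed = any(abs(r - min_left_ij) < epsilon for r in right_disparities)
    return v1 if passed else v0

# Candidate disparities extracted at the right pixel (y, x - min_left_ij)
# (made-up values for illustration):
right_candidates = [7, 12, 30]
w_pass = lrc_weight(11, right_candidates)   # |12 - 11| < 2: special weight v1
w_fail = lrc_weight(20, right_candidates)   # nothing within 2: plain weight v0
```

The returned weight is exactly the ex_left_right i,j factor consumed by the voting (or statistical learning) stage.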
Step S150, the weighted voting step: perform weighted voting according to the special weighting result of the disparity value corresponding to each matching cost and the matching costs corresponding to the disparity values at the first pixel, computing a weighted voting value for each disparity value; obtain the best disparity value of the first pixel from the weighted voting values of the disparity values.
In an embodiment, see FIG. 3, step S150 may include steps S151-S153, described below.
Step S151: obtain the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value.
Step S152: perform weighted voting according to the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value, computing the weighted voting value of each disparity value.
In a specific embodiment, weighted voting is performed over each matching cost min_l_cost i,j in the first matching cost group and its corresponding disparity value min_left i,j, and the weighted voting value of a disparity value d is computed as
weighted_vote(d) = Σ_{i=1..n} Σ_{j: |d−min_left i,j|<Δ} ex_left_right i,j · w 1(min_l_cost i,j) · w 2(|d−min_left i,j|)    (2)
where i is the index of the cost aggregation function, j is the rank of each matching cost of the first matching cost group in the ascending order, min_left i,j is the disparity value of the j-th matching cost in the ascending order under the i-th cost aggregation function, ex_left_right i,j is the special weighting result of disparity value min_left i,j, w 1() and w 2() are curve functions modifying the weighted cost, d ∈ [0,…,d max], and Δ is a preset interval threshold with Δ ∈ {3,…,10}.
Step S153: compare the weighted voting values of the disparity values, and take the disparity value with the largest weighted voting value in the comparison as the best disparity value of the first pixel.
For example, from step S152 the weighted voting values weighted_vote(0), …, weighted_vote(d), …, weighted_vote(d max) corresponding to disparity values d = 0, …, d max are obtained; the disparity value with the largest weighted voting value is determined from d* = arg max weighted_vote(d), giving the best disparity value d* of pixel (y, x).
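The voting of steps S151-S153 can be sketched as below. Since the exact curve functions w 1() and w 2() are not pinned down here, simple illustrative choices are used (reciprocal cost and a triangular distance kernel), so this shows the structure of the vote-and-argmax scheme rather than the patent's exact formula:

```python
def weighted_vote(groups, d_max, Delta=3):
    """groups: per cost function, a list of (disparity, cost, ex_weight)
    triples (the first matching cost group plus the LRC special weights).
    Returns (d_star, votes): the argmax disparity and the full vote curve."""
    w1 = lambda cost: 1.0 / (1.0 + cost)             # illustrative: low cost votes more
    w2 = lambda dist: max(0.0, 1.0 - dist / Delta)   # illustrative: nearby d still votes
    votes = [0.0] * (d_max + 1)
    for group in groups:
        for disp, cost, ex in group:
            for d in range(d_max + 1):
                if abs(d - disp) < Delta:
                    votes[d] += ex * w1(cost) * w2(abs(d - disp))
    d_star = max(range(d_max + 1), key=lambda d: votes[d])
    return d_star, votes

groups = [
    [(11, 0.2, 2.0), (30, 0.25, 1.0)],  # cost function 1: two candidate minima
    [(12, 0.3, 2.0)],                   # cost function 2: one candidate minimum
]
d_star, votes = weighted_vote(groups, d_max=40)
```

Note how the candidate at d = 30, though its cost is nearly as low as the others, loses because no other cost function supports it and it lacks the LRC bonus, which is exactly the robustness argument made above.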
In another embodiment, see FIG. 3, the binocular vision stereo matching method disclosed in this application further includes a statistical learning step S160. After the checking step S140, the method enters the weighted voting step S150 or the statistical learning step S160 according to the user's input instruction; the statistical learning step is described below.
Step S160, the statistical learning step: obtain the best disparity value of the first pixel through a statistical learning model. In an embodiment, step S160 may include steps S161-S163, described below.
Step S161: obtain the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value.
Step S162: input the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value into an optimal-disparity statistical learning model, and compute the statistical learning value.
In a specific embodiment, for each matching cost min_l_cost i,j in the first matching cost group and its corresponding disparity value min_left i,j, the statistical learning value is computed as
t(d)=f_model(min_left i,j,min_l_cost i,j,ex_left_right i,j)    (3)
where i is the index of the cost aggregation function with 1<=i<=n, j is the rank of each matching cost of the first matching cost group in the ascending order with 1<=j<=mi, ex_left_right i,j is the special weighting result of disparity value min_left i,j, f_model() is the optimal-disparity statistical learning model, including but not limited to an SVM, ANN, or CNN model, and d ∈ [0,d max].
It should be noted that for the SVM statistical learning model, see Zhang Haoran, "Support Vector Machines", Computer Science, 2002; for the ANN statistical learning model, see Simon Haykin, Neural Networks and Learning Machines, 2008; for the CNN statistical learning model, see "What Do We Understand About Convolutional Networks?", Isma Hadji and Richard P. Wildes (submitted 23 Mar 2018), arXiv:1803.08834. Since the listed statistical learning models all belong to the prior art, they are not described one by one here. Moreover, those skilled in the art should understand that other kinds of statistical learning models may appear as technology develops; such models can still be applied to the technical solution disclosed in this embodiment without limiting it.
It should be noted that f_model() in formula (3) represents a machine learning model trained from a training set. Each element of the training set is ((min_left i,j,min_l_cost i,j,ex_left_right i,j) | d i,j), where d i,j is a disparity value selectable for the first pixel (y, x), generally obtained by a physical method such as structured light; in the set language of mathematics the training set is expressed as {(min_left i,j,min_l_cost i,j,ex_left_right i,j) | 1<=i<=n, 1<=j<=mi}. When training is complete, this statistical learning model is fixed and becomes a function expression with fixed parameters, the parameters having been trained on the training data. Then, for any input pair of left and right images, the statistical learning model yields the corresponding statistical learning value t(d) from the input data (min_left i,j,min_l_cost i,j,ex_left_right i,j) of any pixel of the left image; that is, for any left-right image pair outside the training set, given the three quantities {(min_left i,j,min_l_cost i,j,ex_left_right i,j) | 1<=i<=n, 1<=j<=mi} of any pixel in the left image, the statistical learning value t(d) of that pixel can be obtained, and the disparity value d in t(d) is the best disparity value of that pixel.
Step S163: obtain the best disparity value of the first pixel according to the statistical learning value. In a specific embodiment, the statistical learning value t(d) is obtained from step S162, and the disparity value d in it is taken as the best disparity value of the first pixel (y, x).
It should be noted that the statistical learning step S160 provided in this application is another method of obtaining the best disparity value of the first pixel; it differs from the weighted voting method of step S150 but achieves the same technical effect, and it enriches the technical solutions of this application.
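The train-then-predict flow of step S160 can be illustrated with a deliberately tiny stand-in model: a 1-nearest-neighbour predictor replaces the SVM/ANN/CNN f_model named above, and all data below is made up for illustration:

```python
def train_f_model(training_set):
    """training_set: list of ((min_left, min_cost, ex_weight), d) pairs,
    the ground-truth disparity d coming e.g. from structured light.
    Returns a fixed-parameter predictor: a 1-nearest-neighbour stand-in
    for the SVM/ANN/CNN f_model of formula (3)."""
    def f_model(features):
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        # Predict the disparity of the closest training triple.
        _, d = min(training_set, key=lambda item: dist(item[0], features))
        return d
    return f_model

training_set = [((11, 0.2, 2.0), 11), ((30, 0.25, 1.0), 29), ((5, 0.9, 1.0), 6)]
f_model = train_f_model(training_set)        # "training" fixes the parameters
best_d = f_model((12, 0.22, 2.0))            # inference on an unseen triple
```

Once trained, the model is queried with the same (min_left, min_l_cost, ex_left_right) triples produced by steps S131-S148, mirroring how the fixed-parameter f_model is reused on image pairs outside the training set.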
In this embodiment, a binocular vision stereo matching system 30 based on extreme value checking and weighted voting is also correspondingly disclosed. Referring to FIG. 5, the system includes a memory 301 and a processor 302, where the memory 301 is used to store a program, and the processor 302 is used to execute the program stored in the memory 301 to implement the methods described in steps S110-S160.
Embodiment 2:
On the basis of the binocular vision stereo matching method of Embodiment 1, this embodiment further provides an image vision stereo matching method; referring to FIG. 4, the image vision stereo matching method includes steps S210-S220, described below.
Step S210: acquire images from at least two viewpoints. In a specific embodiment, multiple cameras can be used to image the object to be matched, so that images from multiple viewpoints are obtained.
Step S220: perform stereo matching on each pixel in one of the images by the binocular vision stereo matching method of Embodiment 1, obtaining the best disparity value of each pixel.
Those skilled in the art can understand that the binocular vision stereo matching method of Embodiment 1 obtains the best disparity value of one pixel in an image, from which the matching corresponding point in the other image can be found; the method can then be applied to compute the best disparity value of every pixel in the image, realizing one-to-one stereo matching of pixels between two or more images and thereby achieving image stereo matching.
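The per-pixel loop of step S220 simply applies the single-pixel matcher everywhere. A schematic sketch follows, with a trivial SAD matcher standing in for the full method of Embodiment 1 (all names and the synthetic data are illustrative):

```python
import numpy as np

def disparity_map(left, right, best_disparity):
    """Run a single-pixel stereo matcher over every pixel of the left image,
    producing a dense disparity map (step S220 of Embodiment 2)."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    for y in range(H):
        for x in range(W):
            disp[y, x] = best_disparity(left, right, y, x)
    return disp

def sad_best_disparity(left, right, y, x, d_max=4):
    """Trivial stand-in matcher: exhaustive absolute-difference search
    over a bounded disparity range (NOT the voting method itself)."""
    costs = [abs(left[y, x] - right[y, x - d]) if x - d >= 0 else np.inf
             for d in range(d_max + 1)]
    return int(np.argmin(costs))

left = np.arange(48.0).reshape(6, 8)
right = np.roll(left, -2, axis=1)     # synthetic pair: every pixel shifted by 2
disp = disparity_map(left, right, sad_best_disparity)
```

Swapping `sad_best_disparity` for a routine implementing steps S120-S150 would turn this loop into the full image-level matcher of Embodiment 2.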
Those skilled in the art can understand that all or part of the functions of the various methods in the above embodiments may be implemented in hardware or by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include read-only memory, random access memory, a magnetic disk, an optical disc, a hard disk, etc.; a computer executes the program to realize the above functions. For example, the program is stored in the memory of a device, and when the program in the memory is executed by a processor, all or part of the above functions are realized. In addition, the program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disc, a flash drive, or a removable hard disk, and saved to the memory of the local device by downloading or copying, or used to update the local device's system version; when the program in the memory is executed by a processor, all or part of the functions of the above embodiments are realized.
The present invention has been described above with specific examples, which are intended only to aid understanding of the invention and not to limit it. Those skilled in the art may make several simple deductions, variations, or substitutions based on the ideas of the present invention.

Claims (16)

  1. A binocular vision stereo matching method based on extreme value checking and weighted voting, characterized by comprising the following steps:
    an acquisition step: acquiring images from two viewpoints;
    an aggregation step: performing cost aggregation on a first pixel in one of the images according to a plurality of preset cost functions and a plurality of preset disparity values, to obtain a cost aggregation function corresponding to each of the cost functions, the first pixel being any pixel in that image;
    a calculation step: obtaining, from each of the cost aggregation functions, the matching cost corresponding to each disparity value at the first pixel;
    a checking step: performing a left-right consistency check on the disparity value corresponding to each of the matching costs, and if the disparity value corresponding to a matching cost passes the left-right consistency check, applying a special weight to that disparity value;
    a weighted voting step: performing weighted voting according to the special weighting result of the disparity value corresponding to each of the matching costs and the matching costs corresponding to the disparity values at the first pixel, computing a weighted voting value for each disparity value; and obtaining the best disparity value of the first pixel from the weighted voting values corresponding to the disparity values.
  2. The binocular vision stereo matching method of claim 1, wherein in the aggregation step, performing cost aggregation on the first pixel in one of the images according to the plurality of preset cost functions and the plurality of preset disparity values to obtain the cost aggregation function corresponding to each cost function comprises:
    for each of the cost functions, computing the function value of each disparity value at the first pixel under that cost function, and aggregating the function values of the disparity values at the first pixel to obtain the cost aggregation function corresponding to that cost function.
  3. The binocular vision stereo matching method of claim 2, wherein the cost functions include but are not limited to cost functions based on color, gradient, rank, or NCC; the disparity value is any value in the range [0, d max], where d max denotes the maximum allowed disparity value.
  4. The binocular vision stereo matching method of claim 1, wherein in the calculation step, obtaining from each cost aggregation function the matching cost corresponding to each disparity value at the first pixel comprises:
    for each of the cost aggregation functions, computing the local minima of that cost aggregation function over the disparity values at the first pixel, and taking each local minimum as the matching cost of the corresponding disparity value at the first pixel;
    obtaining a first matching cost group from the matching costs corresponding to the disparity values at the first pixel under each cost aggregation function.
  5. The binocular vision stereo matching method of claim 4, wherein in the checking step, performing the left-right consistency check on the disparity value corresponding to each matching cost comprises:
    for the disparity value corresponding to each matching cost in the first matching cost group, obtaining from that disparity value the second pixel in the other image corresponding to the first pixel;
    performing cost aggregation on the second pixel according to the cost function corresponding to that matching cost and the plurality of preset disparity values, to obtain the cost aggregation function corresponding to the second pixel;
    obtaining from the cost aggregation function of the second pixel the matching costs corresponding to the disparity values at the second pixel, yielding a second matching cost group;
    comparing the disparity value corresponding to each matching cost in the first matching cost group in turn with the disparity values corresponding to the matching costs in the second matching cost group, and if the absolute value of a comparison result is smaller than a preset check threshold, determining that the disparity value corresponding to that matching cost in the first matching cost group passes the left-right consistency check.
  6. The binocular vision stereo matching method of claim 5, wherein
    several matching costs are selected according to a preset rule from the matching costs corresponding to the disparity values at the first pixel under each cost aggregation function to obtain the first matching cost group, and several matching costs are selected from the matching costs corresponding to the disparity values at the second pixel to obtain the second matching cost group;
    the preset rule comprises: arranging the matching costs corresponding to the disparity values at the first pixel or at the second pixel in ascending order, and selecting from the sorted result the matching costs that are smaller than or equal to a noise threshold, the noise threshold being the sum of the smallest matching cost in the sorted result and a preset noise parameter δ.
  7. The binocular vision stereo matching method of claim 6, wherein for the disparity value min_left i,j corresponding to each matching cost min_l_cost i,j in the first matching cost group and the disparity value min_right i,k corresponding to each matching cost min_r_cost i,k in the second matching cost group, performing the left-right consistency check comprises:
    |min_right i,k − min_left i,j| < ε
    where i is the index of the cost aggregation function, j is the rank of each matching cost of the first matching cost group in the ascending order, k is the rank of each matching cost of the second matching cost group in the ascending order, and ε is a preset check threshold with ε ∈ {2,…,5}.
  8. The binocular vision stereo matching method of claim 7, wherein in the weighted voting step, performing weighted voting according to the special weighting result of the disparity value corresponding to each matching cost and the matching costs corresponding to the disparity values at the first pixel to compute the weighted voting value of each disparity value comprises:
    obtaining the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value;
    performing weighted voting according to the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value, computing the weighted voting value of each disparity value.
  9. The binocular vision stereo matching method of claim 8, wherein weighted voting is performed over each matching cost min_l_cost i,j in the first matching cost group and its corresponding disparity value min_left i,j, and the weighted voting value of a disparity value d is computed as
    weighted_vote(d) = Σ_{i=1..n} Σ_{j: |d−min_left i,j|<Δ} ex_left_right i,j · w 1(min_l_cost i,j) · w 2(|d−min_left i,j|)
    where i is the index of the cost aggregation function, j is the rank of each matching cost of the first matching cost group in the ascending order, min_left i,j is the disparity value of the j-th matching cost in the ascending order under the i-th cost aggregation function, ex_left_right i,j is the special weighting result of disparity value min_left i,j, w 1() and w 2() are curve functions modifying the weighted cost, d ∈ [0,d max], and Δ is a preset interval threshold with Δ ∈ {3,…,10}.
  10. The binocular vision stereo matching method of claim 9, wherein in the weighted voting step, obtaining the best disparity value of the first pixel from the weighted voting values corresponding to the disparity values comprises:
    comparing the weighted voting values corresponding to the disparity values, and taking the disparity value with the largest weighted voting value in the comparison as the best disparity value of the first pixel.
  11. The binocular vision stereo matching method of any one of claims 1-10, further comprising a statistical learning step after the checking step, the statistical learning step comprising:
    obtaining the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value;
    inputting the matching costs in the first matching cost group, the disparity value corresponding to each matching cost, and the special weighting result of each such disparity value into an optimal-disparity statistical learning model, and obtaining the best disparity value of the first pixel from the computed statistical learning value.
  12. The binocular vision stereo matching method of claim 11, wherein statistical learning is performed over each matching cost min_l_cost i,j in the first matching cost group and its corresponding disparity value min_left i,j, and the statistical learning value is computed as
    t(d)=f_model(min_left i,j,min_l_cost i,j,ex_left_right i,j)
    where i is the index of the cost aggregation function, j is the rank of each matching cost of the first matching cost group in the ascending order, ex_left_right i,j is the special weighting result of disparity value min_left i,j, f_model() is the optimal-disparity statistical learning model, including but not limited to an SVM, ANN, or CNN model, and d ∈ [0,d max].
  13. The binocular vision stereo matching method of claim 11, wherein after the checking step, the weighted voting step or the statistical learning step is entered according to a user input instruction.
  14. An image vision stereo matching method, characterized by comprising:
    acquiring images from at least two viewpoints;
    performing stereo matching on each pixel in one of the images by the binocular vision stereo matching method of any one of claims 1-13, obtaining the best disparity value of each pixel.
  15. A binocular vision stereo matching system based on extreme value checking and weighted voting, characterized by comprising:
    a memory for storing a program;
    a processor for executing the program stored in the memory to implement the method of any one of claims 1-14.
  16. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the method of any one of claims 1-14.
PCT/CN2019/076888 2019-03-04 2019-03-04 Binocular vision stereo matching method based on extreme value checking and weighted voting WO2020177060A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/076888 WO2020177060A1 (zh) 2019-03-04 2019-03-04 Binocular vision stereo matching method based on extreme value checking and weighted voting


Publications (1)

Publication Number Publication Date
WO2020177060A1 true WO2020177060A1 (zh) 2020-09-10

Family

ID=72337391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/076888 WO2020177060A1 (zh) 2019-03-04 2019-03-04 Binocular vision stereo matching method based on extreme value checking and weighted voting

Country Status (1)

Country Link
WO (1) WO2020177060A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680510A (zh) * 2013-12-18 2015-06-03 Peking University Shenzhen Graduate School Radar disparity map optimization method, stereo matching disparity map optimization method and system
CN105187812A (zh) * 2015-09-02 2015-12-23 China North Industries Computer Application Technology Research Institute Binocular vision stereo matching algorithm
US20180063516A1 (en) * 2016-07-29 2018-03-01 Applied Minds, Llc Methods and Associated Devices and Systems for Enhanced 2D and 3D Vision
CN108682026A (zh) * 2018-03-22 2018-10-19 Liaoning University of Technology Binocular vision stereo matching method based on fusion of multiple matching primitives


Similar Documents

Publication Publication Date Title
US11232286B2 (en) Method and apparatus for generating face rotation image
WO2022001236A1 (zh) 三维模型生成方法、装置、计算机设备及存储介质
CN104008538B (zh) 基于单张图像超分辨率方法
KR20180087994A (ko) 스테레오 매칭 방법 및 영상 처리 장치
CN111340077B (zh) 基于注意力机制的视差图获取方法和装置
US11080833B2 (en) Image manipulation using deep learning techniques in a patch matching operation
CN109978934B (zh) 一种基于匹配代价加权的双目视觉立体匹配方法及系统
WO2019041660A1 (zh) 人脸去模糊方法及装置
US20220343525A1 (en) Joint depth prediction from dual-cameras and dual-pixels
CN109978928B (zh) 一种基于加权投票的双目视觉立体匹配方法及其系统
CN114429555A (zh) 由粗到细的图像稠密匹配方法、系统、设备及存储介质
CN115311186A (zh) 一种红外与可见光图像跨尺度注意力对抗融合方法及终端
CN112419419A (zh) 用于人体姿势和形状估计的系统和方法
CN117237431A (zh) 深度估计模型的训练方法、装置、电子设备及存储介质
CN118261985B (zh) 一种基于立体视觉的三坐标机智能定位检测方法及系统
CN109961092A (zh) 一种基于视差锚点的双目视觉立体匹配方法及系统
CN115841602A (zh) 基于多视角的三维姿态估计数据集的构建方法及装置
CN113610969B (zh) 一种三维人体模型生成方法、装置、电子设备及存储介质
JP6359985B2 (ja) デプス推定モデル生成装置及びデプス推定装置
US20230401737A1 (en) Method for training depth estimation model, training apparatus, and electronic device applying the method
CN114401446B (zh) 人体姿态迁移方法、装置、系统、电子设备以及存储介质
WO2020177060A1 (zh) 一种基于极值校验和加权投票的双目视觉立体匹配方法
CN115359508A (zh) 通过专家的神经元优化以提高的效率执行复杂优化任务
WO2020177061A1 (zh) 一种基于极值校验的双目视觉立体匹配方法及系统
CN114841887A (zh) 一种基于多层次差异学习的图像恢复质量评价方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19918045

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19918045

Country of ref document: EP

Kind code of ref document: A1