CN103325120A - Rapid self-adaption binocular vision stereo matching method capable of supporting weight - Google Patents


Info

Publication number
CN103325120A
CN103325120A · CN2013102689033A · CN201310268903A
Authority
CN
China
Prior art keywords
pixel
parallax
support
window
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102689033A
Other languages
Chinese (zh)
Inventor
张葛祥 (Zhang Gexiang)
王涛 (Wang Tao)
关桃 (Guan Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN2013102689033A priority Critical patent/CN103325120A/en
Publication of CN103325120A publication Critical patent/CN103325120A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a fast adaptive support-weight binocular vision stereo matching method. The method comprises the following steps: reading an existing binocular image pair to be matched, computing the matching cost, aggregating the matching cost with adaptive weights, computing the initial disparity, correcting the initial disparity to obtain the final disparity matrix, and generating and outputting a disparity map. The method can be used in the technical field of stereoscopic display and improves the stereo matching result.

Description

A fast adaptive support-weight binocular vision stereo matching method
Technical field
The present invention relates to the field of image display technology, and in particular to a fast adaptive support-weight binocular vision stereo matching method.
Background technology
Vision is an important means by which humans know and perceive the world; roughly 75% of the information humans acquire about the outside world comes through the visual system. From traditional black-and-white photographs and black-and-white television to today's high-resolution color digital photographs and high-definition digital television, human expectations for the visual experience keep rising. Although traditional two-dimensional video can provide high-definition planar information, humans live in a three-dimensional world, and planar video can never give people an "immersive" visual experience. Binocular stereo vision breaks through the "one-eyed" limitation of two-dimensional video: it uses computers to simulate the human visual system and recovers the three-dimensional information of a scene from two or more two-dimensional views, so that people can experience a real three-dimensional world through stereoscopically displayed video. Binocular stereo vision is an important research field of computer vision and comprises four steps: image acquisition, camera calibration, stereo matching, and 3D reconstruction. Stereo matching is the key technique among them, and its accuracy directly affects the quality of the 3D reconstruction.
Although a large number of stereo matching methods already exist, many problems remain in practical applications. By optimization strategy, stereo matching methods fall into two classes: global methods and local methods. Global methods achieve high matching accuracy, but their computational structure is complex and hard to implement in hardware; local methods are structurally simple and hardware-friendly, but their matching accuracy is comparatively low. Since Yoon proposed the adaptive support-weight method, the matching performance of local methods has improved greatly and has even surpassed some global methods. However, the Yoon adaptive support-weight method still has a major problem: it is slow, taking longer to compute than other local algorithms. A fast adaptive support-weight method that combines high speed with high matching performance would therefore be of great significance and would help bring stereo matching technology into practical applications.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiency of the Yoon adaptive support-weight method by proposing a fast adaptive support-weight method based on an extended rank transform. Taking the left view of a binocular stereo image pair as the image to be matched and the right view as the matching image, the method finds, for every pixel to be matched in the left view, the corresponding matching point in the right view, thereby obtaining the disparity map of the left view.
To solve this technical problem, the present invention adopts a technique comprising the following steps:
S1. Read an existing binocular image pair to be matched, I_l and I_r, and obtain the size and color-channel information of the images, where I_l denotes the left view (the image to be matched) and I_r denotes the right view (the matching image);
S2. Compute the matching cost between pixels of I_l and I_r, comprising:
S21. For the image to be matched I_l, determine a square support window N and compute the gray-level difference diff(p, q) between the center pixel and each supporting pixel in the window, where diff(p, q) = I(p) - I(q), and I(p), I(q) are the gray values of the center pixel p and the supporting pixel q, respectively;
S22. According to the gray-level difference diff(p, q) obtained in S21, assign each pixel in the square support window N to one of 5 levels, defined as follows:

level(q) = -2,  if diff(p, q) < -s
level(q) = -1,  if -s ≤ diff(p, q) < -t
level(q) =  0,  if -t ≤ diff(p, q) ≤ t
level(q) =  1,  if t < diff(p, q) ≤ s
level(q) =  2,  if s < diff(p, q)

where s and t are thresholds set empirically so as to minimize the influence of image noise;
S23. Compute the initial similarity measure S_d:
Let fuz denote the numerical matrix obtained after the rank transform; its size equals that of the square support window N:

fuz = -2 if diff(p, q) < -s;  -1 if -s ≤ diff(p, q) < -t;  0 if -t ≤ diff(p, q) ≤ t;  1 if t < diff(p, q) ≤ s;  2 if s < diff(p, q)

Apply the rank transform to the pixel to be matched in I_l and to the candidate matching pixel in I_r, yielding two rank-transform matrices fuz_l and fuz_r. Count the positions in the square support window at which fuz_l and fuz_r carry the same level; this count is the initial similarity measure S_d:

S_d = Σ_{q∈N} m,  with m = 1 if fuz_l = fuz_r at the corresponding position, otherwise m = 0,

where m indicates whether fuz_l and fuz_r have the same level at the corresponding position;
S24. In the image to be matched, within an n × n square statistic window M centered on the pixel to be matched, accumulate the initial similarity measure corresponding to each disparity value d ∈ D, and obtain the matching cost between the pixel to be matched and the candidate matching pixel in the matching image from the ERT similarity measure function:

C_ERT(q, q̄_d) = Σ_{n×n} S_d(q)

where d denotes the horizontal disparity between the pixel to be matched and the candidate matching pixel, and D = {d_min, …, d_max};
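The extended rank transform and the initial similarity measure of S21–S23 can be sketched in a few lines. This is an illustrative reimplementation, not the patented code; the threshold values s = 30 and t = 8 are assumptions chosen only so that all five intervals of the level definition are non-empty (the patent says only that s and t are set empirically).

```python
import numpy as np

def rank5(window, s=30, t=8):
    """Extended rank transform: quantize the gray difference between the
    window's center pixel p and every pixel q into the 5 levels
    {-2, -1, 0, 1, 2}, using diff(p, q) = I(p) - I(q).
    s and t are empirical noise thresholds (assumed values here)."""
    window = np.asarray(window, dtype=np.int32)
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    diff = center - window
    # Conditions are tested in order, so each pixel gets the first level
    # whose interval contains diff(p, q).
    return np.select(
        [diff < -s, diff < -t, diff <= t, diff <= s],
        [-2, -1, 0, 1],
        default=2,
    )

def initial_similarity(fuz_l, fuz_r):
    """S_d: number of window positions where the two rank-transform
    matrices carry the same level."""
    return int(np.sum(fuz_l == fuz_r))
```

For example, a 3 × 3 window with center gray value 100 and neighbors 60, 80, 120, 140 quantizes to the levels 2, 1, -1, -2 respectively, and comparing a rank matrix with itself returns the full window area as the similarity value.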
S3. Aggregate the matching cost with adaptive weights, comprising:
S31. Compute the support weight w(p, q): using color similarity and geometric proximity, compute the support weight w(p, q) that the supporting pixel q in the matching support window contributes to the pixel to be matched p:

w(p, q) = f_s(Δc_pq) · f_p(Δg_pq) = exp(-(Δc_pq/γ_c + Δg_pq/γ_p))

where f_s(Δc_pq) is the grouping strength determined by color similarity and f_p(Δg_pq) is the grouping strength determined by geometric proximity. Δc_pq is the Euclidean distance between the two pixel colors c_p and c_q in RGB color space, with c_p = [R_p, G_p, B_p], c_q = [R_q, G_q, B_q] and

Δc_pq = sqrt((R_p - R_q)² + (G_p - G_q)² + (B_p - B_q)²)

Δg_pq is the Euclidean distance between the spatial positions of the center pixel and the supporting pixel: if pixel p has image coordinates p(x, y) and pixel q has image coordinates q(x', y'), then

Δg_pq = sqrt((x - x')² + (y - y')²)

γ_c and γ_p are user-specified parameters that adjust the influence of color similarity and geometric proximity, respectively, on the support weight;
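The support weight of S31 is a product of two exponentials of Euclidean distances. A minimal sketch follows; the default values γ_c = 19 and γ_p = 12.5 are the ones used later in the embodiment, and the function name is mine:

```python
import math

def support_weight(color_p, color_q, pos_p, pos_q, gamma_c=19.0, gamma_p=12.5):
    """w(p, q) = f_s(dc) * f_p(dg) = exp(-(dc/gamma_c + dg/gamma_p)),
    where dc is the RGB color distance and dg the spatial distance
    between pixels p and q."""
    dc = math.sqrt(sum((a - b) ** 2 for a, b in zip(color_p, color_q)))
    dg = math.hypot(pos_p[0] - pos_q[0], pos_p[1] - pos_q[1])
    return math.exp(-(dc / gamma_c + dg / gamma_p))
```

The weight is 1 when q coincides with p in both color and position, and decays toward 0 as either distance grows, so pixels that are similar in color and spatially close dominate the aggregation.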
S32. From the matching cost C_ERT(q, q̄_d) obtained in S24 and the support weight w(p, q) obtained in S31, the weighted aggregated matching cost is:

E(p, p̄_d) = Σ_{q∈N_p, q̄∈N_p̄d} w(p, q) · C_ERT(q, q̄_d) / Σ_{q∈N_p, q̄∈N_p̄d} w(p, q)

where p̄_d and q̄_d denote the matching pixels in the matching image corresponding to the pixels p and q of the image to be matched when the disparity is d, N_p denotes the support window in the reference image, N_p̄d denotes the corresponding support window in the target image, and N_p = N_p̄d;
S4. Compute the initial disparity: apply the local optimization method WTA (Winner-Take-All) to the weighted aggregated matching cost obtained in S3 to find the maximum aggregated value; the disparity value corresponding to this maximum is the initial disparity d_p of the pixel. The initial disparity of every pixel is saved into the initial disparity matrix:

d_p = argmax_{d∈D} E(p, p̄_d);
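Steps S32 and S4 amount, for one pixel, to a weighted average of the window costs per candidate disparity followed by an arg-max. A single-pixel sketch under assumed array shapes (the names and shapes are mine, not from the patent):

```python
import numpy as np

def weighted_wta(window_costs, window_weights, disparities):
    """window_costs: array (D, k, k) of ERT costs C_ERT over the k-by-k
    support window, one slice per candidate disparity; window_weights:
    array (k, k) of support weights w(p, q). Returns the WTA disparity
    d_p and the aggregated scores E(p, p_bar_d)."""
    window_costs = np.asarray(window_costs, dtype=float)
    window_weights = np.asarray(window_weights, dtype=float)
    # E(p, p_bar_d) = sum_q w(p, q) * C_ERT(q, q_bar_d) / sum_q w(p, q)
    E = (window_costs * window_weights).sum(axis=(1, 2)) / window_weights.sum()
    return disparities[int(np.argmax(E))], E
```

With uniform weights this degenerates to a plain window average, which shows why the adaptive weights matter: they let color-similar, nearby pixels dominate E while suppressing pixels that likely lie at a different depth.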
S5. Correct the initial disparity obtained in S4 to get the final disparity matrix, comprising:
S51. Determine a correction window N_c centered on the pixel p to be corrected, and adaptively assign each pixel in the correction window a suitable support weight w_c according to color similarity and geometric proximity:

w_c(p, q) = exp(-(Δc_pq/γ_c + Δg_pq/γ_p))

S52. Examine the distribution of the initial disparities of all pixels in the correction window; count the number of occurrences of each disparity d ∈ D in the correction window, and aggregate the occurrences of each disparity value d with the corresponding weights. The disparity with the maximum aggregated value is the final disparity d_p_final of the pixel to be corrected, which is saved into the final disparity matrix:

d_p_final = argmax_{d∈D} { Σ_{q∈N_c} w_c(p, q) × k },  with k = 1 if d_p(q) = d, otherwise k = 0,

where k indicates whether the initial disparity of a pixel in the correction window equals the disparity d being counted;
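The correction of S51–S52 is a weighted mode filter over the initial disparity map: for each candidate d, sum the weights of the correction-window pixels whose initial disparity equals d, and pick the d with the largest sum. A sketch with assumed shapes:

```python
import numpy as np

def correct_disparity(init_disp, weights, disparities):
    """init_disp: (k, k) initial disparities d_p(q) inside the correction
    window; weights: (k, k) support weights w_c(p, q). Returns
    d_p_final = argmax_d sum_q w_c(p, q) * [d_p(q) == d]."""
    init_disp = np.asarray(init_disp)
    weights = np.asarray(weights, dtype=float)
    scores = [weights[init_disp == d].sum() for d in disparities]
    return disparities[int(np.argmax(scores))]
```

With uniform weights this is a plain majority vote; the adaptive weights bias the vote toward pixels likely at the same depth as the center pixel, which is what lets the filter remove isolated mismatches without blurring depth edges.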
S6. Generate the disparity map and output the result: map the final disparity values d_p_final obtained in S5 to the corresponding gray range [0, 255] with mapping ratio t, obtaining a gray-level image that represents the disparity information.
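The mapping of S6 is a per-pixel multiplication by the ratio t. A minimal sketch, deriving t as ⌊255/d_max⌋ when it is not given (an assumption on my part, though it reproduces the t = 4 used in the embodiment for d_max = 59):

```python
import numpy as np

def disparity_to_gray(disp, t=None):
    """Map a disparity matrix into the gray range [0, 255]; larger
    disparities come out brighter. If t is not given, use the largest
    integer ratio that keeps d_max * t within 255."""
    disp = np.asarray(disp, dtype=np.int64)
    if t is None:
        t = 255 // int(disp.max())
    return (disp * t).astype(np.uint8)
```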
Further, the thresholds described in S22 satisfy t < s (so that the intervals -s ≤ diff(p, q) < -t and t < diff(p, q) ≤ s of the level definition are non-empty).
Further, the pixels in the correction window described in S51 should come from the same depth as far as possible; the square adaptive support-weight correction window satisfies this condition.
Through a fast adaptive support-weight method based on an extended rank transform, the present invention takes the left view of a binocular stereo image pair as the image to be matched and the right view as the matching image, finds the corresponding matching point in the right view for every pixel to be matched in the left view, and thereby obtains the disparity map of the left view. Compared with other methods, it computes faster and matches better, which makes it easier to apply stereo matching technology to practical problems.
Description of drawings
Fig. 1 is a schematic diagram of the steps of the method of the present invention.
Embodiment
The present invention is described in further detail below in conjunction with an embodiment.
The objective of this fast adaptive support-weight binocular vision stereo matching method is to obtain quickly a high-precision dense disparity map for the image pair to be matched. This embodiment takes the Teddy standard test image pair provided by the Middlebury test platform as the experimental subject, with the left view as the image to be matched and the right view as the matching image; for every pixel to be matched in the left view, the corresponding matching point is found in the right view. Following the flow shown in Fig. 1, the following steps are taken:
S1: Read the binocular image pair to be matched. Input the Teddy standard test image pair provided by the Middlebury test platform, with left view I_l and right view I_r. Reading the image pair includes obtaining information such as image size and color channels.
S2: Compute the matching cost between pixels of I_l and I_r using the extended rank transform function.
First determine the pixel to be matched p and the square support window size N = 25, and compute the gray-level difference between the center pixel p and each supporting pixel q in the square support window, diff(p, q) = I(p) - I(q), where I(p) and I(q) are the gray values of pixels p and q, respectively.
Then, according to the value of diff(p, q), assign each pixel in the support window to one of 5 levels:

fuz = -2 if diff(p, q) < -s;  -1 if -s ≤ diff(p, q) < -t;  0 if -t ≤ diff(p, q) ≤ t;  1 if t < diff(p, q) ≤ s;  2 if s < diff(p, q)
Apply the rank transform to the left and right views, obtaining two rank-transform matrices fuz_l(p) and fuz_r(p̄_d). Count the number of positions within the square support window at which fuz_l(p) and fuz_r(p̄_d) carry the same level; this yields the initial similarity measure S_d between the pixel to be matched p and the candidate matching pixel p̄_d:

S_d(q, q̄_d) = Σ_{q∈N} m,  with m = 1 if fuz_l(p) = fuz_r(p̄_d), otherwise m = 0

Finally, within the 3 × 3 square statistic window centered on the pixel to be matched p, accumulate the initial similarity measure corresponding to each disparity value d ∈ D, obtaining the matching cost C_ERT(q, q̄_d) between the pixel to be matched and the candidate matching pixel. Here the disparity set is initialized as D = {0, 1, 2, 3, …, 57, 58, 59}.
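Accumulating S_d over the 3 × 3 statistic window for every pixel and every disparity can be done with a shifted-add box filter over a cost volume. A sketch assuming an S of shape (D, H, W) and zero padding at the image borders (both assumptions mine; the patent does not specify border handling):

```python
import numpy as np

def box_sum3(S):
    """Sum each (H, W) slice of S over its 3x3 neighborhood
    (zero-padded), turning per-pixel similarity values S_d into the
    ERT cost volume C_ERT."""
    S = np.asarray(S, dtype=np.int64)
    P = np.pad(S, ((0, 0), (1, 1), (1, 1)))  # pad H and W, not D
    out = np.zeros_like(S)
    H, W = S.shape[1], S.shape[2]
    # Nine shifted copies of the padded volume sum to the box filter.
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += P[:, dy:dy + H, dx:dx + W]
    return out
```

Nine shifted adds per disparity keep the cost independent of the statistic-window content, which is one reason the small n = 3 window stays cheap even for 60 disparity candidates.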
S3: Aggregate the matching cost with adaptive weights.
First, using color similarity and geometric proximity, compute the support weight w(p, q) that the supporting pixel q in the matching support window contributes to the pixel to be matched p:

w(p, q) = f_s(Δc_pq) · f_p(Δg_pq) = exp(-(Δc_pq/19 + Δg_pq/12.5))
Δc_pq = sqrt((R_p - R_q)² + (G_p - G_q)² + (B_p - B_q)²)
Δg_pq = sqrt((x - x')² + (y - y')²)

Then, using the square support-weight window, aggregate the matching costs with their corresponding support weights; the support window size is N = N_p = N_p̄d = 25:

E(p, p̄_d) = Σ_{q∈N_p, q̄∈N_p̄d} w(p, q) · C_ERT(q, q̄_d) / Σ_{q∈N_p, q̄∈N_p̄d} w(p, q)
S4: Compute the initial disparity from the aggregation result. Apply the local optimization method WTA; the disparity value corresponding to the maximum aggregated value is the initial disparity d_p of the pixel, which is saved into the initial disparity matrix:

d_p = argmax_{d∈D} E(p, p̄_d)
S5: Correct the initial disparity.
First, determine the correction window N_c = 21 centered on the pixel p to be corrected, and, according to color similarity and geometric proximity, compute the support weight w_c(p, q) of each pixel q in the correction window with respect to the pixel p to be corrected:

w_c(p, q) = exp(-(Δc_pq/19 + Δg_pq/10.5))

Then examine the distribution of the initial disparities of all pixels in the correction window, count the number of occurrences of each disparity d ∈ D in the correction window, and aggregate the occurrences of each disparity value d with the corresponding weights. The disparity with the maximum aggregated value is the final disparity d_p_final of the pixel to be corrected, which is saved into the final disparity matrix:

d_p_final = argmax_{d∈D} { Σ_{q∈N_c} w_c(p, q) × k },  with k = 1 if d_p(q) = d, otherwise k = 0
S6: Generate the disparity map and output the result. Map the disparity values in the final disparity matrix to the corresponding gray range [0, 255] with mapping ratio t = 4. The disparity values in the disparity set D map as follows:

0 × 4 = 0      1 × 4 = 4      …    13 × 4 = 52    14 × 4 = 56
15 × 4 = 60    16 × 4 = 64    …    28 × 4 = 112   29 × 4 = 116
30 × 4 = 120   31 × 4 = 124   …    43 × 4 = 172   44 × 4 = 176
45 × 4 = 180   46 × 4 = 184   …    58 × 4 = 232   59 × 4 = 236

The larger the disparity value, the closer the mapped value is to 255 and the brighter the pixel in the disparity map; the smaller the disparity value, the closer the mapped value is to 0 and the darker the pixel.
The present invention takes I_l as the image to be matched and I_r as the matching image, finds the corresponding matching point in the right view for every pixel in the left view, and obtains the disparity map of the left view. Table 1 gives a quantitative comparison between the results of this embodiment and those of the Yoon adaptive support-weight method. As Table 1 shows, the mismatch rates of the present invention in non-occluded regions, depth-discontinuity regions, and over all regions are all lower than those of the Yoon adaptive support-weight method, and the stereo matching time of the present invention is about 1/20 of that of the Yoon method, i.e., matching is considerably faster.
Table 1
[Table 1, reproduced only as an image in the original, compares the mismatch rates (non-occluded, depth-discontinuity, and all regions) and the running times of the present method and the Yoon adaptive support-weight method.]
It must be pointed out here that the above embodiment serves only to further illustrate the present invention, so that those of ordinary skill in the art may understand it better. The present invention has disclosed its preferred embodiment in words; optimizations and variations will be apparent from reading this technical description, and such improvements, insofar as they do not depart from the scope and spirit of the present invention, still fall within the protection scope of the claims of the present invention.

Claims (3)

1. A fast adaptive support-weight binocular vision stereo matching method, characterized by comprising the following steps:
S1. Read an existing binocular image pair to be matched, I_l and I_r, and obtain the size and color-channel information of the images, where I_l denotes the left view (the image to be matched) and I_r denotes the right view (the matching image);
S2. Compute the matching cost between pixels of I_l and I_r, comprising:
S21. For the image to be matched I_l, determine a square support window N and compute the gray-level difference diff(p, q) between the center pixel and each supporting pixel in the window, where diff(p, q) = I(p) - I(q), and I(p), I(q) are the gray values of the center pixel p and the supporting pixel q, respectively;
S22. According to the gray-level difference diff(p, q) obtained in S21, assign each pixel in the square support window N to one of 5 levels, defined as follows:

level(q) = -2 if diff(p, q) < -s;  -1 if -s ≤ diff(p, q) < -t;  0 if -t ≤ diff(p, q) ≤ t;  1 if t < diff(p, q) ≤ s;  2 if s < diff(p, q)

where s and t are thresholds set empirically so as to minimize the influence of image noise;
S23. Compute the initial similarity measure S_d: let fuz denote the numerical matrix obtained after the rank transform, whose size equals that of the square support window N. Apply the rank transform to the pixel to be matched in I_l and to the candidate matching pixel in I_r, yielding two rank-transform matrices fuz_l and fuz_r; count the positions in the square support window at which fuz_l and fuz_r carry the same level, obtaining the initial similarity measure

S_d = Σ_{q∈N} m,  with m = 1 if fuz_l = fuz_r at the corresponding position, otherwise m = 0,

where m indicates whether fuz_l and fuz_r have the same level at the corresponding position;
S24. In the image to be matched, within an n × n square statistic window M centered on the pixel to be matched, accumulate the initial similarity measure corresponding to each disparity value d ∈ D, and obtain the matching cost between the pixel to be matched and the candidate matching pixel in the matching image from the ERT similarity measure function

C_ERT(q, q̄_d) = Σ_{n×n} S_d(q),

where d denotes the horizontal disparity between the pixel to be matched and the candidate matching pixel, and D = {d_min, …, d_max};
S3. Aggregate the matching cost with adaptive weights, comprising:
S31. Compute the support weight w(p, q): using color similarity and geometric proximity, compute the support weight w(p, q) that the supporting pixel q in the matching support window contributes to the pixel to be matched p:

w(p, q) = f_s(Δc_pq) · f_p(Δg_pq) = exp(-(Δc_pq/γ_c + Δg_pq/γ_p))

where f_s(Δc_pq) is the grouping strength determined by color similarity and f_p(Δg_pq) is the grouping strength determined by geometric proximity; Δc_pq is the Euclidean distance between the two pixel colors c_p = [R_p, G_p, B_p] and c_q = [R_q, G_q, B_q] in RGB color space,

Δc_pq = sqrt((R_p - R_q)² + (G_p - G_q)² + (B_p - B_q)²);

Δg_pq is the Euclidean distance between the spatial positions of the center pixel and the supporting pixel: if pixel p has image coordinates p(x, y) and pixel q has image coordinates q(x', y'), then

Δg_pq = sqrt((x - x')² + (y - y')²);

γ_c and γ_p are user-specified parameters that adjust the influence of color similarity and geometric proximity, respectively, on the support weight;
S32. From the matching cost C_ERT(q, q̄_d) obtained in S24 and the support weight w(p, q) obtained in S31, the weighted aggregated matching cost is

E(p, p̄_d) = Σ_{q∈N_p, q̄∈N_p̄d} w(p, q) · C_ERT(q, q̄_d) / Σ_{q∈N_p, q̄∈N_p̄d} w(p, q),

where p̄_d and q̄_d denote the matching pixels in the matching image corresponding to the pixels p and q of the image to be matched when the disparity is d, N_p denotes the support window in the reference image, N_p̄d denotes the corresponding support window in the target image, and N_p = N_p̄d;
S4. Compute the initial disparity: apply the local optimization method WTA (Winner-Take-All) to the weighted aggregated matching cost obtained in S3 to find the maximum aggregated value; the disparity value corresponding to this maximum is the initial disparity d_p of the pixel, and the initial disparity of every pixel is saved into the initial disparity matrix: d_p = argmax_{d∈D} E(p, p̄_d);
S5. Correct the initial disparity obtained in S4 to get the final disparity matrix, comprising:
S51. Determine a correction window N_c centered on the pixel p to be corrected, and adaptively assign each pixel in the correction window a suitable support weight w_c according to color similarity and geometric proximity;
S52. Examine the distribution of the initial disparities of all pixels in the correction window, count the number of occurrences of each disparity d ∈ D in the correction window, and aggregate the occurrences of each disparity value d with the corresponding weights; the disparity with the maximum aggregated value is the final disparity d_p_final of the pixel to be corrected, which is saved into the final disparity matrix:

d_p_final = argmax_{d∈D} { Σ_{q∈N_c} w_c(p, q) × k },  with k = 1 if d_p(q) = d, otherwise k = 0,

where k indicates whether the initial disparity of a pixel in the correction window equals the disparity d being counted;
S6. Generate the disparity map and output the result: map the final disparity values d_p_final obtained in S5 to the corresponding gray range [0, 255] with mapping ratio t, obtaining a gray-level image that represents the disparity information.
2. The fast adaptive support-weight binocular vision stereo matching method according to claim 1, characterized in that the thresholds described in S22 satisfy t < s.
3. The fast adaptive support-weight binocular vision stereo matching method according to claim 1, characterized in that the pixels in the correction window described in S51 should come from the same depth as far as possible, and the square adaptive support-weight correction window satisfies this condition.
CN2013102689033A 2013-06-30 2013-06-30 Rapid self-adaption binocular vision stereo matching method capable of supporting weight Pending CN103325120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102689033A CN103325120A (en) 2013-06-30 2013-06-30 Rapid self-adaption binocular vision stereo matching method capable of supporting weight


Publications (1)

Publication Number Publication Date
CN103325120A true CN103325120A (en) 2013-09-25

Family

ID=49193843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102689033A Pending CN103325120A (en) 2013-06-30 2013-06-30 Rapid self-adaption binocular vision stereo matching method capable of supporting weight

Country Status (1)

Country Link
CN (1) CN103325120A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120163704A1 (en) * 2010-12-23 2012-06-28 Electronics And Telecommunications Research Institute Apparatus and method for stereo matching
CN102572485A (en) * 2012-02-02 2012-07-11 北京大学 Self-adaptive weighted stereo matching algorithm, stereo display and collecting device and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kuk-Jin Yoon et al., "Adaptive Support-Weight Approach for Correspondence Search," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 4, Apr. 2006, pp. 650-656. *
Tao Guan et al., "Performance enhancement of Adaptive Support-Weight approach by tuning parameters," 2012 IEEE Fifth International Conference on Advanced Computational Intelligence, Oct. 18, 2012, pp. 206-211, XP032331182, DOI: 10.1109/ICACI.2012.6463153. *
Zheng Gu et al., "Local stereo matching with adaptive support-weight, rank transform and disparity calibration," Pattern Recognition Letters, vol. 29, no. 9, Jul. 1, 2008, pp. 1230-1235, XP022663891, DOI: 10.1016/j.patrec.2008.01.032. *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104637043A (en) * 2013-11-08 2015-05-20 株式会社理光 Supporting pixel selection method and device and parallax determination method
CN104637043B (en) * 2013-11-08 2017-12-05 株式会社理光 Pixel selecting method, device, parallax value is supported to determine method
TWI566203B (en) * 2013-12-16 2017-01-11 財團法人工業技術研究院 Method and system for depth refinement and data aggregation
CN104915941A (en) * 2014-03-11 2015-09-16 株式会社理光 Method and apparatus for calculating parallax
CN104915941B (en) * 2014-03-11 2017-08-04 株式会社理光 The method and apparatus for calculating parallax
CN103971366A (en) * 2014-04-18 2014-08-06 天津大学 Stereoscopic matching method based on double-weight aggregation
CN104123727B (en) * 2014-07-26 2017-02-15 福州大学 Stereo matching method based on self-adaptation Gaussian weighting
CN104123727A (en) * 2014-07-26 2014-10-29 福州大学 Stereo matching method based on self-adaptation Gaussian weighting
CN104200453A (en) * 2014-09-15 2014-12-10 西安电子科技大学 Parallax image correcting method based on image segmentation and credibility
CN104200453B (en) * 2014-09-15 2017-01-25 西安电子科技大学 Parallax image correcting method based on image segmentation and credibility
CN104820991A (en) * 2015-05-15 2015-08-05 武汉大学 Multi-soft-constraint stereo matching method based on cost matrix
CN104820991B (en) * 2015-05-15 2017-10-03 武汉大学 Multi-soft-constraint stereo matching method based on a cost matrix
CN107025660B (en) * 2016-02-01 2020-07-10 北京三星通信技术研究有限公司 Method and device for determining image parallax of binocular dynamic vision sensor
CN107025660A (en) * 2016-02-01 2017-08-08 北京三星通信技术研究有限公司 Method and apparatus for determining image parallax of a binocular dynamic vision sensor
CN106156748A (en) * 2016-07-22 2016-11-23 浙江零跑科技有限公司 Traffic scene participant recognition method based on a vehicle-mounted binocular camera
CN106254850A (en) * 2016-08-23 2016-12-21 深圳市捷视飞通科技股份有限公司 Image matching method and device for dual-viewpoint stereoscopic video
CN108154529A (en) * 2018-01-04 2018-06-12 北京大学深圳研究生院 Stereo matching method and system for binocular images
CN108305269A (en) * 2018-01-04 2018-07-20 北京大学深圳研究生院 Image segmentation method and system for binocular images
CN108154529B (en) * 2018-01-04 2021-11-23 北京大学深圳研究生院 Stereo matching method and system for binocular images
CN108305269B (en) * 2018-01-04 2022-05-10 北京大学深圳研究生院 Image segmentation method and system for binocular image
CN108230273A (en) * 2018-01-05 2018-06-29 西南交通大学 Three-dimensional image processing method of an artificial compound eye camera based on geometric information
CN108230273B (en) * 2018-01-05 2020-04-07 西南交通大学 Three-dimensional image processing method of artificial compound eye camera based on geometric information
CN108381549A (en) * 2018-01-26 2018-08-10 广东三三智能科技有限公司 Rapid grasping method and device for a binocular-vision-guided robot, and storage medium
CN108381549B (en) * 2018-01-26 2021-12-14 广东三三智能科技有限公司 Binocular vision guide robot rapid grabbing method and device and storage medium
WO2020177061A1 (en) * 2019-03-04 2020-09-10 北京大学深圳研究生院 Binocular stereo vision matching method and system based on extremum verification

Similar Documents

Publication Publication Date Title
CN103325120A (en) Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN105744256B (en) Objective quality evaluation method for salient stereo images based on graph vision
CN101610425B (en) Method for evaluating stereo image quality and device
CN107301664A (en) Improved segmented stereo matching method based on a similarity measure function
Nalpantidis et al. Biologically and psychophysically inspired adaptive support weights algorithm for stereo correspondence
CN103226820B (en) Improved two-dimensional maximum entropy segmentation target detection algorithm for night-vision image fusion
CN103971366B (en) Stereo matching method based on double-weight aggregation
CN102999913A (en) Local stereo matching method based on reliable point propagation
CN104036502B (en) No-reference quality evaluation method for blur-distorted stereo images
CN104036493B (en) No-reference image quality evaluation method based on multifractal spectrum
CN106600632A (en) Improved matching cost aggregation stereo matching algorithm
CN103780895B (en) Three-dimensional video quality evaluation method
CN110084782A (en) Full reference image quality appraisement method based on saliency detection
CN109242834A (en) No-reference stereo image quality evaluation method based on convolutional neural networks
CN108460794B (en) Binocular three-dimensional infrared salient target detection method and system
CN104200453A (en) Parallax image correcting method based on image segmentation and credibility
CN105139401A (en) Depth credibility assessment method for depth map
CN111027415A (en) Vehicle detection method based on polarization image
CN105654493A (en) Improved method for optimizing optical affine-invariant binocular stereo matching cost and parallax
CN110096993A (en) Object detection apparatus and method for binocular stereo vision
CN111553296B (en) Binary neural network stereo vision matching method based on FPGA
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
CN103065320A (en) Synthetic aperture radar (SAR) image change detection method based on constant false alarm threshold value
CN114677479A (en) Natural landscape multi-view three-dimensional reconstruction method based on deep learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2013-09-25