CN103985128A - Three-dimensional matching method based on color intercorrelation and self-adaptive supporting weight - Google Patents

Three-dimensional matching method based on color intercorrelation and self-adaptive supporting weight

Info

Publication number
CN103985128A
Authority
CN
China
Prior art keywords
pixel
color
formula
image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410223156.6A
Other languages
Chinese (zh)
Other versions
CN103985128B (en)
Inventor
龚文彪
任建乐
陆恺立
刘琳
顾国华
钱惟贤
路东明
任侃
于雪莲
吕芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201410223156.6A priority Critical patent/CN103985128B/en
Publication of CN103985128A publication Critical patent/CN103985128A/en
Application granted granted Critical
Publication of CN103985128B publication Critical patent/CN103985128B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a stereo matching method based on color intercorrelation and adaptive support weight. The method defines a pixel color intercorrelation function and determines the weight of each pixel in the matching window from the color similarity, the Euclidean distance similarity and the color intercorrelation similarity, thereby obtaining the initial disparity values of the stereo matching; a three-step optimization is then applied to the initial disparity map, comprising a left-right consistency check, median filtering and histogram statistics. The stereo matching method based on color intercorrelation and adaptive support weight achieves a high average accuracy and strong robustness.

Description

Stereo matching method based on color intercorrelation and adaptive support weight
Technical field
The invention belongs to the technical field of three-dimensional image reconstruction and specifically relates to a stereo matching method based on color intercorrelation and adaptive support weight.
Background art
Binocular stereo matching is an important research direction of computer vision; it is a technique for obtaining the depth information of objects from a pair of left and right images. The stereo matching methods proposed by scholars at home and abroad can be divided into two broad classes: global stereo matching methods and local stereo matching methods. Global stereo matching methods obtain a dense disparity map by iteratively minimizing a global energy function; the disparity results of these methods have a high correct matching rate, but the methods are computationally complex, their running time is long, and they are difficult to implement. Local stereo matching methods have higher execution efficiency and a small computational load, are easy to implement, and can also obtain a high correct matching rate under a suitable matching cost relation.
Local stereo matching places the following requirements on the left and right matching windows. On the one hand, the matching window should be as large as possible so that it contains sufficient gray-level variation and therefore yields a more reliable matching relation; if the window is too small, the disparity computed from it will be inaccurate for images with uniform texture or repeated scene structure. On the other hand, the matching window should be as small as possible to avoid mismatches at pixels where the disparity is discontinuous. These two requirements are in conflict. Many scholars at home and abroad have studied the selection of the matching window size. K. Zhang uses a cross-skeleton structure and sets a threshold to limit the arm length, thereby obtaining the best matching window; Veksler proposes a point-wise adaptive selection method to obtain a suitable support window. Yoon et al. proposed the adaptive support-weight (ASW) window (see the paper "Adaptive Support-Weight Approach for Correspondence Search"); the stereo matching method based on this window selects a suitable matching cost relation according to the color similarity and Euclidean distance similarity between each pixel and the matching point in the window, and thus computes an accurate disparity value for each pixel. However, for disparity outliers in the matching window, matching only on the basis of a pixel's own color and distance similarity cannot yield the correct disparity value and easily causes misjudgment during matching.
Summary of the invention
The object of the invention is to propose a stereo matching method based on color intercorrelation and adaptive support weight that solves the problem that image pixels at depth discontinuities are easily mismatched, effectively improves the accuracy of local stereo matching, and offers strong robustness.
To solve the above technical problem, the invention provides a stereo matching method based on color intercorrelation and adaptive support weight whose steps are as follows:
Step 1: define the pixel color intercorrelation function of the left and right images to obtain the color intercorrelation component rin. The expression for rin is the same for the left and right images, as shown in formula (1):
rin = [rce1, rce2, rce3]   (1)
In formula (1), rce1, rce2 and rce3 are the three color intercorrelation functions of the left or right image, given by formulas (2), (3), (4):
rce1 = I_R(x, y) − I_G(x, y)   (2)
rce2 = I_G(x, y) − I_B(x, y)   (3)
rce3 = I_B(x, y) − I_R(x, y)   (4)
In formulas (2), (3), (4), I_R(x, y), I_G(x, y), I_B(x, y) are the pixel values of the R, G and B channel components of the pixel of the left or right image, and x, y are the coordinates of the current pixel in the row and column directions;
Step 2: establish the matching window weighting function shown in formula (5):
w(p, q) = exp(−Δc_pq/τ_c) · exp(−Δd_pq/τ_d) · exp(−Δr_pq/τ_r) = exp(−(Δc_pq/τ_c + Δd_pq/τ_d + Δr_pq/τ_r))   (5)
In formula (5), w(p, q) is the matching window weighting function, exp is the exponential function, p is the central pixel of the left-image matching window, q is a pixel other than the central pixel in the left-image matching window, Δc_pq is the RGB color similarity of pixels p and q, Δd_pq is the Euclidean distance similarity of pixels p and q, Δr_pq is the similarity of the color intercorrelation components rin of pixels p and q, τ_c controls the proportion of the weighting function contributed by the color similarity, τ_d the proportion contributed by the Euclidean distance similarity, and τ_r the proportion contributed by the color intercorrelation similarity;
In formula (5), the color similarity Δc_pq, the Euclidean distance similarity Δd_pq and the color intercorrelation similarity Δr_pq are computed as shown in formulas (6), (7), (8):
Δc_pq = ||p − q||_2 = sqrt((p_R − q_R)² + (p_G − q_G)² + (p_B − q_B)²)   (6)
Δd_pq = ||d_p − d_q||_2 = sqrt((x_p − x_q)² + (y_p − y_q)²)   (7)
Δr_pq = ||rin_p − rin_q||_2 = sqrt((rce1_p − rce1_q)² + (rce2_p − rce2_q)² + (rce3_p − rce3_q)²)   (8)
In formulas (6), (7), (8), p_R, p_G, p_B are the R, G, B pixel values at pixel p; q_R, q_G, q_B are the R, G, B pixel values at pixel q; (x_p, y_p) are the coordinates of pixel p and (x_q, y_q) the coordinates of pixel q; rin_p is the color intercorrelation component value at pixel p and rin_q the color intercorrelation component value at pixel q;
Step 3: compute the initial disparity values of the left-right stereo matching from the matching window weighting function w(p, q);
Step 4: apply a left-right consistency check, median filtering and histogram statistics to the initial disparity values in turn to obtain the final accurate disparity result.
Compared with the prior art, the significant advantages of the invention are: (1) the invention adds a color intercorrelation component to the traditional adaptive support-weight algorithm, which effectively improves the correct matching rate of the left and right images without affecting the running speed of the method; (2) the invention applies a three-step optimization to the initial disparity values, rejecting the wrong disparity points that may arise during matching and filling them with the surrounding correct disparity points, which further improves the correct matching rate of the image; (3) the invention overcomes the problem that conventional stereo matching algorithms obtain inaccurate disparities in regions with uniform texture, repeated scene structure or partial occlusion, significantly improving the accuracy of stereo matching, while its running speed remains comparable to that of traditional local stereo matching algorithms, which makes it feasible to apply the invention in real systems.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 shows the four international standard images used in the simulation experiments of the invention; (a), (b), (c), (d) are "Tsukuba", "Venus", "Teddy", "Cones" in turn.
Fig. 3 shows the ground-truth disparity maps from the standard image library for the four images in Fig. 2.
Fig. 4 shows the disparity maps obtained by processing the four standard images in Fig. 2 with the method of the invention.
Fig. 5 shows the disparity error maps obtained by comparing the four maps of Fig. 3 with the four maps of Fig. 4; black points indicate disparity errors.
Embodiment
With reference to Fig. 1, the stereo matching method of the invention based on color intercorrelation and adaptive support weight proceeds as follows:
Step 1: define the pixel color intercorrelation function of the left and right images to obtain the color intercorrelation component rin. The expression for rin is the same for the left and right images, as shown in formula (1):
rin = [rce1, rce2, rce3]   (1)
In formula (1), rce1, rce2 and rce3 are the three color intercorrelation functions of the left or right image, given by formulas (2), (3), (4):
rce1 = I_R(x, y) − I_G(x, y)   (2)
rce2 = I_G(x, y) − I_B(x, y)   (3)
rce3 = I_B(x, y) − I_R(x, y)   (4)
In formulas (2), (3), (4), I_R(x, y), I_G(x, y), I_B(x, y) are the pixel values of the R, G and B channel components of the pixel of the left or right image, and x, y are the coordinates of the current pixel in the row and column directions;
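As an illustration of formulas (1)-(4), the following NumPy sketch computes the color intercorrelation components rce1, rce2, rce3 for every pixel of an RGB image; the function name, array layout and float conversion are assumptions made for illustration and are not part of the patent.

```python
import numpy as np

def color_intercorrelation(img_rgb: np.ndarray) -> np.ndarray:
    """img_rgb: H x W x 3 array with channels ordered R, G, B.
    Returns an H x W x 3 array rin = [rce1, rce2, rce3] per formula (1)."""
    r = img_rgb[..., 0].astype(np.float64)
    g = img_rgb[..., 1].astype(np.float64)
    b = img_rgb[..., 2].astype(np.float64)
    rce1 = r - g   # formula (2): I_R - I_G
    rce2 = g - b   # formula (3): I_G - I_B
    rce3 = b - r   # formula (4): I_B - I_R
    return np.stack([rce1, rce2, rce3], axis=-1)
```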
Step 2: using the color intercorrelation component rin defined in Step 1 together with the color similarity and Euclidean distance similarity of the adaptive support-weight (ASW) algorithm, establish the matching window weighting function shown in formula (5):
w(p, q) = exp(−Δc_pq/τ_c) · exp(−Δd_pq/τ_d) · exp(−Δr_pq/τ_r) = exp(−(Δc_pq/τ_c + Δd_pq/τ_d + Δr_pq/τ_r))   (5)
In formula (5), w(p, q) is the matching window weighting function, exp is the exponential function, p is the central pixel of the left-image matching window, q is a pixel other than the central pixel in the left-image matching window, Δc_pq is the RGB color similarity of pixels p and q, Δd_pq is the Euclidean distance similarity of pixels p and q, Δr_pq is the similarity of the color intercorrelation components rin of pixels p and q, τ_c controls the proportion of the weighting function contributed by the color similarity, τ_d the proportion contributed by the Euclidean distance similarity, and τ_r the proportion contributed by the color intercorrelation similarity;
In formula (5), the color similarity Δc_pq, the Euclidean distance similarity Δd_pq and the color intercorrelation similarity Δr_pq are computed as shown in formulas (6), (7), (8):
Δc_pq = ||p − q||_2 = sqrt((p_R − q_R)² + (p_G − q_G)² + (p_B − q_B)²)   (6)
Δd_pq = ||d_p − d_q||_2 = sqrt((x_p − x_q)² + (y_p − y_q)²)   (7)
Δr_pq = ||rin_p − rin_q||_2 = sqrt((rce1_p − rce1_q)² + (rce2_p − rce2_q)² + (rce3_p − rce3_q)²)   (8)
In formulas (6), (7), (8), p_R, p_G, p_B are the R, G, B pixel values at pixel p; q_R, q_G, q_B are the R, G, B pixel values at pixel q; (x_p, y_p) are the coordinates of pixel p and (x_q, y_q) the coordinates of pixel q; rin_p is the color intercorrelation component value at pixel p and rin_q the color intercorrelation component value at pixel q;
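A minimal sketch of formulas (5)-(8) follows, computing the support weight w(p, q) for a single pair of pixels; the function signature and the values assigned to τ_c, τ_d, τ_r are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def support_weight(p_rgb, q_rgb, p_xy, q_xy, p_rin, q_rin,
                   tau_c=7.0, tau_d=36.0, tau_r=7.0):
    """p_rgb/q_rgb: (R, G, B) triples; p_xy/q_xy: (x, y) coordinates;
    p_rin/q_rin: intercorrelation triples from formulas (1)-(4)."""
    dc = np.linalg.norm(np.asarray(p_rgb, float) - np.asarray(q_rgb, float))  # formula (6)
    dd = np.linalg.norm(np.asarray(p_xy, float) - np.asarray(q_xy, float))    # formula (7)
    dr = np.linalg.norm(np.asarray(p_rin, float) - np.asarray(q_rin, float))  # formula (8)
    return np.exp(-(dc / tau_c + dd / tau_d + dr / tau_r))                    # formula (5)
```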
Step 3: compute the initial disparity values of the left-right stereo matching from the matching window weighting function w(p, q), where the matching cost function is as shown in formula (9):
E(p, p_d̄) = [ Σ_{q∈N_p, q_d̄∈N_{p_d̄}} w(p, q)·w(p_d̄, q_d̄)·e_m(q, q_d̄) ] / [ Σ_{q∈N_p, q_d̄∈N_{p_d̄}} w(p, q)·w(p_d̄, q_d̄) ]   (9)
In formula (9), E(p, p_d̄) is the matching cost function, N_p is the left-image matching window and N_{p_d̄} is the right-image matching window, p is the central pixel of the left-image matching window and q is a pixel other than the central pixel in the left-image matching window, p_d̄ is the central pixel of the right-image matching window and q_d̄ is a pixel other than the central pixel in the right-image matching window, w(p_d̄, q_d̄) is the matching weighting function of the right image, and e_m(q, q_d̄) is the matching value of the left and right images at pixels q and q_d̄, as shown in formula (10):
e_m(q, q_d̄) = | I_L(q) − I_R(q_d̄) |   (10)
In formula (10), I_L(q) is the pixel value of the left image at pixel q, and I_R(q_d̄) is the pixel value of the right image at pixel q_d̄;
Then the initial disparity value of pixel p is obtained by the winner-take-all (WTA) algorithm, computed as shown in formula (11):
d_p = arg min_{d∈D_p} E(p, p_d̄)   (11)
In formula (11), D_p = {d_min, ..., d_max}, where d_min is the minimum disparity value that may exist between the left and right matching images and d_max is the maximum disparity value that may exist between the left and right matching images;
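The following sketch illustrates formulas (9)-(11) for one pixel of the left image: support weights are computed for the left window and for the window shifted by the candidate disparity in the right image, the weighted matching cost is aggregated, and the disparity with the lowest cost is selected by winner-take-all. The window half-size, the τ values, the summation of the channel differences in formula (10) and the border handling are assumptions made for illustration, not details fixed by the patent.

```python
import numpy as np

def disparity_at(left, right, left_rin, right_rin, y, x, d_min, d_max,
                 half=16, tau_c=7.0, tau_d=36.0, tau_r=7.0):
    """left/right: H x W x 3 float RGB images; left_rin/right_rin: H x W x 3
    intercorrelation arrays from formulas (1)-(4). Returns the WTA disparity at (y, x)."""
    h, w, _ = left.shape
    ys = np.arange(max(0, y - half), min(h, y + half + 1))
    xs = np.arange(max(0, x - half), min(w, x + half + 1))
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    dd = np.hypot(yy - y, xx - x)            # formula (7); identical for the shifted right window

    def weights(img, rin, cols, cy, cx):
        dc = np.linalg.norm(img[yy, cols] - img[cy, cx], axis=-1)   # formula (6)
        dr = np.linalg.norm(rin[yy, cols] - rin[cy, cx], axis=-1)   # formula (8)
        return np.exp(-(dc / tau_c + dd / tau_d + dr / tau_r))      # formula (5)

    w_left = weights(left, left_rin, xx, y, x)
    best_d, best_cost = d_min, np.inf
    for d in range(d_min, d_max + 1):
        if xs[0] - d < 0:                                           # shifted window must stay inside the right image
            continue
        w_right = weights(right, right_rin, xx - d, y, x - d)
        e_m = np.abs(left[yy, xx] - right[yy, xx - d]).sum(axis=-1)       # formula (10), summed over R, G, B
        cost = (w_left * w_right * e_m).sum() / (w_left * w_right).sum()  # formula (9)
        if cost < best_cost:                                              # winner-take-all, formula (11)
            best_cost, best_d = cost, d
    return best_d
```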
Step 4: apply a three-step optimization to the initial disparity values to obtain the final accurate disparity result.
4.1 Left-right consistency check
Check whether the initial disparity value obtained in Step 3 satisfies the relation of formula (12); if it does, the disparity value computed at this matching point is considered correct and is retained, and if it does not, the disparity value computed at this matching point is considered wrong and is rejected:
D_L(x, y) = D_R(x, y − D_L(x, y))   (12)
In formula (12), x and y are the row and column coordinates of the image pixel, D_L(x, y) is the disparity value computed when searching from the left image toward the right image, and D_R(x, y) is the disparity value computed when searching from the right image toward the left image;
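A short sketch of the consistency check of formula (12) follows; marking rejected disparities with the value −1 is an assumption made for illustration.

```python
import numpy as np

def left_right_check(d_left: np.ndarray, d_right: np.ndarray) -> np.ndarray:
    """d_left: disparity map from the left-to-right search; d_right: from the right-to-left search."""
    h, w = d_left.shape
    out = d_left.astype(np.int32)
    for row in range(h):
        for col in range(w):
            d = int(d_left[row, col])
            # formula (12): keep the disparity only if the right map agrees at the matched column
            if col - d < 0 or int(d_right[row, col - d]) != d:
                out[row, col] = -1   # reject inconsistent disparity
    return out
```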
4.2 Median filtering
After the wrong disparity values are rejected in Step 4.1, the vacancies left by the rejected mismatched points must be filled with correct disparity values. The invention uses an N × N median filtering template (N an arbitrary constant) to fill the wrong disparity values and obtain the correct disparity value at each such position; experimental results show that this effectively improves the correct matching rate of the stereo matching;
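A sketch of this filling step, assuming rejected disparities are marked −1 and N = 5; each hole is replaced by the median of the valid disparities inside the N × N template around it.

```python
import numpy as np

def median_fill(disp: np.ndarray, n: int = 5) -> np.ndarray:
    """disp: disparity map with rejected points marked -1. Returns a filled copy."""
    h, w = disp.shape
    half = n // 2
    out = disp.copy()
    for y in range(h):
        for x in range(w):
            if disp[y, x] != -1:
                continue
            patch = disp[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
            valid = patch[patch != -1]
            if valid.size:
                out[y, x] = int(np.median(valid))   # fill the hole with the local median
    return out
```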
4.3 Further optimization of the disparity map from Step 4.2 by histogram statistics
To further optimize the disparity map and reject wrong disparity points, histogram statistics are applied to every disparity point of the disparity map obtained after the median filtering: an m × n neighbourhood window (m, n arbitrary constants) is built centred on each point of the disparity map, a histogram of the pixels within this neighbourhood window is computed, and the disparity value of the central point is replaced by the disparity value with the largest histogram count.
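A sketch of this histogram refinement; the window size m = n = 9 is an assumption made for illustration.

```python
import numpy as np

def histogram_refine(disp: np.ndarray, m: int = 9, n: int = 9) -> np.ndarray:
    """Replace every disparity by the most frequent disparity (histogram peak) in its m x n neighbourhood."""
    h, w = disp.shape
    hm, hn = m // 2, n // 2
    out = disp.copy()
    for y in range(h):
        for x in range(w):
            patch = disp[max(0, y - hm):y + hm + 1, max(0, x - hn):x + hn + 1]
            vals, counts = np.unique(patch, return_counts=True)
            out[y, x] = vals[np.argmax(counts)]   # replace the centre by the histogram peak
    return out
```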
The above three-step optimization resolves the mismatched points that easily arise in situations such as uniform texture, repeated scene structure and image occlusion, and thereby improves the accuracy of the final disparity result.
The beneficial effect of the method of the invention is further illustrated by the following simulation experiments:
In the simulation experiments, the method of the invention was applied to the four international standard images "Tsukuba", "Venus", "Teddy" and "Cones" shown in Fig. 2, where (a), (b), (c), (d) are "Tsukuba", "Venus", "Teddy", "Cones" in turn. Fig. 3 shows the ground-truth disparity maps of the four images in Fig. 2 from the standard image library; (a), (b), (c), (d) in Fig. 3 correspond in turn to the ground-truth disparity maps of "Tsukuba", "Venus", "Teddy", "Cones" in Fig. 2. Fig. 4 shows the disparity maps obtained by processing the four standard images in Fig. 2 with the method of the invention; (a), (b), (c), (d) in Fig. 4 correspond in turn to the resulting disparity maps of "Tsukuba", "Venus", "Teddy", "Cones". In Fig. 5, (a), (b), (c), (d) are the disparity error maps obtained by comparing maps (a), (b), (c), (d) of Fig. 3 with maps (a), (b), (c), (d) of Fig. 4; black points indicate disparity errors, i.e. points where the computed disparity differs from the ground-truth disparity by more than one pixel. The experimental results of Figs. 2-5 show intuitively that the method of the invention largely resolves the mismatched points produced in regions with uniform texture, repeated scene structure and occlusion; the invention therefore improves the matching accuracy of the left and right images.
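The error criterion used in Fig. 5 and Table 1 (a pixel counts as a mismatch when its computed disparity differs from the ground-truth disparity by more than one pixel) can be evaluated as in the following sketch; the optional region mask used for the "Non", "Disc" and "All" statistics is assumed to be a boolean array supplied by the benchmark.

```python
import numpy as np

def error_rate(disp: np.ndarray, gt: np.ndarray, mask=None) -> float:
    """Fraction of pixels whose disparity differs from the ground truth by more than one pixel."""
    bad = np.abs(disp.astype(float) - gt.astype(float)) > 1.0
    if mask is not None:
        bad = bad[mask]          # restrict to a region such as non-occluded or discontinuity pixels
    return float(bad.mean())
```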
The simulation experiments also applied the method of the invention and other existing local stereo matching algorithms, namely the adaptive-weight stereo matching algorithm based on virtual windows (VSW), the local stereo matching method based on gradient similarity (GradAdaptWgt), the adaptive support-weight stereo matching method (ASW) and the cross-skeleton-based stereo matching algorithm (Cross-Based), to the four international standard images "Tsukuba", "Venus", "Teddy" and "Cones" shown in Fig. 2, and the mismatch rates of the method of the invention and of the other existing local stereo matching algorithms were tabulated; the statistics are shown in Table 1. In Table 1, "Non" denotes the mismatch rate in non-occluded regions, "Disc" the mismatch rate in depth-discontinuity regions, and "All" the mismatch rate over all regions (a point is counted as a wrong disparity point when the computed disparity at that position differs from the ground-truth disparity by more than one pixel); the "Avg.rank" column gives the average mismatch rate over the above three kinds of regions. Table 1 shows that, in the tests on the four international standard images over the "Non" non-occluded regions, the "Disc" depth-discontinuity regions and the "All" whole-image regions, the mismatch rate of the method of the invention is not the best in every individual region, but its overall mismatch rate is better than that of all the other existing local stereo matching algorithms.
Table 1

Claims (6)

1. A stereo matching method based on color intercorrelation and adaptive support weight, characterized in that its steps are as follows:
Step 1: define the pixel color intercorrelation function of the left and right images to obtain the color intercorrelation component rin. The expression for rin is the same for the left and right images, as shown in formula (1):
rin = [rce1, rce2, rce3]   (1)
In formula (1), rce1, rce2 and rce3 are the three color intercorrelation functions of the left or right image, given by formulas (2), (3), (4):
rce1 = I_R(x, y) − I_G(x, y)   (2)
rce2 = I_G(x, y) − I_B(x, y)   (3)
rce3 = I_B(x, y) − I_R(x, y)   (4)
In formulas (2), (3), (4), I_R(x, y), I_G(x, y), I_B(x, y) are the pixel values of the R, G and B channel components of the pixel of the left or right image, and x, y are the coordinates of the current pixel in the row and column directions;
Step 2: establish the matching window weighting function shown in formula (5):
w(p, q) = exp(−Δc_pq/τ_c) · exp(−Δd_pq/τ_d) · exp(−Δr_pq/τ_r) = exp(−(Δc_pq/τ_c + Δd_pq/τ_d + Δr_pq/τ_r))   (5)
In formula (5), w(p, q) is the matching window weighting function, exp is the exponential function, p is the central pixel of the left-image matching window, q is a pixel other than the central pixel in the left-image matching window, Δc_pq is the RGB color similarity of pixels p and q, Δd_pq is the Euclidean distance similarity of pixels p and q, Δr_pq is the similarity of the color intercorrelation components rin of pixels p and q, τ_c controls the proportion of the weighting function contributed by the color similarity, τ_d the proportion contributed by the Euclidean distance similarity, and τ_r the proportion contributed by the color intercorrelation similarity;
In formula (5), the color similarity Δc_pq, the Euclidean distance similarity Δd_pq and the color intercorrelation similarity Δr_pq are computed as shown in formulas (6), (7), (8):
Δc_pq = ||p − q||_2 = sqrt((p_R − q_R)² + (p_G − q_G)² + (p_B − q_B)²)   (6)
Δd_pq = ||d_p − d_q||_2 = sqrt((x_p − x_q)² + (y_p − y_q)²)   (7)
Δr_pq = ||rin_p − rin_q||_2 = sqrt((rce1_p − rce1_q)² + (rce2_p − rce2_q)² + (rce3_p − rce3_q)²)   (8)
In formulas (6), (7), (8), p_R, p_G, p_B are the R, G, B pixel values at pixel p; q_R, q_G, q_B are the R, G, B pixel values at pixel q; (x_p, y_p) are the coordinates of pixel p and (x_q, y_q) the coordinates of pixel q; rin_p is the color intercorrelation component value at pixel p and rin_q the color intercorrelation component value at pixel q;
Step 3: compute the initial disparity values of the left-right stereo matching from the matching window weighting function w(p, q);
Step 4: apply a left-right consistency check, median filtering and histogram statistics to the initial disparity values in turn to obtain the final accurate disparity result.
2. The stereo matching method based on color intercorrelation and adaptive support weight according to claim 1, characterized in that the matching cost function used in Step 3 is as shown in formula (9):
E(p, p_d̄) = [ Σ_{q∈N_p, q_d̄∈N_{p_d̄}} w(p, q)·w(p_d̄, q_d̄)·e_m(q, q_d̄) ] / [ Σ_{q∈N_p, q_d̄∈N_{p_d̄}} w(p, q)·w(p_d̄, q_d̄) ]   (9)
In formula (9), E(p, p_d̄) is the matching cost function, N_p is the left-image matching window and N_{p_d̄} is the right-image matching window, p is the central pixel of the left-image matching window and q is a pixel other than the central pixel in the left-image matching window, p_d̄ is the central pixel of the right-image matching window and q_d̄ is a pixel other than the central pixel in the right-image matching window, w(p_d̄, q_d̄) is the matching weighting function of the right image, and e_m(q, q_d̄) is the matching value of the left and right images at pixels q and q_d̄, as shown in formula (10):
e_m(q, q_d̄) = | I_L(q) − I_R(q_d̄) |   (10)
In formula (10), I_L(q) is the pixel value of the left image at pixel q, and I_R(q_d̄) is the pixel value of the right image at pixel q_d̄.
3. The stereo matching method based on color intercorrelation and adaptive support weight according to claim 1, characterized in that in Step 3 the initial disparity value of pixel p is obtained by the winner-take-all (WTA) algorithm, computed as shown in formula (11):
d_p = arg min_{d∈D_p} E(p, p_d̄)   (11)
In formula (11), D_p = {d_min, ..., d_max}, where d_min is the minimum disparity value that may exist between the left and right matching images and d_max is the maximum disparity value that may exist between the left and right matching images.
4. The stereo matching method based on color intercorrelation and adaptive support weight according to claim 1, characterized in that the process of the left-right consistency check in Step 4 is:
checking whether the initial disparity value obtained in Step 3 satisfies the relation of formula (12); if it does, the disparity value computed at this matching point is considered correct and is retained, and if it does not, the disparity value computed at this matching point is considered wrong and is rejected:
D_L(x, y) = D_R(x, y − D_L(x, y))   (12)
In formula (12), x and y are the row and column coordinates of the image pixel, D_L(x, y) is the disparity value computed when searching from the left image toward the right image, and D_R(x, y) is the disparity value computed when searching from the right image toward the left image.
5. The stereo matching method based on color intercorrelation and adaptive support weight according to claim 4, characterized in that the process of the median filtering in Step 4 is:
an N × N median filtering template is used to fill the wrong disparity values found by the left-right consistency check, obtaining the correct disparity value at each such position, N being an arbitrary constant.
6. The stereo matching method based on color intercorrelation and adaptive support weight according to claim 5, characterized in that the process of the histogram statistics in Step 4 is:
for each point of the disparity map obtained after the median filtering, an m × n neighbourhood window is built centred on that point, a histogram of the pixels within the neighbourhood window is computed, and the disparity value of the central point is replaced by the disparity value with the largest histogram count.
CN201410223156.6A 2014-05-23 2014-05-23 Stereo matching method based on color intercorrelation and adaptive support weight Expired - Fee Related CN103985128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410223156.6A CN103985128B (en) 2014-05-23 2014-05-23 Stereo matching method based on color intercorrelation and adaptive support weight

Publications (2)

Publication Number Publication Date
CN103985128A true CN103985128A (en) 2014-08-13
CN103985128B CN103985128B (en) 2017-03-15

Family

ID=51277086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410223156.6A Expired - Fee Related CN103985128B (en) 2014-05-23 2014-05-23 Stereo matching method based on color intercorrelation and adaptive support weight

Country Status (1)

Country Link
CN (1) CN103985128B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662695A (en) * 2009-09-24 2010-03-03 清华大学 Method and device for acquiring virtual viewport
CN103310421A (en) * 2013-06-27 2013-09-18 清华大学深圳研究生院 Rapid stereo matching method and disparity map obtaining method both aiming at high-definition image pair
CN103646396A (en) * 2013-11-29 2014-03-19 清华大学深圳研究生院 Matching cost algorithm of binocular stereo matching algorithm, and non-local stereo matching algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KUK-JIN YOON, ET AL.: "Adaptive Support-Weight Approach for Correspondence Search", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
TAO GUAN, ET AL.: "Performance Enhancement of Adaptive Support-Weight Approach by Tuning Parameters", 《2012 IEEE FIFTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE(ICACI)》 *
ZHENG GU, ET AL.: "Local stereo matching with adaptive support-weight, rank transform and disparity calibration", 《PATTERN RECOGNITION LETTERS》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616304A (en) * 2015-02-11 2015-05-13 南京理工大学 Self-adapting support weight stereo matching method based on field programmable gate array (FPGA)
CN105807786A (en) * 2016-03-04 2016-07-27 深圳市道通智能航空技术有限公司 UAV automatic obstacle avoidance method and system
CN108876841A (en) * 2017-07-25 2018-11-23 成都通甲优博科技有限责任公司 The method and system of interpolation in a kind of disparity map parallax refinement
CN108682026A (en) * 2018-03-22 2018-10-19 辽宁工业大学 A kind of binocular vision solid matching method based on the fusion of more Matching units
CN108682026B (en) * 2018-03-22 2021-08-06 江大白 Binocular vision stereo matching method based on multi-matching element fusion
CN108742495A (en) * 2018-03-26 2018-11-06 天津大学 A kind of three-dimensional stereo laparoscope system and solid matching method for medical field
CN108765486A (en) * 2018-05-17 2018-11-06 长春理工大学 Based on sparse piece of aggregation strategy method of relevant Stereo matching in color
CN109993781A (en) * 2019-03-28 2019-07-09 北京清微智能科技有限公司 Based on the matched anaglyph generation method of binocular stereo vision and system
CN109993781B (en) * 2019-03-28 2021-09-03 北京清微智能科技有限公司 Parallax image generation method and system based on binocular stereo vision matching
CN116188558A (en) * 2023-04-27 2023-05-30 华北理工大学 Stereo photogrammetry method based on binocular vision
CN116188558B (en) * 2023-04-27 2023-07-11 华北理工大学 Stereo photogrammetry method based on binocular vision

Also Published As

Publication number Publication date
CN103985128B (en) 2017-03-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170315

Termination date: 20200523