CN108682026A - Binocular vision stereo matching method based on multi-matching element fusion - Google Patents

Binocular vision stereo matching method based on multi-matching element fusion

Info

Publication number
CN108682026A
CN108682026A (application CN201810249294.XA)
Authority
CN
China
Prior art keywords
pixel
window
matching
formula
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810249294.XA
Other languages
Chinese (zh)
Other versions
CN108682026B (en)
Inventor
孙福明
杜仁鹏
蔡希彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Applied Technology Co Ltd
Original Assignee
Liaoning University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning University of Technology filed Critical Liaoning University of Technology
Priority to CN201810249294.XA
Publication of CN108682026A
Application granted
Publication of CN108682026B
Current legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images using feature-based methods
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

A binocular vision stereo matching method based on multi-matching-element fusion comprises a computer, two cameras, and a stereo matching procedure that fuses multiple matching elements. The cameras are ZED binocular cameras. The stereo matching procedure comprises three stages: an initial matching cost stage, a cost function aggregation stage, and a disparity post-processing stage. In the initial matching cost stage, a cost function blending a color element and a gradient element is designed and adaptively tuned through the Kalman coefficient α. In the cost function aggregation stage, an adaptive matching window is designed from RGB color and distance relations, and aggregation is performed through the correlation between the reference image and the cost function within the window. The final stage performs disparity post-processing through an LRC left-right consistency check and a sub-pixel-level adaptive-weight median filter. Compared with the classical adaptive weight algorithm, the present invention improves the accuracy and real-time performance of the algorithm and shows higher robustness in complex regions.

Description

Binocular vision stereo matching method based on multi-matching-element fusion
Technical field
The present invention relates to a binocular vision stereo matching method based on multi-matching-element fusion, which can be widely applied in technical fields such as driverless vehicles and robot navigation.
Background art
As the core technology of binocular vision, stereo matching is widely used in fields such as driverless vehicles and robot navigation. Its basic principle is to capture two-dimensional images of a scene with a binocular camera and obtain the disparity of points to be matched with a matching algorithm, from which the depth information of the scene is obtained. Stereo matching algorithms have received extensive attention from scholars because of their high accuracy. However, the complexity of texture in natural scenes and the discontinuity of scene depth have always restricted the practical application of stereo matching algorithms.
Current mainstream stereo matching algorithms can be divided into global stereo matching algorithms and local stereo matching algorithms. Because global stereo matching methods involve energy-function operations, their computational complexity is high and their efficiency low, making it difficult to meet practical demands. Local stereo matching methods operate only on local data terms in the cost function aggregation stage, so their real-time performance is high but their accuracy relatively low, until Yoon proposed the classical adaptive support weight (ASW) algorithm, whose accuracy was a qualitative improvement; however, its window is fixed and cannot reflect the features and texture information of the image.
According to the matching element used, local stereo matching algorithms can be subdivided into region-based, feature-based and phase-based stereo matching methods. Because the adaptive support weight method uses a single matching element and a fixed window, its accuracy in complex-textured regions is relatively low, and its complicated weight computation makes its real-time performance low.
To address these problems of the classical algorithm, some scholars describe the initial matching cost of the image with several matching elements, such as gradient and/or color, which further improves the accuracy of the algorithm. However, the complicated weight computation is not abandoned, so the real-time performance of these algorithms remains low and their robustness weak.
Lin et al. fit the Gaussian weight function of the original algorithm with a linear function, down-sample the original image to obtain Gaussian pyramid samples, and compute the cost aggregation function with a hierarchical clustering algorithm, improving the accuracy of the algorithm while also improving its real-time performance. However, limited by the number of sampling levels, the sampled images are too blurred, so the robustness of the algorithm is poor.
Researchers have long studied adaptive windows: by using the registration of the adaptive windows of the reference image and the image to be matched, clearly wrong disparity values are discarded, and the complicated weight operation in the cost aggregation stage is abandoned. However, because the correlation between the reference image and the matching cost function is not considered, the accuracy of such algorithms is no better than the classical algorithm.
In view of the above problems, and to improve the robustness of the algorithm in real-scene applications, the color matching element and the gradient matching element are blended here in the initial matching cost stage, and the proportional coefficient α between the color and gradient matching elements is adaptively tuned on the Cones image of the Middlebury platform, which approximates natural-scene conditions. In the cost function aggregation stage, to overcome the low accuracy caused by the fixed window of conventional methods under complex texture, the central pixel is first expanded according to the color information and spatial distance between pixels; the correlation between the initial matching cost function and the reference-image pixels then replaces the traditional complicated energy function and weight operation for cost aggregation, which not only raises the accuracy of the algorithm but also reduces its time complexity. Finally, an LRC left-right consistency check and a sub-pixel-level adaptive-weight median filtering operation are applied to the resulting disparity map, improving the accuracy of the algorithm once more.
Summary of the invention
In view of the problems in the prior art, the purpose of the present invention is to provide a binocular vision stereo matching method based on multi-matching-element fusion. The average mismatch rate of the disparity map obtained by the method is greatly improved, the image is clear and accurate, and the precision of the running trajectory of a robot or driverless machine is greatly improved.
The technical solution adopted is as follows:
A binocular vision stereo matching method based on multi-matching-element fusion comprises a computer, two cameras, and a stereo matching procedure fusing multiple matching elements, and is characterized in that:
The two cameras are ZED binocular cameras produced by Stereolabs, and the ZED binocular cameras are set up as follows:
1) Image acquisition:
The computer is prepared by installing the ZED SDK and CUDA and is connected through a USB 3.0 interface. In MATLAB, the ZED binocular camera is linked with the webcam function, and images are acquired with the snapshot function;
2) Camera calibration:
Camera calibration aims to obtain accurate intrinsic and extrinsic camera parameters. The intrinsic parameters mainly include the focal lengths of the left and right lenses and the baseline distance; the extrinsic parameters mainly include the rotation matrices of the two cameras relative to the world coordinate system and the relative translation matrix between the left and right cameras. The default intrinsic and extrinsic parameters of the cameras are taken from the official manual;
3) Image parameter setting:
Epipolar rectification is performed with the calibration parameters so that the acquired left and right images satisfy the epipolar constraint, and the parameters are reset through the ZED Explorer.exe plug-in embedded in the ZED SDK;
4) Stereo matching:
Stereo matching is the core of the binocular vision system. Its purpose is to match the imaging points of the acquired left and right images, obtain disparity values from the matched points, and obtain the depth information of the scene.
The ZED binocular camera not only captures high-quality images but also has the advantages of small size and low power consumption. In particular, the embedded ZED SDK function library and CUDA parallel processing greatly reduce the processing time of the system. The ZED binocular camera can be mounted on a robot or driverless machine. The real scene captured by the binocular camera is processed by the stereo matching with multi-matching-element fusion to reproduce the real scene; a computer installed on the robot or driverless machine then issues navigation instructions to the control and drive system, or an external computer connected to it by LAN, Ethernet or cable performs the processing and issues the navigation instructions to the control and drive system. The precision of the running trajectory of the robot or driverless machine is thereby greatly improved.
The stereo matching module with multi-matching-element fusion comprises the following processes:
The present invention uses an improved ASW stereo matching method.
The known ASW stereo matching algorithm is as follows:
In stereo matching algorithms, the two images are generally assumed to satisfy the epipolar constraint, i.e. corresponding matching points of the left and right images lie on the same row of the two images. The core of the ASW algorithm is that, when support weights are used to measure the similarity between image pixels, the support of neighboring pixels is valid only when they come from the same depth as the pixel to be matched, i.e. share its disparity. Therefore, the support weight w of a neighboring pixel is proportional to the window disparity probability Pr, as shown in formula (1):
w(p, q) ∝ Pr(d_p = d_q)    (1)
where p is the pixel to be matched, q is any other pixel in the window, and d is the sought disparity. w(p, q) is related to the color and distance of the image, as shown in formula (2):
w(p, q) = k·f(Δc_pq, Δg_pq)    (2)
where Δc_pq and Δg_pq respectively denote the distances between the two points p and q in the LAB color space and in geometric space, k is a proportional coefficient whose concrete value is obtained experimentally, and f is a Laplacian kernel function; the two factors are mutually independent, as shown in formula (3):
f(Δc_pq, Δg_pq) = f(Δc_pq)·f(Δg_pq)    (3)
where Δc_pq and Δg_pq are computed as in formulas (4) and (5), the Euclidean distances of the classical algorithm:
Δc_pq = √((L_p − L_q)² + (a_p − a_q)² + (b_p − b_q)²)    (4)
Δg_pq = √((p_x − q_x)² + (p_y − q_y)²)    (5)
c_p = (L_p, a_p, b_p) and c_q = (L_q, a_q, b_q) are the three-channel chromaticity values of the Lab color space. The L component of the Lab color space represents the lightness of a pixel, with value range [0, 100], running from black to pure white; a represents the range from red to green, with value range [127, −128]; b represents the range from yellow to blue, with value range [127, −128]. (p_x, p_y) and (q_x, q_y) are the coordinate values in geometric space. The grouping strength is defined with the Laplacian kernel, as shown in formulas (6) and (7):
f(Δc_pq) = exp(−Δc_pq / γ_c)    (6)
f(Δg_pq) = exp(−Δg_pq / γ_p)    (7)
γ_c and γ_p are obtained experimentally; here γ_c = 7 and γ_p = 36 are taken (the value is generally related to the window size). Cost aggregation is then performed with formula (8):
E(p, p̄_d) = Σ_{q∈N_p} w(p, q)·w(p̄_d, q̄_d)·e(q, q̄_d) / Σ_{q∈N_p} w(p, q)·w(p̄_d, q̄_d)    (8)
where the initial matching cost e(q, q̄_d) is as shown in formula (9):
e(q, q̄_d) = |I(q) − I(q̄_d)|    (9)
I(q) and I(q̄_d) are the gray values of the two pixels at disparity d in the fixed windows of the reference image and the image to be matched. The final disparity map is determined by the WTA (Winner-Takes-All) method.
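For illustration, a minimal NumPy sketch of the classical ASW support weights of formulas (2)–(7) follows. It is not part of the patent text: the window radius is an illustrative assumption, γ_c = 7 and γ_p = 36 follow the values quoted above, and the proportionality constant k of formula (2) is dropped because it cancels in the normalized aggregation of formula (8).

```python
import numpy as np

def asw_support_weights(lab, cy, cx, radius=16, gamma_c=7.0, gamma_p=36.0):
    """Support weights w(p, q) of the classical ASW algorithm.

    lab    : H x W x 3 image in the Lab color space
    cy, cx : coordinates of the window-center pixel p (window assumed in-bounds)
    Returns a (2*radius+1) x (2*radius+1) patch of weights.
    """
    y0, y1 = cy - radius, cy + radius + 1
    x0, x1 = cx - radius, cx + radius + 1
    patch = lab[y0:y1, x0:x1].astype(np.float64)

    # Delta c_pq: Euclidean distance to the center pixel in Lab space (formula (4))
    dc = np.linalg.norm(patch - lab[cy, cx], axis=2)

    # Delta g_pq: Euclidean distance in image coordinates (formula (5))
    ys, xs = np.mgrid[y0:y1, x0:x1]
    dg = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)

    # Independent Laplacian kernels (formulas (3), (6), (7))
    return np.exp(-dc / gamma_c) * np.exp(-dg / gamma_p)
```

In the aggregation of formula (8), the weight patch of the reference pixel p and that of its candidate match p̄_d jointly weight the per-pixel cost e(q, q̄_d), and the weighted sum is normalized; WTA then keeps the disparity with the lowest aggregated cost.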
The improved ASW matching process used by the present invention mainly comprises four stages: left and right reference image read-in, left and right initial matching cost, left and right cost function aggregation, and disparity post-processing. The disparity post-processing mainly comprises the LRC left-right consistency check and a filtering operation. Specifically:
1. Initial matching cost calculation:
The ASW algorithm uses the grayscale information of the image as the element for matching cost calculation. The present invention sets truncation thresholds on the gradient element and on the mean of the R, G, B three-channel color element, fuses the R, G, B color and gradient information of a pixel following the idea of Kalman filtering, and adaptively adjusts the fusion through the control coefficient α. The specific process is as follows:
(1) Set the color and gradient thresholds t1, t2 and compute the initial costs e_g, e_c, as shown in formulas (10) and (11).
(2) Adaptively adjust the proportional coefficient and compute the final initial matching cost, as shown in formula (12).
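As a sketch of this stage (not part of the patent text): formulas (10)–(12) are not reproduced above, so the truncated absolute-difference forms and the α-blend below are assumptions consistent with the description; the values follow the experiments section (gradient threshold t1 = 2, color threshold t2 = 7, Kalman coefficient α = 0.11).

```python
import numpy as np

def initial_matching_cost(left, right, d, t_grad=2.0, t_color=7.0, alpha=0.11):
    """Fused color + gradient initial matching cost at disparity d.

    left, right : H x W x 3 RGB images (float arrays)
    Assumed forms: e_c = min(mean |I_L - I_R| over R,G,B, t_color)  # (10)
                   e_g = min(|grad I_L - grad I_R|, t_grad)         # (11)
                   cost = alpha * e_c + (1 - alpha) * e_g           # (12)
    """
    shifted = np.roll(right, d, axis=1)      # right-image pixel at disparity d

    # Truncated mean absolute color difference over the three channels
    e_c = np.minimum(np.abs(left - shifted).mean(axis=2), t_color)

    # Truncated absolute difference of horizontal gradients
    g_l = np.gradient(left.mean(axis=2), axis=1)
    g_r = np.gradient(shifted.mean(axis=2), axis=1)
    e_g = np.minimum(np.abs(g_l - g_r), t_grad)

    # Adaptive blend through the Kalman coefficient alpha
    return alpha * e_c + (1.0 - alpha) * e_g
```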
2. Improved adaptive window expansion algorithm:
The traditional ASW matching algorithm uses a fixed window, so its robustness is poor in regions with complex texture. Here an adaptive window is built from the color and spatial distance between pixels. Given a central pixel p(x, y) to be matched, its neighboring pixels in the x and y directions are p(x−1, y), p(x+1, y) and p(x, y−1), p(x, y+1). Unlike the traditional adaptive window extension, which expands pixels with the grayscale of the central pixel, the R, G, B values of the central point are used here as the expansion element: the window is expanded only when the neighboring pixel and the central pixel satisfy condition (13) in all three channels simultaneously.
I_{r,g,b}(x, y) − I_{r,g,b}(x−1, y) < t    (13)
t is a preset color threshold, t ∈ (0, 1). When discontinuous texture in the image causes pixel jumps within the same region, it is difficult for all three channels of the neighboring pixel to satisfy formula (13) simultaneously. Based on this property, the traditional fixed window is improved here. While a neighboring pixel satisfies formula (13) the window expands adaptively; but if the scene contains repeated texture, the window may grow too large and the cost aggregation becomes too expensive, which conflicts with the real-time requirement of the algorithm. The arm length of the adaptive window is therefore truncated according to the geometric property of the image: the window arm is cut off when formula (14) is satisfied.
where p(x), p(y) are the horizontal and vertical coordinates of the central pixel and q(x), q(y) are the coordinates of the neighboring pixel. From experiments on the four test images Tsukuba, Teddy, Cones and Venus of the Middlebury platform, the minimum arm length is set to L_min = 5 and the threshold to L_max = 11.
The adaptive window is shown in Fig. 3. First, the central pixel I(x, y) is expanded laterally; if the final window arm is shorter than the minimum arm length, the minimum arm length L_min replaces the raw length. During window expansion, the window is cut off when the spatial distance between the neighboring pixel and the central pixel exceeds the truncation value. The final result, shown in Fig. 4, consists of four arms of different lengths L_u, L_d, L_l, L_r extending up, down, left and right from the central point. The row and column sizes of the adaptive window are given by formulas (15) and (16):
Rows = L_u + L_d    (15)
Cols = L_l + L_r    (16)
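A sketch of the arm growth just described (not part of the patent text): formula (14)'s exact cutoff test is not reproduced above, so the simple coordinate-distance cutoff at L_max is an assumption; t, L_min and L_max take the values quoted in the text.

```python
import numpy as np

def arm_length(img, y, x, dy, dx, t=0.1, l_min=5, l_max=11):
    """Grow one window arm from the center (y, x) in direction (dy, dx).

    img : H x W x 3 RGB image scaled to [0, 1], since t lies in (0, 1).
    Expansion continues while all three channels of the neighbor stay
    within t of the center pixel (formula (13)); the arm is truncated at
    l_max (assumed reading of formula (14)) and never falls below l_min.
    """
    h, w, _ = img.shape
    center = img[y, x].astype(np.float64)
    length = 0
    while length < l_max:
        ny, nx = y + dy * (length + 1), x + dx * (length + 1)
        if not (0 <= ny < h and 0 <= nx < w):
            break
        if np.any(np.abs(img[ny, nx] - center) >= t):  # formula (13) violated
            break
        length += 1
    return max(length, l_min)

def adaptive_window(img, y, x):
    """Four arms L_u, L_d, L_l, L_r; Rows and Cols per formulas (15)-(16)."""
    l_u = arm_length(img, y, x, -1, 0)
    l_d = arm_length(img, y, x, +1, 0)
    l_l = arm_length(img, y, x, 0, -1)
    l_r = arm_length(img, y, x, 0, +1)
    return l_u + l_d, l_l + l_r  # Rows = L_u + L_d, Cols = L_l + L_r
```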
3. Matching cost aggregation algorithm:
After the initial matching cost is computed, cost function aggregation is performed, unlike the complicated weight computation of the classical algorithm. Using the correlation between the cost function and the reference image and the correlation of the reference image with itself, the variance within the reference image's adaptive window and the covariance of the reference image with the cost function are computed to obtain a correlation function; the correlation term is then subtracted from the initial matching cost function before cost aggregation, which markedly raises the accuracy and real-time performance of the algorithm. The specific process is as follows:
First, compute the mean of the convolution of the R, G, B three-channel pixel values of the pixels to be matched in the matching window with the matching cost function, as shown in formula (17).
where p is the pixel coordinate of the window center, I(x, y) ranges over each pixel coordinate in the window including the center, the window is the adaptive window derived above, and n is the total number of pixels in the window. After the mean of the convolution for each channel is computed, its covariance function is computed as in formula (18):
In the above formula, m_c and m_e are the means of the three-channel elements and of the matching cost function within the adaptive window, given by formulas (19) and (20):
Then the variance matrix θ of the three-channel elements within the reference image's adaptive window is computed, as detailed in formula (21).
where each element of the matrix θ is computed as in formula (22):
From formulas (17)–(20) the coefficient matrix is obtained as in formula (23):
Since v_ce (c ∈ {r, g, b}) is a 1×3 vector, the resulting γ_kn is a three-channel vector. Once the correlation coefficient γ_kn is computed, the convolution of the reference-image three-channel pixels with the correlation coefficient is subtracted from the mean of the cost function at each point, making the matching costs of the left and right images more independent. Finally, the initial cost function is as shown in formula (24):
Since γ_kn is a three-channel vector, n ∈ {1, 2, 3}. After the final matching cost is computed, matching cost aggregation is carried out over the adaptive window, as shown in formula (25).
Aggregation is performed here only on the basis of the reference image; the image to be matched is not included. Experimental results show that this improves the real-time performance of the algorithm without reducing its accuracy. Finally, using formula (26), the WTA (Winner-Takes-All) rule selects the disparity that minimizes the cost function as the pixel value of the disparity map.
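The aggregation above has the shape of a correlation-corrected (guided-filter-like) operation between the reference image and the cost volume. Since formulas (17)–(25) are not reproduced in the text, the sketch below is an assumed reading: it substitutes a fixed box window for the per-pixel adaptive window, keeps only the diagonal of the variance matrix θ, and ends with the WTA selection of formula (26).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_mean(a, r):
    """Mean over a (2r+1) x (2r+1) box: a fixed-size stand-in for the
    per-pixel adaptive window of the patent."""
    return uniform_filter(a, size=2 * r + 1, mode='nearest')

def aggregate_slice(ref, cost, r=8, eps=1e-4):
    """Correlation-corrected aggregation of one cost slice.

    ref : H x W x 3 reference image (float); cost : H x W initial cost.
    Per channel: covariance with the cost (cf. formula (18)) divided by
    the channel variance (diagonal of theta, formulas (21)-(22)) gives a
    coefficient vector gamma (cf. formula (23)); the correlation term is
    subtracted from the cost (cf. formula (24)) and averaged over the
    window (cf. formula (25)). Only the reference image is used.
    """
    m_e = box_mean(cost, r)
    cov = np.stack([box_mean(ref[..., c] * cost, r)
                    - box_mean(ref[..., c], r) * m_e for c in range(3)], -1)
    var = np.stack([box_mean(ref[..., c] ** 2, r)
                    - box_mean(ref[..., c], r) ** 2 for c in range(3)], -1)
    gamma = cov / (var + eps)
    corr = (gamma * ref).sum(axis=-1)
    return box_mean(cost - (corr - box_mean(corr, r)), r)

def wta(cost_volume):
    """Winner-Takes-All over the disparity axis (formula (26))."""
    return np.argmin(cost_volume, axis=0)
```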
4. Disparity post-processing
(1) LRC left-right consistency check
In stereo matching algorithms, since the left and right images have disparity, occlusion is always unavoidable. Before the final disparity map is obtained, the LRC left-right consistency algorithm is first used for disparity post-processing.
The disparity d_l is computed with the left image as reference, and d_r with the right image as reference. A pixel is tested against condition (27):
|d_l − d_r| > δ    (27)
δ is a threshold, δ ∈ (0, 1); here δ is taken as 1. When the absolute difference of the left-right disparities exceeds δ, the point is regarded as occluded, and the smaller of the left and right disparity values is used to fill the occluded point.
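A sketch of the check and the filling (not part of the patent text; the warping convention x_right = x − d_l for rectified left-reference disparities is an assumption):

```python
import numpy as np

def lrc_fill(d_left, d_right, delta=1.0):
    """Left-right consistency check (formula (27)) with occlusion filling.

    A pixel is declared occluded when |d_l - d_r| > delta, where d_r is
    sampled at the matching column x - d_l of the right-based map; the
    occluded pixel is then filled with the smaller of the two disparities.
    """
    h, w = d_left.shape
    xs = np.arange(w)
    filled = d_left.astype(np.float64).copy()
    for y in range(h):
        xr = np.clip(xs - d_left[y].astype(int), 0, w - 1)
        d_r = d_right[y, xr]
        occ = np.abs(d_left[y] - d_r) > delta
        filled[y, occ] = np.minimum(d_left[y], d_r)[occ]
    return filled
```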
(2) Adaptive-weight median filtering
After cost aggregation, the obtained disparity map often contains salt-and-pepper noise, so median filtering of the image is necessary. Traditional filtering, however, ignores the correlation between pixels. Here different weights are assigned to the pixels in the window based on the differences in color and spatial distance between pixels, as shown in formula (28).
γ_c and γ_d are constants obtained experimentally; after extensive experiments γ_c = 0.1 and γ_d = 9 are taken. k1 and k2 are the differences between the central pixel and a surrounding pixel in color space and in metric space, obtained from formulas (29) and (30) respectively.
The window size is 19×19. After the weight of each pixel in the window is obtained, adaptive median filtering is performed. The specific process is as follows:
(1) Multiply each pixel gray value in the window, except the central point, by its weight to obtain a new gray value, computed with formula (31).
I'(q) = w·I(q)    (31)
(2) Sort the new values of all pixels in the window including the central point; take the two pixel values I'(q_1), I'(q_2) nearest the median and closest to the central point, and use their mean as the new sub-pixel-level gray value replacing that of the original central pixel, computed by formula (32).
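A sketch of one filtering step (not part of the patent text): formulas (28)–(30) are not reproduced above, so the exponential weight form exp(−k1/γ_c − k2/γ_d) is an assumption, and taking the two sorted values adjacent to the median simplifies the "nearest the median, closest to the center" rule of the text.

```python
import numpy as np

def adaptive_weighted_median(disp, img, y, x, r=9, gamma_c=0.1, gamma_d=9.0):
    """Sub-pixel adaptive-weight median at (y, x) over a 19 x 19 window.

    disp : H x W disparity map; img : H x W x 3 color image (float).
    k1 and k2 are the color- and space-distances of each window pixel
    from the center (formulas (29)-(30)); every non-center value is
    scaled by its weight (formula (31)); the mean of the two sorted
    values around the median becomes the new sub-pixel center value
    (formula (32)).
    """
    h, w = disp.shape
    y0, y1 = max(y - r, 0), min(y + r + 1, h)
    x0, x1 = max(x - r, 0), min(x + r + 1, w)

    k1 = np.linalg.norm(img[y0:y1, x0:x1] - img[y, x], axis=2)  # color distance
    ys, xs = np.mgrid[y0:y1, x0:x1]
    k2 = np.sqrt((ys - y) ** 2 + (xs - x) ** 2)                 # spatial distance
    wgt = np.exp(-k1 / gamma_c - k2 / gamma_d)                  # assumed form of (28)
    wgt[y - y0, x - x0] = 1.0                                   # center keeps its value

    vals = np.sort((wgt * disp[y0:y1, x0:x1]).ravel())
    m = vals.size // 2
    return 0.5 * (vals[m - 1] + vals[m + 1])                    # formula (32)
```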
5. Algorithm description
The detailed implementation of the algorithm is described in Table 1.
6. Experimental results
The experimental environment is an Intel Core i7-6700 3.5 GHz CPU with 12 GB of memory, on the Matlab2016a platform. To verify the effectiveness of the proposed algorithm, its disparity maps are first compared with those of the classical adaptive weight algorithm on the Middlebury platform: in Fig. 6, column 1 shows the four test images Tsukuba, Teddy, Cones and Venus, column 2 their standard (ground-truth) disparity maps, column 3 the results of the classical ASW algorithm, and column 4 the disparity maps obtained by the proposed algorithm.
In the experiments, the minimum and maximum adaptive-window arm lengths are L_min = 5 and L_max = 11, the minimum gradient and color thresholds are t1 = 2 and t2 = 7, and the Kalman coefficient α = 0.11. The adaptive filter weight coefficients are γ_c = 0.1 and γ_d = 9, with a 19×19 window. The resulting disparity maps are evaluated on the Middlebury platform under three test indexes: nonocc (mismatch rate in non-occluded regions), all (mismatch rate over all regions) and disc (mismatch rate in depth-discontinuous regions), against the traditional adaptive stereo matching algorithm and the gradient-based ASW improvement of De-Maeztu. The comparison is shown in Table 2.
Table 2. Matching algorithm assessment (unit: %)
As can be seen from Table 2, compared with the classical adaptive weight algorithm and its improvements, the stereo matching algorithm based on multi-matching-element fusion of the present invention has a clear advantage on the Venus, Teddy and Cones images, whether over all regions, in non-occluded regions, or in depth-discontinuous regions. On the darker Tsukuba image, the non-occluded regions also improve to some extent. Compared with the GradAdaptWgt algorithm based on a single gradient element, the mismatch rate over all regions is lower on Tsukuba, Venus and Teddy, while on the Cones image it is comparable to the original GradAdaptWgt algorithm. The average mismatch rates on the Middlebury platform are compared in Table 3, where considerable gains are achieved.
Table 3. Average mismatch rate comparison
To examine the robustness of the algorithm, the Kalman coefficient ratio between the color and gradient information is compared. The Cones image, which approximates a natural scene, is selected, and the nonocc, all and disc mismatch rates are compared under different proportional coefficients, with α ∈ (0.11:0.02:0.29).
From these results, to meet the robustness requirement of the stereo matching algorithm, the value of α is set to 0.23 in practical applications. Moreover, relative to the classical adaptive support weight algorithm, the complicated weight computation is abandoned here, greatly improving the real-time performance of the algorithm. The time-complexity comparison with the classical adaptive support weight algorithm is shown in Table 4.
Table 4. Time-complexity comparison with the classical ASW algorithm (unit: ms)
The experimental results show that the algorithm of the invention greatly reduces time complexity.
Advantages of the present invention:
By fusing the color matching element and the gradient matching element and introducing the correlation between the cost function and the reference image, the present invention proposes a novel stereo matching method based on multiple matching elements. The algorithm greatly improves the accuracy of scene information under driverless conditions, helps obtain more accurate depth maps, and greatly reduces the time complexity, providing a theoretical foundation for the practical application of driverless systems and further improving the navigation accuracy of robots or drones.
Description of the drawings
Fig. 1 shows the left and right images acquired by the ZED binocular camera after calibration and image setting.
Fig. 2 is the flow chart of the improved ASW algorithm.
Fig. 3 shows the lateral expansion step of the adaptive window.
Fig. 4 shows the final window-scale effect of the adaptive window.
Fig. 5 shows the non-occluded-region mismatch rate in the Cones image mismatch-rate comparison.
Fig. 6 shows the whole-region and depth-discontinuous-region mismatch-rate comparison for the Cones image.
Specific embodiments
A binocular vision stereo matching method based on multi-matching-element fusion comprises a computer, two cameras, and a stereo matching procedure fusing multiple matching elements, and is characterized in that:
1. The cameras are ZED binocular cameras produced by Stereolabs.
The video streams acquired by the ZED binocular camera come in four resolution classes; the specific parameters are shown in Table 1 below:
Table 1: ZED output video stream modes
2. Image acquisition:
The computer is prepared by installing the ZED SDK and CUDA and is connected through a USB 3.0 interface. In MATLAB, the ZED binocular camera is linked with the webcam function, and images are acquired with the snapshot function;
3. Camera calibration:
Camera calibration aims to obtain accurate intrinsic and extrinsic camera parameters. The intrinsic parameters mainly include the focal lengths of the left and right lenses and the baseline distance; the extrinsic parameters mainly include the rotation matrices of the two cameras relative to the world coordinate system and the relative translation matrix between the left and right cameras. The default intrinsic and extrinsic parameters of the cameras are taken from the official manual;
4. Image parameter setting:
Epipolar rectification is performed with the calibration parameters so that the acquired left and right images satisfy the epipolar constraint, and the parameters are reset through the ZED Explorer.exe plug-in embedded in the ZED SDK;
5. Stereo matching:
Stereo matching is the core of the binocular vision system. Its purpose is to match the imaging points of the acquired left and right images, obtain disparity values from the matched points, and obtain the depth information of the scene.
The ZED binocular camera can be mounted on a robot or driverless machine. The real scene captured by the binocular camera is processed by the stereo matching with multi-matching-element fusion to reproduce the real scene; a computer installed on the robot or driverless machine then issues navigation instructions to the control and drive system.
The stereo matching method fusing multiple matching elements comprises the following processes:
The improved ASW matching process of the present invention is used:
The improved ASW algorithm flow proposed by the present invention mainly comprises four stages: left and right reference image read-in, left and right initial matching cost, left and right cost function aggregation, and disparity post-processing. The disparity post-processing mainly comprises the LRC left-right consistency check and a filtering operation. Specifically:
1. Initial matching cost calculation:
The ASW algorithm uses the grayscale information of the image as the element for matching cost calculation. The present invention sets truncation thresholds on the gradient element and on the mean of the R, G, B three-channel color element, fuses the R, G, B color and gradient information of a pixel following the idea of Kalman filtering, and adaptively adjusts the fusion through the control coefficient α. The specific process is as follows:
(1) Set the color and gradient thresholds t1, t2 and compute the initial costs e_g, e_c, as shown in formulas (10) and (11).
(2) Adaptively adjust the proportional coefficient and compute the final initial matching cost, as shown in formula (12).
2. Improved adaptive window expansion algorithm: The traditional ASW matching algorithm uses a fixed window, so its robustness is poor in regions with complex texture. An adaptive window is built here from the color and spatial distance between pixels. Given a central pixel p(x, y) to be matched, its neighboring pixels in the x and y directions are p(x−1, y), p(x+1, y) and p(x, y−1), p(x, y+1). Unlike the traditional adaptive window extension, which expands pixels with the grayscale of the central pixel, the R, G, B values of the central point are used here as the expansion element: the window is expanded only when the neighboring pixel and the central pixel satisfy condition (13) in all three channels simultaneously.
I_{r,g,b}(x, y) − I_{r,g,b}(x−1, y) < t    (13)
t is a preset color threshold, t ∈ (0, 1). When discontinuous texture in the image causes pixel jumps within the same region, it is difficult for all three channels of the neighboring pixel to satisfy formula (13) simultaneously. Based on this property, the present invention improves the traditional fixed window. While a neighboring pixel satisfies formula (13) the window expands adaptively; but if the scene contains repeated texture, the window may grow too large and the cost aggregation becomes too expensive, which conflicts with the real-time requirement of the algorithm. The arm length of the adaptive window is therefore truncated according to the geometric property of the image: the window arm is cut off when formula (14) is satisfied.
where p(x), p(y) are the horizontal and vertical coordinates of the central pixel and q(x), q(y) are the coordinates of the neighboring pixel. From experiments on the four test images Tsukuba, Teddy, Cones and Venus of the Middlebury platform, the minimum arm length is set to L_min = 5 and the threshold to L_max = 11.
The adaptive window is shown in Fig. 3. First, the central pixel I(x, y) is expanded laterally; if the final window arm is shorter than the minimum arm length, the minimum arm length L_min replaces the raw length. During window expansion, the window is cut off when the spatial distance between the neighboring pixel and the central pixel exceeds the truncation value. The final result, shown in Fig. 4, consists of four arms of different lengths L_u, L_d, L_l, L_r extending up, down, left and right from the central point. The row and column sizes of the adaptive window are given by formulas (15) and (16):
Rows = L_u + L_d    (15)
Cols = L_l + L_r    (16)
3. Matching cost aggregation algorithm:
After the initial matching cost is computed, cost function aggregation is performed, unlike the complicated weight computation of the classical algorithm. Following the correlation between the cost function of document [11] and the reference image and the correlation of the reference image with itself, the variance within the reference image's adaptive window and the covariance of the reference image with the cost function are computed to obtain a correlation function; the correlation term is then subtracted from the initial matching cost function before cost aggregation, which markedly raises the accuracy and real-time performance of the algorithm. The specific process is as follows:
First, compute the mean of the convolution of the R, G, B three-channel pixel values of the pixels to be matched in the matching window with the matching cost function, as shown in formula (17).
where p is the pixel coordinate of the window center, q ranges over each pixel coordinate in the window including the center, N_p is the adaptive window derived in section 2.2, and n is the total number of pixels in the window. After the mean of the convolution for each channel is computed, its covariance function is computed as in formula (18):
In the above formula, m_c and m_e are the means of the three-channel elements and of the matching cost function within the adaptive window, given by formulas (19) and (20):
Then the variance matrix θ of the three-channel elements within the reference image's adaptive window is computed, as detailed in formula (21).
where each element of the matrix θ is computed as in formula (22):
From formulas (17)–(20) the coefficient matrix is obtained as in formula (23):
Since v_ce (c ∈ {r, g, b}) is a 1×3 vector, the resulting γ_kn is a three-channel vector. Once the correlation coefficient γ_kn is computed, the convolution of the reference-image three-channel pixels with the correlation coefficient is subtracted from the mean of the cost function at each point, making the matching costs of the left and right images more independent. Finally, the initial cost function is as shown in formula (24):
Since γ_kn is a three-channel vector, n ∈ {1, 2, 3}. After the final matching cost is computed, matching cost aggregation is carried out over the adaptive window, as shown in formula (25).
Aggregation is performed here only on the basis of the reference image; the image to be matched is not included. Experimental results show that this improves the real-time performance of the algorithm without reducing its accuracy. Finally, using formula (26), the WTA (Winner-Takes-All) rule selects the disparity that minimizes the cost function as the pixel value of the disparity map.
4. Disparity post-processing
(1) LRC left-right consistency check
In stereo matching algorithms, since the left and right images have disparity, occlusion is always unavoidable. Before the final disparity map is obtained, the LRC left-right consistency algorithm is first used for disparity post-processing.
The disparity d_l is computed with the left image as reference, and d_r with the right image as reference. A pixel is tested against condition (27):
|d_l − d_r| > δ    (27)
δ is a threshold, δ ∈ (0, 1); here δ is taken as 1. When the absolute difference of the left-right disparities exceeds δ, the point is regarded as occluded, and the smaller of the left and right disparity values is used to fill the occluded point.
(2) Adaptive-weight median filtering
After cost aggregation, the obtained disparity map often contains salt-and-pepper noise, so median filtering of the image is necessary. Traditional filtering, however, ignores the correlation between pixels. Here different weights are assigned to the pixels in the window based on the differences in color and spatial distance between pixels, as shown in formula (28).
γ_c and γ_d are constants obtained experimentally; after extensive experiments γ_c = 0.1 and γ_d = 9 are taken. k1 and k2 are the differences between the central pixel and a surrounding pixel in color space and in metric space, obtained from formulas (29) and (30) respectively.
The window size is 19×19. After the weight of each pixel in the window is obtained, adaptive median filtering is performed. The specific process is as follows:
(1) Multiply each pixel gray value in the window, except the central point, by its weight to obtain a new gray value, computed with formula (31).
I'(q) = w·I(q)    (31)
(2) Sort the new values of all pixels in the window including the central point; take the two pixel values I'(q_1), I'(q_2) nearest the median and closest to the central point, and use their mean as the new sub-pixel-level gray value replacing that of the original central pixel, computed by formula (32).

Claims (1)

1. A binocular vision stereo matching method based on multi-matching-element fusion, comprising a computer, two cameras, and a stereo matching procedure fusing multiple matching elements, characterized in that:
the cameras are ZED binocular cameras, and the ZED binocular cameras are set up as follows:
(1) Image acquisition:
the computer is prepared by installing the ZED SDK and CUDA and is connected through a USB 3.0 interface; in MATLAB, the ZED binocular camera is linked with the webcam function, and images are acquired with the snapshot function;
(2) Camera calibration:
camera calibration aims to obtain accurate intrinsic and extrinsic camera parameters; the intrinsic parameters mainly include the focal lengths of the left and right lenses and the baseline distance; the extrinsic parameters mainly include the rotation matrices of the two cameras relative to the world coordinate system and the relative translation matrix between the left and right cameras; the present invention takes the default intrinsic and extrinsic parameters of the cameras from the official manual;
(3) Image parameter setting:
epipolar rectification is performed with the calibration parameters so that the acquired left and right images satisfy the epipolar constraint; the parameters are reset through the ZED Explorer.exe plug-in embedded in the ZED SDK;
(4) Stereo matching:
stereo matching is the core of the binocular vision system; its purpose is to match the imaging points of the acquired left and right images, obtain disparity values from the matched points, and obtain the depth information of the scene;
the ZED binocular camera can be mounted on a robot or driverless machine; the real scene captured by the binocular camera is processed by the stereo matching with multi-matching-element fusion to reproduce the real scene, and a computer arranged on the robot or driverless machine issues navigation instructions to the control and drive system;
the stereo matching method fusing multiple matching elements comprises the following processes:
the present invention uses the improved ASW matching process, mainly comprising four stages: left and right reference image read-in, left and right initial matching cost, left and right cost function aggregation, and disparity post-processing; the disparity post-processing mainly comprises the LRC left-right consistency check and a filtering operation, wherein:
1) Initial matching cost calculation:
the ASW algorithm uses the grayscale information of the image as the element for matching cost calculation; the present invention sets truncation thresholds on the gradient element and on the mean of the R, G, B three-channel color element, fuses the R, G, B color and gradient information of a pixel following the idea of Kalman filtering, and adaptively adjusts the fusion through the control coefficient α; the specific process is as follows:
(1) set the color and gradient thresholds t1, t2 and compute the initial costs e_g, e_c, as shown in formulas (10) and (11);
(2) adaptively adjust the proportional coefficient and compute the final initial matching cost, as shown in formula (12);
2) Improved adaptive window expansion algorithm:
the present invention builds an adaptive window from the color and spatial distance between pixels; given a central pixel p(x, y) to be matched, its neighboring pixels in the x and y directions are p(x−1, y), p(x+1, y) and p(x, y−1), p(x, y+1); unlike the traditional adaptive window extension, which expands pixels with the grayscale of the central pixel, the present invention uses the R, G, B values of the central point as the expansion element: the window is expanded only when the neighboring pixel and the central pixel satisfy condition (13) in all three channels simultaneously;
I_{r,g,b}(x, y) − I_{r,g,b}(x−1, y) < t    (13)
t is a preset color threshold, t ∈ (0, 1); when discontinuous texture in the image causes pixel jumps within the same region, it is difficult for all three channels of the neighboring pixel to satisfy formula (13) simultaneously; based on this property, the present invention improves the traditional fixed window; while a neighboring pixel satisfies formula (13) the window expands adaptively, but if the scene contains repeated texture the window may grow too large and the cost aggregation becomes too expensive, which conflicts with the real-time requirement of the algorithm; the present invention therefore truncates the arm length of the adaptive window according to the geometric property of the image: the window arm is cut off when formula (14) is satisfied;
where p(x), p(y) are the horizontal and vertical coordinates of the central pixel and q(x), q(y) are the coordinates of the neighboring pixel; from experiments on the four test images Tsukuba, Teddy, Cones and Venus of the Middlebury platform, the minimum arm length is set to L_min = 5 and the threshold to L_max = 11;
the adaptive window first expands the central pixel I(x, y) laterally; if the final window arm is shorter than the minimum arm length, the minimum arm length L_min replaces the raw length; during window expansion, the window is cut off when the spatial distance between the neighboring pixel and the central pixel exceeds the truncation value; with the central point as reference, four arms of different lengths L_u, L_d, L_l, L_r extend up, down, left and right; the present invention gives the row and column sizes of the adaptive window by formulas (15) and (16):
Rows = L_u + L_d    (15)
Cols = L_l + L_r    (16)
3) Matching cost aggregation algorithm:
compute the mean of the convolution of the R, G, B three-channel pixel values of the pixels to be matched in the matching window with the matching cost function, as shown in formula (17);
where p is the pixel coordinate of the window center, q ranges over each pixel coordinate in the window including the center, N_p is the adaptive window derived in section 2.2, and n is the total number of pixels in the window; after the mean of the convolution for each channel is computed, compute its covariance function, as shown in formula (18):
in the above formula, m_c and m_e are the means of the three-channel elements and of the matching cost function within the adaptive window, given by formulas (19) and (20):
then compute the variance matrix θ of the three-channel elements within the reference image's adaptive window, as detailed in formula (21);
where each element of the matrix θ is computed as in formula (22):
from formulas (17)–(20) the coefficient matrix is obtained as in formula (23):
since v_ce (c ∈ {r, g, b}) is a 1×3 vector, the resulting γ_kn is a three-channel vector; once the correlation coefficient γ_kn is computed, the convolution of the reference-image three-channel pixels with the correlation coefficient is subtracted from the mean of the cost function at each point, making the matching costs of the left and right images more independent; finally, the initial cost function is as shown in formula (24):
since γ_kn is a three-channel vector, n ∈ {1, 2, 3}; after the final matching cost is computed, matching cost aggregation is carried out over the adaptive window, as shown in formula (25);
the present invention aggregates only on the basis of the reference image; the image to be matched is not included; experimental results show that this improves the real-time performance of the algorithm without reducing its accuracy; finally, using formula (26), the WTA (Winner-Takes-All) rule selects the disparity that minimizes the cost function as the pixel value of the disparity map;
4) Disparity post-processing
(1) LRC left-right consistency check
in stereo matching algorithms, since the left and right images have disparity, occlusion is always unavoidable; before the final disparity map is obtained, the LRC left-right consistency algorithm is first used for disparity post-processing;
the disparity d_l is computed with the left image as reference, and d_r with the right image as reference; a pixel is tested against condition (27):
|d_l − d_r| > δ    (27)
δ is a threshold, δ ∈ (0, 1); here δ is taken as 1; when the absolute difference of the left-right disparities exceeds δ, the point is regarded as occluded, and the smaller of the left and right disparity values is used to fill the occluded point;
(2) Adaptive-weight median filtering
after cost aggregation, the obtained disparity map often contains salt-and-pepper noise, so median filtering of the image is necessary; traditional filtering, however, ignores the correlation between pixels; the present invention assigns different weights to the pixels in the window based on the differences in color and spatial distance between pixels, as shown in formula (28);
γ_c and γ_d are constants obtained experimentally; after extensive experiments γ_c = 0.1 and γ_d = 9 are taken; k1 and k2 are the differences between the central pixel and a surrounding pixel in color space and in metric space, obtained from formulas (29) and (30) respectively;
the window size is 19×19; after the weight of each pixel in the window is obtained, adaptive median filtering is performed; the specific process is as follows:
(1) multiply each pixel gray value in the window, except the central point, by its weight to obtain a new gray value, computed with formula (31);
I'(q) = w·I(q)    (31)
(2) sort the new values of all pixels in the window including the central point; take the two pixel values I'(q_1), I'(q_2) nearest the median and closest to the central point, and use their mean as the new sub-pixel-level gray value replacing that of the original central pixel, computed by formula (32).
CN201810249294.XA 2018-03-22 2018-03-22 Binocular vision stereo matching method based on multi-matching element fusion Active CN108682026B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810249294.XA CN108682026B (en) 2018-03-22 2018-03-22 Binocular vision stereo matching method based on multi-matching element fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810249294.XA CN108682026B (en) 2018-03-22 2018-03-22 Binocular vision stereo matching method based on multi-matching element fusion

Publications (2)

Publication Number Publication Date
CN108682026A (en) 2018-10-19
CN108682026B (en) 2021-08-06

Family

ID=63800451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810249294.XA Active CN108682026B (en) 2018-03-22 2018-03-22 Binocular vision stereo matching method based on multi-matching element fusion

Country Status (1)

Country Link
CN (1) CN108682026B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102611904A (en) * 2012-02-15 2012-07-25 山东大学 Stereo matching method based on image partitioning in three-dimensional television system
CN103985128A (en) * 2014-05-23 2014-08-13 南京理工大学 Three-dimensional matching method based on color intercorrelation and self-adaptive supporting weight
CN103996202A (en) * 2014-06-11 2014-08-20 北京航空航天大学 Stereo matching method based on hybrid matching cost and adaptive window
CN103996201A (en) * 2014-06-11 2014-08-20 北京航空航天大学 Stereo matching method based on improved gradient and adaptive window
US20160019437A1 (en) * 2014-07-18 2016-01-21 Samsung Electronics Co., Ltd. Stereo matching apparatus and method through learning of unary confidence and pairwise confidence
US20160173852A1 (en) * 2014-12-16 2016-06-16 Kyungpook National University Industry-Academic Cooperation Foundation Disparity computation method through stereo matching based on census transform with adaptive support weight and system thereof
CN104867135A (en) * 2015-05-04 2015-08-26 中国科学院上海微系统与信息技术研究所 High-precision stereo matching method based on guiding image guidance
CN106355570A (en) * 2016-10-21 2017-01-25 昆明理工大学 Binocular stereoscopic vision matching method combining depth characteristics
CN107301664A (en) * 2017-05-25 2017-10-27 天津大学 Improvement sectional perspective matching process based on similarity measure function
CN107301642A (en) * 2017-06-01 2017-10-27 中国人民解放军国防科学技术大学 A kind of full-automatic prospect background segregation method based on binocular vision
CN107392950A (en) * 2017-07-28 2017-11-24 哈尔滨理工大学 A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YINGYUN YANG et al.: "A New Stereo Matching Algorithm Based on Adaptive Window", 2012 International Conference on Systems and Informatics (ICSAI 2012) *
YONG-JUN CHANG et al.: "Region Based Stereo Matching Method with Gradient and Distance Information", 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA) *
LU Di et al.: "Local stereo matching algorithm combining multiple similarity measures", Robot *
LIN Sen et al.: "Research status and prospects of binocular vision stereo matching technology", Science Technology and Engineering *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544611B (en) * 2018-11-06 2021-05-14 Shenzhen Aipeike Technology Co., Ltd. Binocular vision stereo matching method and system based on bit features
CN109544611A (en) * 2018-11-06 2019-03-29 Shenzhen Aipeike Technology Co., Ltd. Binocular vision stereo matching method and system based on bit features
CN109741385A (en) * 2018-12-24 2019-05-10 Zhejiang Dahua Technology Co., Ltd. Image processing system, method, apparatus, electronic device and storage medium
CN109978928A (en) * 2019-03-04 2019-07-05 Peking University Shenzhen Graduate School Binocular vision stereo matching method and system based on weighted voting
CN109978934A (en) * 2019-03-04 2019-07-05 Peking University Shenzhen Graduate School Binocular vision stereo matching method and system based on matching cost weighting
CN109978934B (en) * 2019-03-04 2023-01-10 Peking University Shenzhen Graduate School Binocular vision stereo matching method and system based on matching cost weighting
WO2020177061A1 (en) * 2019-03-04 2020-09-10 Peking University Shenzhen Graduate School Binocular stereo vision matching method and system based on extremum verification
WO2020177060A1 (en) * 2019-03-04 2020-09-10 Peking University Shenzhen Graduate School Binocular vision stereo matching method based on extreme value checking and weighted voting
CN109978928B (en) * 2019-03-04 2022-11-04 Peking University Shenzhen Graduate School Binocular vision stereo matching method and system based on weighted voting
CN110148168A (en) * 2019-05-23 2019-08-20 Nanjing University Trinocular camera depth image processing method based on large and small dual baselines
CN112802114A (en) * 2019-11-13 2021-05-14 Zhejiang Sunny Intelligent Optics Technology Co., Ltd. Multi-vision-sensor fusion device and method, and electronic device
CN111553296A (en) * 2020-04-30 2020-08-18 Sun Yat-sen University Binary neural network stereo vision matching method based on FPGA
CN111754588A (en) * 2020-06-30 2020-10-09 Jiangnan University Variance-based binocular vision matching method with adaptive window size
CN111754588B (en) * 2020-06-30 2024-03-29 Jiangnan University Variance-based binocular vision matching method with adaptive window size
CN112184833B (en) * 2020-09-29 2023-09-15 Nanjing Yunzhi Technology Co., Ltd. Hardware implementation system and method of a binocular stereo matching algorithm
CN112184833A (en) * 2020-09-29 2021-01-05 Nanjing Yunzhi Technology Co., Ltd. Hardware implementation system and method of a binocular stereo matching algorithm
CN112200852A (en) * 2020-10-09 2021-01-08 Xi'an Jiaotong University Space-time hybrid modulation stereo matching method and system
CN112200852B (en) * 2020-10-09 2022-05-20 Xi'an Jiaotong University Space-time hybrid modulation stereo matching method and system
CN112308897A (en) * 2020-10-30 2021-02-02 Jiangsu University Stereo matching method based on neighborhood information constraint and adaptive window
CN112348871B (en) * 2020-11-16 2023-02-10 Chang'an University Local stereo matching method
CN112348871A (en) * 2020-11-16 2021-02-09 Chang'an University Local stereo matching method
CN113052862A (en) * 2021-04-12 2021-06-29 Beijing Institute of Mechanical Equipment Stereo matching method, apparatus and device for outdoor scenes based on multi-stage optimization
CN113516699A (en) * 2021-05-18 2021-10-19 Harbin University of Science and Technology Stereo matching system based on superpixel segmentation
CN113933315A (en) * 2021-10-13 2022-01-14 Shenzhen Zhongwei Intelligent Co., Ltd. Patch circuit board collinearity detection method and system
CN113933315B (en) * 2021-10-13 2024-04-05 Shenzhen Zhongwei Intelligent Co., Ltd. Patch circuit board collinearity detection method and system
WO2024032233A1 (en) * 2023-04-27 2024-02-15 North China University of Science and Technology Stereophotogrammetric method based on binocular vision
CN116258759A (en) * 2023-05-15 2023-06-13 Beijing Aixin Technology Co., Ltd. Stereo matching method, apparatus and device
CN116258759B (en) * 2023-05-15 2023-09-22 Beijing Aixin Technology Co., Ltd. Stereo matching method, apparatus and device

Also Published As

Publication number Publication date
CN108682026B (en) 2021-08-06

Similar Documents

Publication Title
CN108682026A (en) Binocular vision stereo matching method based on fusion of multiple matching primitives
CN110378838B (en) Variable-view-angle image generation method and device, storage medium and electronic device
CN104685513B (en) Feature-based high-resolution estimation from low-resolution images captured using an array source
CN102273208B (en) Image processing device and image processing method
CN103927741B (en) SAR image synthesis method for enhancing target characteristics
CN104160690B (en) Display method for region extraction results and image processing apparatus
CN104299215B (en) Image stitching method based on feature point calibration and matching
CN109064409B (en) Visual image stitching system and method for mobile robots
CN104134200B (en) Mobile scene image stitching method based on improved weighted fusion
CN104036488B (en) Binocular vision-based human body posture and action research method
CN104599258B (en) Image stitching method based on anisotropic feature descriptors
CN103856727A (en) Multichannel real-time video stitching processing system
CN110381268B (en) Method, device, storage medium and electronic device for generating video
CN109314753A (en) Generating intermediate views using optical flow
CN101394573B (en) Panorama generation method and system based on feature matching
Aires et al. Optical flow using color information: preliminary results
CN107077743A (en) System and method for dynamic calibration of array cameras
CN106791623A (en) Panoramic video stitching method and device
CN110855903A (en) Multi-channel video real-time stitching method
CN111127318A (en) Panoramic image stitching method for airport environments
CN109360235A (en) Interactive depth estimation method based on light field data
CN101751672A (en) Image processing system
CN110363116A (en) Irregular face correction method, system and medium based on GLD-GAN
CN105046701B (en) Multi-scale salient object detection method based on patterned lines
CN109961399A (en) Optimal stitching line search method based on image distance transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210708

Address after: 230031 102-F, Hefei R&D Building, Chinese Academy of Sciences, northwest corner of the intersection of Xiyou Road and Shilian South Road, High-tech Zone, Hefei City, Anhui Province

Applicant after: Hefei Jinren Technology Co., Ltd.

Address before: 121000 Shiying street, Guta District, Jinzhou City, Liaoning Province

Applicant before: Liaoning University of Technology

TA01 Transfer of patent application right

Effective date of registration: 20210722

Address after: 230000, 12/F, Building E, Intelligent Technology Park, Economic Development Zone, Hefei, Anhui Province

Applicant after: Jiang Dabai

Address before: 230031 102-F, Hefei R&D Building, Chinese Academy of Sciences, northwest corner of the intersection of Xiyou Road and Shilian South Road, High-tech Zone, Hefei City, Anhui Province

Applicant before: Hefei Jinren Technology Co., Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221205

Address after: 230000, 12/F, Building E, Intelligent Technology Park, Economic Development Zone, Hefei, Anhui Province

Patentee after: China Applied Technology Co., Ltd.

Address before: 230000, 12/F, Building E, Intelligent Technology Park, Economic Development Zone, Hefei, Anhui Province

Patentee before: Jiang Dabai