CN102136136A - Luminosity insensitivity stereo matching method based on self-adapting Census conversion - Google Patents

Luminosity insensitivity stereo matching method based on self-adapting Census conversion Download PDF

Info

Publication number
CN102136136A
CN102136136A (application CN201110065196A)
Authority
CN
China
Prior art keywords
disparity
adaptive
image
skeleton
census
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110065196
Other languages
Chinese (zh)
Other versions
CN102136136B (en)
Inventor
徐贵力
倪炜基
周龙
汪凌燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201110065196A priority Critical patent/CN102136136B/en
Publication of CN102136136A publication Critical patent/CN102136136A/en
Application granted granted Critical
Publication of CN102136136B publication Critical patent/CN102136136B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract



The invention discloses a photometric-insensitive stereo matching method based on an adaptive Census transform. First, an adaptive region based on a cross skeleton is determined from the structure and color information of the image, yielding a Census transform window of arbitrary shape and size. Second, using the Hamming distance after the Census transform as the matching cost, a local optimization method computes the initial disparity. Finally, a two-step refinement method based on the disparity statistical histogram and a left-right consistency check is proposed, in which the cross-skeleton adaptive region is organically integrated into the refinement process to obtain a high-precision disparity map. The invention obtains high-precision disparity maps for stereo image pairs that differ in illumination intensity and exposure time, balancing matching accuracy against robustness to amplitude distortion, and is well suited to the application scenario of UAV visual navigation.


Description

Photometric-insensitive stereo matching method based on an adaptive Census transform
Technical field
The present invention relates to a stereo matching method for stereo vision systems and belongs to the field of computer vision. It is used to obtain high-precision dense disparity information from left and right views under conditions of illumination and exposure differences, thereby providing a reliable basis for recovering three-dimensional depth information by stereo vision.
Background technology
Stereo matching is a vital task in computer vision: it obtains a dense disparity map by matching binocular or multi-view images and thereby perceives the three-dimensional depth information of the scene. Many scholars at home and abroad have studied this field in depth. Current dense stereo matching algorithms can be analyzed in four steps: matching cost computation, matching cost aggregation, disparity computation/optimization, and disparity refinement. Among these, matching cost computation is the foundation of stereo matching, and its importance is self-evident: only a suitable matching cost yields a high-precision disparity map. Common matching costs fall into two classes:
The first class is based on the brightness/color-constancy assumption, i.e. that a feature in the scene has identical brightness/color in different images; the absolute intensity difference, the squared intensity difference, the truncated absolute intensity difference, and so on are all based on this assumption.
However, factors such as global brightness changes, local brightness changes, and noise between images cause the brightness/color of corresponding features to differ; this phenomenon is referred to as amplitude distortion. Although matching algorithms based on brightness/color constancy obtain high-precision disparity maps for images that satisfy the assumption, they are quite sensitive to amplitude distortion.
The second class achieves insensitivity to amplitude distortion by relaxing or abandoning the brightness/color-constancy assumption, e.g. normalized cross-correlation, the Rank and Census non-parametric transforms, mutual information, and the Laplacian of Gaussian with median filtering. Studies show that the Census non-parametric transform is robust under various amplitude distortions. However, the traditional Census transform with a fixed window faces a window-size selection problem: if the transform window is too small, the signal-to-noise ratio is too low, the matching cost has poor discriminative power, and low-texture regions are easily mismatched; if the window is too large, too many outliers are introduced, degrading matching accuracy.
In addition, research also shows that optimizing the Census-based matching cost with a global method incurs high computational cost, while optimizing it with a local method based on a fixed support window makes it difficult to obtain a high-precision disparity image, leaving an obvious foreground-fattening effect in disparity-discontinuous regions.
Summary of the invention
The technical problem to be solved by the invention is to provide a stereo matching method that obtains a high-precision disparity image when the left and right views differ in illumination and exposure.
To solve the above technical problem, the invention adopts the following technical scheme:
A photometric-insensitive stereo matching method based on an adaptive Census transform, characterized by the following steps:
(1) determine an adaptive region based on a cross skeleton and thereby obtain a Census transform window of arbitrary shape and size;
(2) use the Hamming distance after the Census transform as the matching cost and compute the initial disparity with a local optimization method (Winner-Take-All);
(3) after obtaining the initial disparity, further improve disparity accuracy with a two-step disparity refinement method.
The aforementioned photometric-insensitive stereo matching method based on an adaptive Census transform is characterized in that the matching cost computation and the matching cost aggregation of the stereo matching process are organically merged.
The aforementioned method is characterized in that the Census transform window of arbitrary shape and size is obtained as follows:
(1) from the structure and color information of the image, construct a cross-skeleton-based adaptive region for each pixel in the reference image and in the target image;
(2) combine the adaptive region of the point to be matched in the reference image and the adaptive region of the corresponding point in the target image by a logical AND operation to obtain the Census transform window.
The aforementioned method is characterized in that the specific algorithm of the cross-skeleton-based adaptive region is:
(1) For any pixel p in the image, a cross-shaped skeleton is determined. The skeleton comprises a horizontal and a vertical direction, denoted H(p) and V(p) respectively, and the lengths of its four arms are denoted (h_p^-, h_p^+, v_p^-, v_p^+). The cross-skeleton-based adaptive region of pixel p can then be expressed as:

U(p) = ∪_{q ∈ V(p)} H(q)    (1)

(2) Based on the assumption that similar colors in an image correspond to the same structure, the formula

r* = max_{r ∈ [1, L]} ( r · ∏_{i ∈ [1, r]} δ(p, p_i) )    (2)

determines the lengths (h_p^-, h_p^+, v_p^-, v_p^+) of the four arms of the cross skeleton of the center pixel p. In formula (2), δ is an indicator function that measures the degree of color difference between pixels, p_i is a pixel along one cross direction of p, the coordinates of p in the image are (x_p, y_p), r* is the arm length in that cross direction, and L is the search range along that direction. When p_i lies horizontally to the left of p, its coordinates can be expressed as (x_p − i, y_p), L is the search range along the horizontal left direction of pixel p, and the result r* is h_p^-. The lengths h_p^+, v_p^- and v_p^+ of the other three directions are determined in the same way.
The aforementioned photometric-insensitive stereo matching method based on an adaptive Census transform is characterized in that the two-step refinement based on the disparity statistical histogram and the left-right consistency check comprises the following steps:
(1) based on the assumption that disparity varies smoothly within regions of similar color, a first, statistical-histogram-based refinement of the initial disparity is performed within the adaptive region, which reflects image structure and color;
(2) to exclude unreliable disparities, a left-right consistency check is performed on the left and right disparity maps refined in the first step and a disparity confidence matrix is built; the histogram-based optimization within the adaptive region is then applied to the reliable disparities only, excluding the influence of occluded regions and unreliable disparities on the statistics.
The aforementioned method is characterized in that the disparity-statistical-histogram optimization is implemented as follows: for any pixel to be matched, the occurrence frequency of each disparity within its cross-skeleton-based adaptive region is counted, and the disparity value with the highest occurrence probability is selected as the optimization result.
This completes the photometric-insensitive stereo matching method based on the adaptive Census transform.
The beneficial effect of the invention is that it not only retains the robustness of the Census transform to amplitude distortion but also, without resorting to global optimization, solves the problem that existing stereo matching methods struggle to obtain high-precision disparity maps when the left and right views differ in exposure and illumination, and it better adapts to real stereo-vision-based navigation scenarios.
Description of drawings
Fig. 1 is the algorithm flow chart of the invention;
Fig. 2 is a schematic diagram of the cross-skeleton-based adaptive region;
Fig. 3 is the flow chart of the second disparity refinement step.
Embodiment
The invention is further described below with reference to the accompanying drawings and an example.
As shown in Fig. 1, the photometric-insensitive stereo matching method based on the adaptive Census transform comprises the following steps:
Step 1: determine a cross-skeleton-based adaptive region and thereby obtain a Census transform window of arbitrary shape and size.
1. As shown in Fig. 2, first determine the cross-skeleton-based adaptive region of each pixel in the reference image and in the target image. For any pixel p in the image, a cross-shaped skeleton is determined; it comprises a horizontal and a vertical direction, denoted H(p) and V(p) respectively, and the cross-skeleton-based adaptive region of pixel p can be expressed as:

U(p) = ∪_{q ∈ V(p)} H(q)    (1)

The lengths of the four arms of the skeleton are denoted (h_p^-, h_p^+, v_p^-, v_p^+). Based on the assumption that similar colors in an image correspond to the same structure, the formula

r* = max_{r ∈ [1, L]} ( r · ∏_{i ∈ [1, r]} δ(p, p_i) )    (2)

determines the four arm lengths (h_p^-, h_p^+, v_p^-, v_p^+) of the cross skeleton of the center pixel p. In formula (2), δ is an indicator function that measures the degree of color difference between pixels, p_i is a pixel along one cross direction of p, the coordinates of p in the image are (x_p, y_p), r* is the arm length in that cross direction, and L is the search range along that direction (L = 17 in the experiments). When p_i lies horizontally to the left of p, its coordinates can be expressed as (x_p − i, y_p), L is the search range along the horizontal left direction of pixel p, and the result r* is h_p^-. The lengths h_p^+, v_p^- and v_p^+ of the other three directions are determined in the same way.
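The arm-growing rule of formula (2) can be sketched in plain Python. This is a minimal sketch under stated assumptions: the input is a grayscale 2-D list (the patent's δ compares colors), and the threshold `tau` and both function names are illustrative, not from the patent:

```python
def arm_length(img, y, x, dy, dx, L=17, tau=20):
    """Length of one cross-skeleton arm at pixel (y, x), in the spirit of formula (2).

    Extends step by step in direction (dy, dx) while the indicator
    delta(p, p_i) stays 1, i.e. while the intensity difference to the
    center pixel stays within tau and the pixel stays inside the image.
    """
    h, w = len(img), len(img[0])
    r_star = 0
    for r in range(1, L + 1):
        ny, nx = y + dy * r, x + dx * r
        if not (0 <= ny < h and 0 <= nx < w):
            break
        if abs(img[ny][nx] - img[y][x]) > tau:  # delta(p, p_i) = 0: stop growing
            break
        r_star = r
    return r_star

def cross_skeleton(img, y, x, L=17, tau=20):
    """Arm lengths (h_p^-, h_p^+, v_p^-, v_p^+) of pixel p = (y, x)."""
    return (arm_length(img, y, x, 0, -1, L, tau),   # h_p^-: horizontal left
            arm_length(img, y, x, 0,  1, L, tau),   # h_p^+: horizontal right
            arm_length(img, y, x, -1, 0, L, tau),   # v_p^-: vertical up
            arm_length(img, y, x,  1, 0, L, tau))   # v_p^+: vertical down

def adaptive_region(img, y, x, L=17, tau=20):
    """Formula (1): U(p) = union of horizontal arms H(q) over q on V(p)."""
    _, _, vm, vp = cross_skeleton(img, y, x, L, tau)
    region = set()
    for q in range(y - vm, y + vp + 1):
        qhm, qhp, _, _ = cross_skeleton(img, q, x, L, tau)
        region |= {(q, c) for c in range(x - qhm, x + qhp + 1)}
    return region
```

On a uniform patch the region grows to the full search range; at a color edge the arms stop, so the window hugs the object boundary, which is the adaptivity the patent relies on.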
2. Finally, determine the Census transform window of each pixel. Let U_ref(m) and U_tar(n) denote the cross-skeleton-based adaptive regions of the corresponding points m and n under disparity hypothesis d. The Census transform windows of arbitrary shape and size of m and n in their respective images can then be expressed as U_d(m) and U_d(n):

U_d(m) = {(x, y) | (x, y) ∈ U_ref(m), (x − d, y) ∈ U_tar(n)}    (3)

U_d(n) = {(x, y) | (x, y) ∈ U_tar(n), (x + d, y) ∈ U_ref(m)}    (4)

Since m = (x, y) and n = (x − d, y), U_d(m) and U_d(n) have the same shape and size; let N(m, n) denote that size.
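The combined-window construction above is a shifted set intersection. A minimal sketch, assuming the adaptive regions are given as sets of (row, col) pixels (the representation and the function name are illustrative):

```python
def census_window(u_ref_m, u_tar_n, d):
    """Combined Census windows U_d(m), U_d(n) under disparity hypothesis d.

    u_ref_m / u_tar_n: sets of (row, col) pixels, the adaptive regions of
    the corresponding points m = (x, y) and n = (x - d, y).
    """
    # Keep pixels of U_ref(m) whose disparity-shifted twin lies in U_tar(n)
    u_d_m = {(r, c) for (r, c) in u_ref_m if (r, c - d) in u_tar_n}
    # Symmetrically for U_tar(n), shifting in the opposite direction
    u_d_n = {(r, c) for (r, c) in u_tar_n if (r, c + d) in u_ref_m}
    return u_d_m, u_d_n
```

By construction the two windows are translates of each other, so they always share the same size N(m, n); this is the logical AND of the two adaptive regions mentioned in the claims.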
Step 2: use the Hamming distance after the Census transform as the matching cost and compute the initial disparity with a local optimization method. This step organically merges the two stages of matching cost computation and matching cost aggregation, because the previous step has already selected a suitable Census transform window from the structure and color information of the image, and this window adapts well to low-texture regions and disparity discontinuities. Aggregating the Census matching cost again with a fixed window would not only reintroduce the dilemma between low-texture regions and disparity-discontinuous regions but would also add considerable computation.
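The cost of this step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the input is grayscale, the window is a set of (row, col) pixels as in the assumed helpers above, and the winner-take-all pick is a plain arg-min over disparity hypotheses:

```python
def census_bits(img, center, window):
    """Census transform: one bit per window pixel, 1 if darker than the center."""
    cy, cx = center
    return [1 if img[r][c] < img[cy][cx] else 0 for (r, c) in sorted(window)]

def hamming(a, b):
    """Hamming distance between two equal-length bit strings: the matching cost."""
    return sum(x != y for x, y in zip(a, b))

def wta_disparity(costs):
    """Winner-take-all: pick the disparity hypothesis with the minimum cost.

    costs: dict mapping disparity d -> Hamming cost for one pixel.
    """
    return min(costs, key=costs.get)
```

Because the Census bits encode only the ordering of intensities relative to the center, a global gain or offset between the left and right views leaves the bit strings, and hence the Hamming cost, unchanged; this is the source of the robustness to amplitude distortion.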
Step 3: after the initial disparity is obtained, a two-step disparity refinement method is proposed to further improve disparity accuracy.
1. Because the local optimization merges matching cost computation with matching cost aggregation, the initial disparity contains a certain amount of noise, and the left-right consistency check of the second refinement step is quite sensitive to noise. Therefore, based on the assumption that disparity varies smoothly within regions of similar color, a first refinement of the initial disparity is performed using U_ref, which reflects image structure and color. As shown in Fig. 1, for any pixel m in the reference image, a statistical histogram of the initial disparity H_m(d) is built by counting the frequency of every initial disparity occurring within U_ref(m), and the peak of the histogram is selected as the first-step refined disparity d_1:

d_1 = arg max_{d ∈ [d_min, d_max]} H_m(d)    (5)
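The first refinement step is a mode filter: within each pixel's adaptive region, the most frequent initial disparity wins. A minimal sketch, assuming the region is a set of (row, col) pixels as in the earlier illustrative helpers:

```python
from collections import Counter

def refine_disparity(disp, region):
    """First-step refinement: build the disparity histogram over the
    adaptive region of pixel m and return its peak."""
    hist = Counter(disp[r][c] for (r, c) in region)
    return hist.most_common(1)[0][0]  # disparity with the highest frequency
```

Unlike a plain median or box filter, the voting set follows the cross-skeleton region, so the smoothing never mixes disparities across a color edge.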
2. To further detect occluded regions in the disparity image and exclude unreliable disparities, a second refinement is performed following the flow of Fig. 3, where d_1L and d_1R denote the first-step refined disparity maps obtained with the left and the right image as reference, respectively, and U_left and U_right denote the cross-skeleton-based adaptive regions of each pixel in the left and the right image.
First, a left-right consistency check measures the confidence of the disparities in d_1L and d_1R and builds a disparity confidence matrix: if |d_1L(x, y) − d_1R(x + d_1L(x, y), y)| < T (T is 1 pixel in the experiments), the disparity is considered reliable, otherwise unreliable. Second, either d_1L or d_1R is processed further (d_1L was selected in the experiments). The processing is still based on the disparity statistical histogram, but now only the reliable disparities within U_left are counted, so that occluded regions and unreliable disparities are excluded from the statistics, further improving disparity accuracy.
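The consistency test above can be sketched as a boolean confidence matrix. A minimal sketch with T = 1 pixel as in the experiments; correspondences that fall outside the image are marked unreliable (a choice made here for the sketch, not stated in the patent):

```python
def confidence_matrix(d1l, d1r, T=1):
    """Left-right check: True where |d1L(x,y) - d1R(x + d1L(x,y), y)| < T."""
    h, w = len(d1l), len(d1l[0])
    conf = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xr = x + d1l[y][x]  # position of the correspondence in the right map
            if 0 <= xr < w and abs(d1l[y][x] - d1r[y][xr]) < T:
                conf[y][x] = True
    return conf
```

The matrix then gates the second histogram pass: only pixels whose entry is True contribute votes, which is how occlusions are kept out of the statistics.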
In summary, aiming at the difficulty existing stereo matching algorithms have in obtaining high-precision disparity in real visual navigation scenes with amplitude distortion, the invention proposes a local stereo matching method based on an adaptive Census transform that organically merges matching cost computation and matching cost aggregation. Experimental results show that the algorithm obtains higher-precision disparity maps for left and right views that differ in illumination intensity and exposure time, balances matching accuracy with robustness to amplitude distortion, and better suits the application scenario of UAV visual navigation.
The above embodiment does not limit the technical scheme of the invention in any form; any technical scheme obtained by equivalent substitution or equivalent transformation falls within the protection scope of the invention.

Claims (5)

1. A photometric-insensitive stereo matching method based on an adaptive Census transform, characterized by comprising the following steps:
(1) determining an adaptive region based on a cross skeleton, thereby obtaining a Census transform window of arbitrary shape and size;
(2) using the Hamming distance after the Census transform as the matching cost, and computing the initial disparity with a local optimization method;
(3) after obtaining the initial disparity, further improving disparity accuracy with a two-step disparity refinement method.

2. The method according to claim 1, characterized in that the Census transform window of arbitrary shape and size is obtained by the following steps:
(1) constructing, from the structure and color information of the image, a cross-skeleton-based adaptive region for each pixel in the reference image and in the target image;
(2) combining the adaptive region of the point to be matched in the reference image and the adaptive region of the corresponding point in the target image by a logical AND operation to obtain the Census transform window.

3. The method according to claim 2, characterized in that the specific algorithm of the cross-skeleton-based adaptive region is:
(1) for any pixel p in the image, determining a cross-shaped skeleton comprising a horizontal and a vertical direction, denoted H(p) and V(p) respectively, with the lengths of its four arms denoted (h_p^-, h_p^+, v_p^-, v_p^+); the cross-skeleton-based adaptive region of pixel p is expressed as:

U(p) = ∪_{q ∈ V(p)} H(q)    (1)

(2) based on the assumption that similar colors in the image correspond to the same structure, using the formula

r* = max_{r ∈ [1, L]} ( r · ∏_{i ∈ [1, r]} δ(p, p_i) )    (2)

to determine the lengths (h_p^-, h_p^+, v_p^-, v_p^+) of the four arms of the cross skeleton of the center pixel p; in formula (2), δ is an indicator function measuring the degree of color difference between pixels, p_i is a pixel along one cross direction of p, the coordinates of p in the image are (x_p, y_p), r* is the arm length in that cross direction, and L is the search range along that direction; when p_i lies horizontally to the left of p, its coordinates are expressed as (x_p − i, y_p), L is the search range along the horizontal left direction of pixel p, and r* evaluates to h_p^-; the lengths h_p^+, v_p^- and v_p^+ of the other three directions are determined analogously.

4. The method according to claim 1, characterized in that the two-step refinement based on the disparity statistical histogram and the left-right consistency check comprises the following steps:
(1) based on the assumption that disparity varies smoothly within regions of similar color, performing a first, statistical-histogram-based refinement of the initial disparity within the adaptive region, which reflects image structure and color;
(2) to exclude unreliable disparities, performing a left-right consistency check on the left and right disparity maps refined in the first step and building a disparity confidence matrix, then applying the histogram-based optimization within the adaptive region to the reliable disparities only, thereby excluding the influence of occluded regions and unreliable disparities on the statistics.

5. The method according to claim 4, characterized in that the disparity-statistical-histogram optimization is implemented as follows: for any pixel to be matched, counting the occurrence frequency of each disparity within its cross-skeleton-based adaptive region and selecting the disparity value with the highest occurrence probability as the optimization result.
CN201110065196A 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion Expired - Fee Related CN102136136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110065196A CN102136136B (en) 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110065196A CN102136136B (en) 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion

Publications (2)

Publication Number Publication Date
CN102136136A true CN102136136A (en) 2011-07-27
CN102136136B CN102136136B (en) 2012-10-03

Family

ID=44295911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110065196A Expired - Fee Related CN102136136B (en) 2011-03-17 2011-03-17 Luminosity insensitivity stereo matching method based on self-adapting Census conversion

Country Status (1)

Country Link
CN (1) CN102136136B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368826A (en) * 2011-11-07 2012-03-07 天津大学 Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN102447933A (en) * 2011-11-01 2012-05-09 浙江捷尚视觉科技有限公司 Depth information acquisition method based on binocular framework
CN102930530A (en) * 2012-09-26 2013-02-13 苏州工业职业技术学院 Stereo matching method of double-viewpoint image
CN103440653A (en) * 2013-08-27 2013-12-11 北京航空航天大学 Binocular vision stereo matching method
CN104427324A (en) * 2013-09-02 2015-03-18 联咏科技股份有限公司 Parallax error calculation method and three-dimensional matching device thereof
CN104867135A (en) * 2015-05-04 2015-08-26 中国科学院上海微系统与信息技术研究所 High-precision stereo matching method based on guiding image guidance
CN106131448A (en) * 2016-07-22 2016-11-16 石家庄爱赛科技有限公司 The 3 d stereoscopic vision system of brightness of image can be automatically adjusted
CN106355608A (en) * 2016-09-09 2017-01-25 南京信息工程大学 Stereoscopic matching method on basis of variable-weight cost computation and S-census transformation
CN106651975A (en) * 2016-12-01 2017-05-10 大连理工大学 Census adaptive transformation algorithm based on multiple codes
CN106846290A (en) * 2017-01-19 2017-06-13 西安电子科技大学 Stereoscopic parallax optimization method based on anti-texture cross and weights cross
CN107240083A (en) * 2017-06-29 2017-10-10 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN107330932A (en) * 2017-06-16 2017-11-07 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN109003295A (en) * 2018-04-11 2018-12-14 中冶沈勘工程技术有限公司 A kind of unmanned plane aviation image fast matching method
CN109724537A (en) * 2019-02-11 2019-05-07 吉林大学 A binocular three-dimensional imaging method and system
CN113808185A (en) * 2021-11-19 2021-12-17 北京的卢深视科技有限公司 Image depth recovery method, electronic device and storage medium
CN119512490A (en) * 2024-10-21 2025-02-25 深圳市佳创计算机科技有限公司 Intelligent motherboard display drive system based on 3D display and augmented reality

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100007720A1 (en) * 2008-06-27 2010-01-14 Beddhu Murali Method for front matching stereo vision
CN101841730A (en) * 2010-05-28 2010-09-22 浙江大学 Real-time stereoscopic vision implementation method based on FPGA

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100007720A1 (en) * 2008-06-27 2010-01-14 Beddhu Murali Method for front matching stereo vision
CN101841730A (en) * 2010-05-28 2010-09-22 浙江大学 Real-time stereoscopic vision implementation method based on FPGA

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Chris Murphy et al., "Low-Cost Stereo Vision on an FPGA", 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007), 2007-04-25, entire document; relevant to claims 1-5. *
Takeo Kanade et al., "A Stereo Matching Algorithm with an Adaptive Window: Theory and Experiment", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 9, 1994-09-30, p. 496 para. 1, Fig. 3; relevant to claim 1. *
Bai Ming et al., "Research and Progress of Binocular Stereo Matching Algorithms", Control and Decision, vol. 23, no. 7, 2008-07-15, entire document; relevant to claims 1-5. *
Ding Jingting et al., "High-Performance Implementation of Stereo Vision Matching Based on FPGA", Journal of Electronics & Information Technology, vol. 33, no. 3, 2011-03-15, p. 598 section 2; relevant to claim 1. *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102447933A (en) * 2011-11-01 2012-05-09 浙江捷尚视觉科技有限公司 Depth information acquisition method based on binocular framework
CN102368826A (en) * 2011-11-07 2012-03-07 天津大学 Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN102930530A (en) * 2012-09-26 2013-02-13 苏州工业职业技术学院 Stereo matching method of double-viewpoint image
CN103440653A (en) * 2013-08-27 2013-12-11 北京航空航天大学 Binocular vision stereo matching method
CN104427324A (en) * 2013-09-02 2015-03-18 联咏科技股份有限公司 Parallax error calculation method and three-dimensional matching device thereof
CN104867135B (en) * 2015-05-04 2017-08-25 中国科学院上海微系统与信息技术研究所 A kind of High Precision Stereo matching process guided based on guide image
CN104867135A (en) * 2015-05-04 2015-08-26 中国科学院上海微系统与信息技术研究所 High-precision stereo matching method based on guiding image guidance
CN106131448B (en) * 2016-07-22 2019-05-10 石家庄爱赛科技有限公司 The three-dimensional stereoscopic visual system of brightness of image can be automatically adjusted
CN106131448A (en) * 2016-07-22 2016-11-16 石家庄爱赛科技有限公司 The 3 d stereoscopic vision system of brightness of image can be automatically adjusted
CN106355608B (en) * 2016-09-09 2019-03-26 南京信息工程大学 The solid matching method with S-census transformation is calculated based on Changeable weight cost
CN106355608A (en) * 2016-09-09 2017-01-25 南京信息工程大学 Stereoscopic matching method on basis of variable-weight cost computation and S-census transformation
CN106651975B (en) * 2016-12-01 2019-08-13 大连理工大学 A kind of Census adaptive transformation method based on odd encoder
CN106651975A (en) * 2016-12-01 2017-05-10 大连理工大学 Census adaptive transformation algorithm based on multiple codes
CN106846290A (en) * 2017-01-19 2017-06-13 西安电子科技大学 Stereoscopic parallax optimization method based on anti-texture cross and weights cross
CN106846290B (en) * 2017-01-19 2019-10-11 西安电子科技大学 Stereo disparity optimization method based on anti-texture cross and weight cross
CN107330932A (en) * 2017-06-16 2017-11-07 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN107240083A (en) * 2017-06-29 2017-10-10 海信集团有限公司 The method and device of noise in a kind of repairing disparity map
CN109003295A (en) * 2018-04-11 2018-12-14 中冶沈勘工程技术有限公司 A kind of unmanned plane aviation image fast matching method
CN109003295B (en) * 2018-04-11 2021-07-23 中冶沈勘工程技术有限公司 Rapid matching method for aerial images of unmanned aerial vehicle
CN109724537A (en) * 2019-02-11 2019-05-07 吉林大学 A binocular three-dimensional imaging method and system
CN113808185A (en) * 2021-11-19 2021-12-17 北京的卢深视科技有限公司 Image depth recovery method, electronic device and storage medium
CN113808185B (en) * 2021-11-19 2022-03-25 北京的卢深视科技有限公司 Image depth recovery method, electronic device and storage medium
CN119512490A (en) * 2024-10-21 2025-02-25 深圳市佳创计算机科技有限公司 Intelligent motherboard display drive system based on 3D display and augmented reality

Also Published As

Publication number Publication date
CN102136136B (en) 2012-10-03

Similar Documents

Publication Publication Date Title
CN102136136A (en) Luminosity insensitivity stereo matching method based on self-adapting Census conversion
Kim et al. Multi-view image and ToF sensor fusion for dense 3D reconstruction
CN102903096B (en) Monocular video based object depth extraction method
CN107392958B (en) Method and device for determining object volume based on binocular stereo camera
CN102930530B (en) Stereo matching method of double-viewpoint image
CN113763269B (en) Stereo matching method for binocular images
US20180091798A1 (en) System and Method for Generating a Depth Map Using Differential Patterns
US11995858B2 (en) Method, apparatus and electronic device for stereo matching
CN107886477A (en) Unmanned neutral body vision merges antidote with low line beam laser radar
CN101720047A (en) Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation
CN104065947B (en) The depth map acquisition methods of a kind of integration imaging system
CN104867135A (en) High-precision stereo matching method based on guiding image guidance
CN109255811A (en) A kind of solid matching method based on the optimization of confidence level figure parallax
CN106651897B (en) Parallax correction method based on super-pixel segmentation
CN102368826A (en) Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN113888639B (en) Visual odometer positioning method and system based on event camera and depth camera
CN106447661A (en) Rapid depth image generating method
CN105701787B (en) Depth map fusion method based on confidence level
CN103269435A (en) Binocular to multi-view virtual viewpoint synthetic method
US20230267720A1 (en) Cooperative lidar object detection via feature sharing in deep networks
CN113920183A (en) Monocular vision-based vehicle front obstacle distance measurement method
CN104166987A (en) Parallax estimation method based on improved adaptive weighted summation and belief propagation
CN105277169A (en) Image segmentation-based binocular range finding method
CN116029996A (en) Stereo matching method and device and electronic equipment
CN106530336A (en) Stereo matching algorithm based on color information and graph-cut theory

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121003

Termination date: 20150317

EXPY Termination of patent right or utility model