CN103325120A - Fast adaptive support-weight binocular vision stereo matching method - Google Patents

Fast adaptive support-weight binocular vision stereo matching method

Info

Publication number
CN103325120A
Authority
CN
China
Prior art keywords
pixel
parallax
support
window
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102689033A
Other languages
Chinese (zh)
Inventor
Gexiang Zhang (张葛祥)
Tao Wang (王涛)
Tao Guan (关桃)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN2013102689033A priority Critical patent/CN103325120A/en
Publication of CN103325120A publication Critical patent/CN103325120A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a fast adaptive support-weight binocular vision stereo matching method comprising the following steps: reading an existing binocular image pair to be matched; computing the matching cost; aggregating the matching cost with adaptive weights; computing the initial disparity; correcting the initial disparity to obtain the final disparity matrix; and generating the disparity map and outputting the result. The invention is applicable to the technical field of stereoscopic display and improves the quality of stereo matching.

Description

A Fast Adaptive Support-Weight Binocular Vision Stereo Matching Method

Technical Field

The present invention relates to the technical field of image display, and in particular to a fast adaptive support-weight binocular vision stereo matching method.

Background Art

Vision is an important means by which humans understand and perceive the world; about 75% of the information in human cognition of the outside world is obtained through the visual system. From traditional black-and-white photographs and black-and-white televisions to today's high-resolution color digital photographs and high-definition digital television, the demands placed on visual experience keep rising. Although traditional two-dimensional video can already provide high-definition planar information, humans live in a three-dimensional world, and two-dimensional video can never give an "immersive" visual experience. Binocular stereo vision breaks through the "one-eyed view of the world" limitation of traditional two-dimensional video: a computer simulates the human visual system and obtains the three-dimensional information of a scene from two or more two-dimensional views of it, so that a viewer of a stereoscopic display can perceive the real three-dimensional world. Binocular stereo vision is an important research field of computer vision and comprises four steps: image acquisition, camera calibration, stereo matching, and three-dimensional reconstruction. Among these, stereo matching is the key technology, and its accuracy directly affects the quality of the three-dimensional reconstruction.

Although a large number of stereo matching methods already exist, many problems remain in practical applications. By optimization strategy, stereo matching methods fall into two categories: global methods and local methods. Global methods achieve high matching accuracy, but their computational structure is complex and hard to implement in hardware; local methods have a simple structure that is easy to implement in hardware, but their matching accuracy is comparatively low. Since Yoon proposed the adaptive support-weight method, the stereo matching performance of local methods has improved greatly, even surpassing some global methods; however, the Yoon adaptive support-weight method still has one major drawback: it is slow, its computation time being longer than that of other local algorithms. Inventing a fast adaptive support-weight method that combines fast computation with high matching performance is therefore of great significance and helps bring stereo matching technology to practical problems.

Summary of the Invention

The technical problem to be solved by the present invention is to overcome the deficiencies of the Yoon adaptive support-weight method by proposing a fast adaptive support-weight method based on an extended rank transform (ERT). Taking the left view of a binocular stereo image pair as the image to be matched and the right view as the matching image, the method finds, for every pixel to be matched in the left view, the corresponding matching point in the right view, i.e., it computes the disparity map of the left view.

To solve this technical problem, the present invention adopts a technique comprising the following steps:

S1. Read the existing binocular image pair to be matched, I_l and I_r, and obtain the size and color-channel information of the images, where I_l denotes the left view (the image to be matched) and I_r denotes the right view (the matching image);

S2. Compute the matching cost between pixels of I_l and I_r, comprising:

S21. For the image to be matched I_l, determine a square support window N and compute the gray-level difference diff(p,q) between the center pixel and each support pixel in the window, where diff(p,q) = I(p) − I(q), and I(p) and I(q) are the gray values of the center pixel p and the support pixel q, respectively;

S22. According to the gray-level difference diff(p,q) obtained in S21, assign each pixel in the square support window N to one of five levels:

$$\mathrm{fuz} = \begin{cases} -2 & \mathrm{diff}(p,q) < -s \\ -1 & -s \le \mathrm{diff}(p,q) < -t \\ 0 & -t \le \mathrm{diff}(p,q) \le t \\ 1 & t < \mathrm{diff}(p,q) \le s \\ 2 & s < \mathrm{diff}(p,q) \end{cases}$$

where s and t are empirically set thresholds, chosen on the principle of reducing the influence of image noise as far as possible;
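For illustration, a minimal NumPy sketch of this five-level quantization (the function name and array layout are our assumptions, as is the ordering t < s, which the interval structure above implies):

```python
import numpy as np

def rank_levels(window: np.ndarray, s: float, t: float) -> np.ndarray:
    """Quantize a square support window into the five ERT levels {-2,...,2}.

    `window` holds the gray values of the support window centered on p;
    diff(p, q) = I(p) - I(q) for every support pixel q.  Assumes t < s,
    so [-t, t] is the inner band and [-s, -t), (t, s] the middle bands.
    """
    c = window.shape[0] // 2
    diff = window[c, c].astype(float) - window.astype(float)
    levels = np.zeros_like(diff, dtype=np.int8)
    levels[diff < -s] = -2
    levels[(diff >= -s) & (diff < -t)] = -1
    levels[(diff > t) & (diff <= s)] = 1
    levels[diff > s] = 2
    return levels  # this is the fuz matrix used in S23
```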

S23. Obtain the initial similarity measure S_d by statistics:

Let fuz denote the numerical matrix obtained by applying the five-level rank transform of S22 to a support window; the matrix has the same size as the square support window N. Applying the rank transform to the pixel to be matched in I_l and to the candidate matching pixel in I_r yields two rank-transform matrices, fuz_l and fuz_r. Counting the positions inside the square support window at which fuz_l and fuz_r carry the same level gives the initial similarity measure

$$S_d = \sum_{q \in N} m, \qquad m = \begin{cases} 1 & \text{if } \mathrm{fuz}_l = \mathrm{fuz}_r \\ 0 & \text{otherwise} \end{cases}$$

where m indicates whether fuz_l and fuz_r have the same level at the corresponding position: m = 1 if they do, m = 0 otherwise;

S24. For each disparity value d ∈ D, accumulate the initial similarity measures inside an n × n square statistical window M centered on the pixel to be matched in the image to be matched, and obtain the matching cost between the pixel to be matched and the candidate matching pixel in the matching image from the ERT similarity function

$$C_{ERT}(q, \bar{q}_d) = \sum_{n \times n} S_d(q)$$

where d denotes the horizontal disparity between the pixel to be matched and the candidate matching pixel, and D = {d_min, ..., d_max};
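Steps S23 and S24 can be combined into a single cost routine. A minimal sketch continuing the one above (function and parameter names are illustrative; it assumes the usual rectified-pair convention that the candidate pixel sits d pixels to the left in the right view, and omits image-boundary handling, which a real implementation would cover by padding):

```python
import numpy as np  # uses rank_levels() from the sketch above

def ert_cost(left: np.ndarray, right: np.ndarray, y: int, x: int, d: int,
             win: int, stat: int, s: float, t: float) -> int:
    """C_ERT(q, q_bar_d): sum of the per-pixel similarity S_d over the
    n-by-n statistical window M centered on the pixel to be matched.

    S_d counts, inside the square support window, the positions where
    the left and right rank-level matrices agree."""
    r, n = win // 2, stat // 2
    cost = 0
    for dy in range(-n, n + 1):          # statistical window M
        for dx in range(-n, n + 1):
            yy, xx = y + dy, x + dx
            wl = left[yy - r:yy + r + 1, xx - r:xx + r + 1]
            wr = right[yy - r:yy + r + 1, xx - d - r:xx - d + r + 1]
            # S_d: number of equal levels at corresponding positions
            cost += int(np.sum(rank_levels(wl, s, t) == rank_levels(wr, s, t)))
    return cost
```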

S3. Aggregate the matching cost with adaptive weights, comprising:

S31. Compute the support weight w(p,q): using color similarity and geometric proximity, compute the support weight w(p,q) of each support pixel q in the matching support window for the pixel to be matched p, w(p,q) = f_s(Δc_pq) · f_p(Δg_pq), with

$$f_s(\Delta c_{pq}) = \exp\!\left(-\frac{\Delta c_{pq}}{\gamma_c}\right), \qquad f_p(\Delta g_{pq}) = \exp\!\left(-\frac{\Delta g_{pq}}{\gamma_p}\right)$$

where f_s(Δc_pq) is the grouping strength determined by color similarity, f_p(Δg_pq) is the grouping strength determined by geometric proximity, and Δc_pq is the Euclidean distance between the two pixel colors c_p = [R_p, G_p, B_p] and c_q = [R_q, G_q, B_q] in RGB color space,

$$\Delta c_{pq} = \sqrt{(R_p - R_q)^2 + (G_p - G_q)^2 + (B_p - B_q)^2}.$$

Δg_pq is the Euclidean distance between the spatial positions of the center pixel and the support pixel: with pixel p at image-domain coordinates p(x,y) and pixel q at q(x',y'),

$$\Delta g_{pq} = \sqrt{(x - x')^2 + (y - y')^2},$$

and γ_c, γ_p are user-specified parameters that adjust the influence of color similarity and geometric proximity, respectively, on the support weight;
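A sketch of this weight computation under the same assumptions; the values of γ_c and γ_p are left to the caller:

```python
import numpy as np

def support_weight(color_p: np.ndarray, color_q: np.ndarray,
                   yx_p: tuple, yx_q: tuple,
                   gamma_c: float, gamma_p: float) -> float:
    """w(p, q) = f_s(dc) * f_p(dg) = exp(-(dc/gamma_c + dg/gamma_p)).

    dc: Euclidean distance of the RGB colors of p and q;
    dg: Euclidean distance of their image coordinates."""
    dc = float(np.linalg.norm(color_p.astype(float) - color_q.astype(float)))
    dg = float(np.hypot(yx_p[0] - yx_q[0], yx_p[1] - yx_q[1]))
    return float(np.exp(-(dc / gamma_c + dg / gamma_p)))
```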

S32. Aggregate the matching cost C_ERT(q, q̄_d) obtained in S24 with the support weight w(p,q) obtained in S31, which yields:

$$E(p, \bar{p}_d) = \frac{\sum_{q \in N_p,\ \bar{q} \in N_{\bar{p}_d}} w(p,q)\, C_{ERT}(q, \bar{q}_d)}{\sum_{q \in N_p,\ \bar{q} \in N_{\bar{p}_d}} w(p,q)}$$

where p̄_d and q̄_d denote the matching pixels in the matching image that correspond to the pixels p and q of the image to be matched when the disparity is d, N_p denotes the support window in the reference image, N_{p̄_d} denotes the corresponding support window in the target image, and N_p = N_{p̄_d};
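A sketch of this aggregation for one pixel and one disparity, reusing support_weight() from above. Precomputing the raw ERT costs for a given disparity into an H×W array `cost_d` is our simplification, not the patent's formulation; note the weights come from the reference (left) view only, as in the formula above:

```python
def aggregate_cost(left_rgb, cost_d, y: int, x: int, win: int,
                   gamma_c: float, gamma_p: float) -> float:
    """E(p, p_bar_d): weighted mean of the ERT costs over the support
    window N_p, with weights w(p, q) from support_weight()."""
    r = win // 2
    num = den = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            w = support_weight(left_rgb[y, x], left_rgb[yy, xx],
                               (y, x), (yy, xx), gamma_c, gamma_p)
            num += w * cost_d[yy, xx]
            den += w
    return num / den
```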

S4. Compute the initial disparity: apply the local optimization method WTA (Winner-Take-All) to the weighted aggregation costs obtained in S3 to find the maximum weighted aggregation result; the disparity value corresponding to the maximum weighted aggregation result is the initial disparity d_p of the pixel. The initial disparity of every pixel is stored in the initial disparity matrix:

$$d_p = \arg\max_{d \in D} E(p, \bar{p}_d);$$
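Since C_ERT counts level agreements, larger aggregated values indicate better matches, so WTA here is an argmax over the disparity axis. A sketch, assuming the aggregated scores for all disparities have been stacked into a single array:

```python
import numpy as np

def initial_disparity(E: np.ndarray, d_min: int = 0) -> np.ndarray:
    """WTA: E has shape (|D|, H, W); for each pixel pick the disparity
    of the maximum aggregated score (argmax, since higher is better)."""
    return d_min + np.argmax(E, axis=0)
```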

S5. Correct the initial disparity obtained in S4 to obtain the final disparity matrix, comprising:

S51. Determine a correction window N_c centered on the pixel p to be corrected, and adaptively assign each pixel in the correction window a suitable support weight w_c according to color similarity and geometric proximity, of the same form as w(p,q) in S31:

$$w_c(p,q) = \exp\!\left(-\left(\frac{\Delta c_{pq}}{\gamma_c} + \frac{\Delta g_{pq}}{\gamma_p}\right)\right);$$

S52. Observe the initial disparity distribution of all pixels in the correction window, count the number of times each disparity d ∈ D occurs there, and aggregate, for each disparity value d, its number of occurrences with the corresponding weights; the disparity with the maximum aggregation result is the final disparity d_p_final of the pixel to be corrected, and the result is stored in the final disparity matrix:

$$d_{p\_final} = \arg\max_{d \in D}\left\{\sum_{q \in N_c} w_c(p,q) \times k\right\}, \qquad k = \begin{cases} 1 & \text{if } d_p(q) = d \\ 0 & \text{otherwise} \end{cases}$$

where k indicates whether the initial disparity of a pixel in the correction window equals the disparity d being counted: k = 1 if it does, k = 0 otherwise;
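In effect, S51 and S52 perform a weighted vote: every pixel in the correction window votes for its own initial disparity with weight w_c. A sketch for a single pixel, reusing support_weight() with the correction-window parameters:

```python
from collections import defaultdict

def correct_pixel(disp, left_rgb, y: int, x: int, win_c: int,
                  gamma_c: float, gamma_p: float) -> int:
    """d_p_final: the disparity whose accumulated weight w_c is largest
    among the initial disparities found inside the correction window N_c."""
    r = win_c // 2
    votes = defaultdict(float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            yy, xx = y + dy, x + dx
            w = support_weight(left_rgb[y, x], left_rgb[yy, xx],
                               (y, x), (yy, xx), gamma_c, gamma_p)
            votes[int(disp[yy, xx])] += w   # k = 1 only when d == disp[yy, xx]
    return max(votes, key=votes.get)
```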

S6. Generate the disparity map and output the result: map the final disparity values d_p_final obtained in S5 into the gray-level range [0, 255] with mapping ratio t, yielding a gray-scale image that represents the disparity information.
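A sketch of this mapping (the clipping to 255 is our safeguard, not stated in the patent):

```python
import numpy as np

def disparity_to_gray(disp: np.ndarray, t: int) -> np.ndarray:
    """Map final disparities into [0, 255] with mapping ratio t:
    larger disparities come out brighter in the disparity map."""
    return np.clip(disp * t, 0, 255).astype(np.uint8)
```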

Further, the thresholds in S22 satisfy t < s, as required by the interval structure of the five levels.

Further, the pixels in the correction window of S51 should, as far as possible, come from the same depth; the adaptive support-weight square correction window satisfies this condition.

Through a fast adaptive support-weight method based on the extended rank transform, the present invention takes the left view of a binocular stereo image pair as the image to be matched and the right view as the matching image, and finds, for every pixel to be matched in the left view, the corresponding matching point in the right view, i.e., it computes the disparity map of the left view. Compared with other methods, it computes faster and matches better, which makes it easier to apply stereo matching technology to practical problems.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the steps of the method according to the present invention.

Detailed Description of Embodiments

The present invention is described in further detail below with reference to a specific embodiment.

This fast adaptive support-weight binocular vision stereo matching method aims to quickly obtain a high-precision dense disparity map of the image pair to be matched. The embodiment takes the Teddy standard test image pair provided by the Middlebury benchmark as the experimental subject, with the left view as the image to be matched and the right view as the matching image; for every pixel to be matched in the left view, the corresponding matching point is found in the right view. As shown in the flow chart of Fig. 1, the following steps are taken:

S1: Read the binocular image pair to be matched. Input the Teddy standard test image pair provided by the Middlebury benchmark, with left view I_l and right view I_r. Reading the image pair yields information such as image size and color channels.

S2: Use the extended rank transform function to compute the matching cost between pixels of I_l and I_r.

First determine the pixel to be matched p, set the square support window size N = 25, and compute the gray-level difference diff(p,q) = I(p) − I(q) between the center pixel p and each support pixel q in the window, where I(p) and I(q) are the gray values of pixels p and q, respectively.

Then, according to the value of diff(p,q), assign each pixel in the support window to one of the five levels defined in S22.

Apply the rank transform to the left and right views to obtain the two rank-transform matrices fuz_l(p) and fuz_r(p̄_d). Counting the positions inside the square support windows of the two views at which fuz_l(p) and fuz_r(p̄_d) carry the same level gives the initial similarity measure S_d between the pixel to be matched p and the candidate matching pixel p̄_d:

$$S_d(q, \bar{q}_d) = \sum_{q \in N} m, \qquad m = \begin{cases} 1 & \text{if } \mathrm{fuz}_l(p) = \mathrm{fuz}_r(\bar{p}_d) \\ 0 & \text{otherwise} \end{cases}$$

Finally, accumulate the initial similarity measures corresponding to each disparity value d ∈ D inside the 3 × 3 square statistical window centered on the pixel to be matched p, obtaining the matching cost C_ERT(q, q̄_d) between the pixel to be matched and the candidate matching pixel, where the initial disparity set is D = {0, 1, 2, 3, ..., 56, 57, 58, 59}.

S3: Aggregate the matching cost with adaptive weights.

First, using color similarity and geometric proximity, compute the support weight w(p,q) of each support pixel q in the matching support window for the pixel to be matched p:

$$w(p,q) = f_s(\Delta c_{pq}) \cdot f_p(\Delta g_{pq}) = \exp\!\left(-\left(\frac{\Delta c_{pq}}{19} + \frac{\Delta g_{pq}}{12.5}\right)\right)$$

$$\Delta c_{pq} = \sqrt{(R_p - R_q)^2 + (G_p - G_q)^2 + (B_p - B_q)^2}$$

$$\Delta g_{pq} = \sqrt{(x - x')^2 + (y - y')^2}$$

Then, using a square support-weight window of size N = N_p = N_{p̄_d} = 25, aggregate the matching costs with their corresponding support weights:

$$E(p, \bar{p}_d) = \frac{\sum_{q \in N_p,\ \bar{q} \in N_{\bar{p}_d}} w(p,q)\, C_{ERT}(q, \bar{q}_d)}{\sum_{q \in N_p,\ \bar{q} \in N_{\bar{p}_d}} w(p,q)}$$

S4: Compute the initial disparity from the aggregation results. Using the local optimization method WTA, the disparity value corresponding to the maximum weighted result is the initial disparity d_p of the pixel, and the result is stored in the initial disparity matrix:

$$d_p = \arg\max_{d \in D} E(p, \bar{p}_d)$$

S5: Correct the initial disparity.

First, determine the correction window N_c = 21 centered on the pixel p to be corrected, and compute, from color similarity and geometric proximity, the support weight w_c(p,q) of each pixel q in the correction window for the pixel p to be corrected:

$$w_c(p,q) = \exp\!\left(-\left(\frac{\Delta c_{pq}}{19} + \frac{\Delta g_{pq}}{10.5}\right)\right)$$

Then observe the initial disparity distribution of all pixels in the correction window, count the number of times each disparity d ∈ D occurs there, and aggregate, for each disparity value d, its number of occurrences with the corresponding weights; the disparity with the maximum aggregation result is the final disparity d_p_final of the pixel to be corrected, and the final disparity result is stored in the final disparity matrix:

$$d_{p\_final} = \arg\max_{d \in D}\left\{\sum_{q \in N_c} w_c(p,q) \times k\right\}, \qquad k = \begin{cases} 1 & \text{if } d_p(q) = d \\ 0 & \text{otherwise} \end{cases}$$

S6: Generate the disparity map and output the result. Map the disparity values in the final disparity matrix into the gray-level range [0, 255] with mapping ratio t = 4; the disparity values of the disparity set D map as follows:

 0×4=0     1×4=4    ...  13×4=52   14×4=56
15×4=60   16×4=64   ...  28×4=112  29×4=116
30×4=120  31×4=124  ...  43×4=172  44×4=176
45×4=180  46×4=184  ...  58×4=232  59×4=236

The larger the disparity value, the closer the mapped value is to 255 and the brighter the pixel appears in the disparity map; the smaller the disparity value, the closer the mapped value is to 0 and the darker the pixel appears.
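Gathered in one place, the concrete parameter values of this Teddy embodiment would drive the illustrative sketches from the method description roughly as follows (the variable names are ours; the rank-transform thresholds s and t are set empirically and not given numerically in the embodiment):

```python
# Parameter values stated in this embodiment:
WIN      = 25          # square support window size N (interpreted here
                       # as the window side length, an assumption)
STAT     = 3           # 3x3 statistical window for the ERT cost
WIN_C    = 21          # correction window N_c
GAMMA_C  = 19.0        # color term, from exp(-(dc/19 + dg/12.5))
GAMMA_P  = 12.5        # proximity term in the aggregation step
GAMMA_PC = 10.5        # proximity term in the correction step
D_SET    = range(0, 60)  # disparity set D = {0, 1, ..., 59}
T_MAP    = 4           # gray-level mapping ratio t

# Per pixel: correct_pixel(...) yields d_p_final, then
# gray = disparity_to_gray(disp_final, T_MAP) produces the output map.
```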

The present invention takes I_l as the image to be matched and I_r as the matching image, finds for every pixel in the left view the corresponding matching point in the right view, and obtains the disparity map of the left view. Table 1 gives a quantitative comparison between the results of this embodiment and those of the Yoon adaptive support-weight method. As Table 1 shows, the mismatch rate of the present invention is lower than that of the Yoon adaptive support-weight method in non-occluded regions, in depth-discontinuity regions, and over all regions, while the stereo matching time of the present invention is about 1/20 of that of the Yoon adaptive support-weight method, i.e., matching is much faster.

Table 1 (the table data are rendered as an image in the original publication)

It must be pointed out that the above embodiment is intended only to further illustrate the present invention, so that those of ordinary skill in the art may understand it better. The present invention has disclosed its preferred embodiment in words; by reading this technical description one can appreciate how it may be optimized and modified, and improvements made without departing from the scope and spirit of the invention shall still fall within the protection scope of the claims of the present invention.

Claims (3)

1. A fast adaptive support-weight binocular vision stereo matching method, characterized by comprising the following steps:
S1, reading an existing binocular image pair to be matched, I_l and I_r, and obtaining the size and color-channel information of the images, wherein I_l denotes the left view, being the image to be matched, and I_r denotes the right view, being the matching image;
S2, computing the matching cost between pixels of I_l and I_r, comprising:
S21, for the image to be matched I_l, determining a square support window N and computing the gray-level difference diff(p,q) between the center pixel and each support pixel in the window, wherein diff(p,q) = I(p) − I(q), and I(p) and I(q) are the gray values of the center pixel p and the support pixel q, respectively;
S22, according to the gray-level difference diff(p,q) obtained in S21, assigning each pixel in the square support window N to one of five levels:

$$\mathrm{fuz} = \begin{cases} -2 & \mathrm{diff}(p,q) < -s \\ -1 & -s \le \mathrm{diff}(p,q) < -t \\ 0 & -t \le \mathrm{diff}(p,q) \le t \\ 1 & t < \mathrm{diff}(p,q) \le s \\ 2 & s < \mathrm{diff}(p,q) \end{cases}$$

wherein s and t are empirically set thresholds, chosen on the principle of reducing the influence of image noise as far as possible;
S23, obtaining the initial similarity measure S_d by statistics: letting fuz denote the numerical matrix obtained by the rank transform of S22, the matrix being the same size as the square support window N, applying the rank transform to the pixel to be matched in I_l and to the candidate matching pixel in I_r to obtain two rank-transform matrices fuz_l and fuz_r, and counting the positions inside the square support window at which fuz_l and fuz_r carry the same level, which yields the initial similarity measure

$$S_d = \sum_{q \in N} m, \qquad m = \begin{cases} 1 & \text{if } \mathrm{fuz}_l = \mathrm{fuz}_r \\ 0 & \text{otherwise} \end{cases}$$

wherein m indicates whether fuz_l and fuz_r have the same level at the corresponding position: if so, m = 1, otherwise m = 0;
S24, accumulating, for each disparity value d ∈ D, the initial similarity measures inside an n × n square statistical window M centered on the pixel to be matched in the image to be matched, and obtaining the matching cost between the pixel to be matched and the candidate matching pixel in the matching image according to the ERT similarity function

$$C_{ERT}(q, \bar{q}_d) = \sum_{n \times n} S_d(q)$$

wherein d denotes the horizontal disparity between the pixel to be matched and the candidate matching pixel, and D = {d_min, ..., d_max};
S3, aggregating the matching cost with adaptive weights, comprising:
S31, computing the support weight w(p,q): using color similarity and geometric proximity, computing the support weight w(p,q) of each support pixel q in the matching support window for the pixel to be matched p,

$$w(p,q) = f_s(\Delta c_{pq}) \cdot f_p(\Delta g_{pq}) = \exp\!\left(-\left(\frac{\Delta c_{pq}}{\gamma_c} + \frac{\Delta g_{pq}}{\gamma_p}\right)\right)$$

wherein f_s(Δc_pq) denotes the grouping strength determined by color similarity, f_p(Δg_pq) denotes the grouping strength determined by geometric proximity, Δc_pq denotes the Euclidean distance between the two pixel colors c_p = [R_p, G_p, B_p] and c_q = [R_q, G_q, B_q] in RGB color space,

$$\Delta c_{pq} = \sqrt{(R_p - R_q)^2 + (G_p - G_q)^2 + (B_p - B_q)^2},$$

Δg_pq denotes the Euclidean distance between the spatial positions of the center pixel and the support pixel, with pixel p at image-domain coordinates p(x,y) and pixel q at q(x',y'),

$$\Delta g_{pq} = \sqrt{(x - x')^2 + (y - y')^2},$$

and γ_c, γ_p are user-specified parameters used to adjust the influence of color similarity and geometric proximity, respectively, on the support weight;
S32, aggregating the matching cost C_ERT(q, q̄_d) obtained in S24 with the support weight w(p,q) obtained in S31, which yields

$$E(p, \bar{p}_d) = \frac{\sum_{q \in N_p,\ \bar{q} \in N_{\bar{p}_d}} w(p,q)\, C_{ERT}(q, \bar{q}_d)}{\sum_{q \in N_p,\ \bar{q} \in N_{\bar{p}_d}} w(p,q)}$$

wherein p̄_d and q̄_d denote the matching pixels in the matching image that correspond to the pixels p and q of the image to be matched when the disparity is d, N_p denotes the support window in the reference image, N_{p̄_d} denotes the corresponding support window in the target image, and N_p = N_{p̄_d};
S4, computing the initial disparity: applying the local optimization method WTA (Winner-Take-All) to the weighted aggregation costs obtained in S3 to find the maximum weighted aggregation result, the disparity value corresponding to the maximum weighted aggregation result being the initial disparity d_p of the pixel, and the initial disparity of every pixel being stored in the initial disparity matrix:

$$d_p = \arg\max_{d \in D} E(p, \bar{p}_d);$$

S5, correcting the initial disparity obtained in S4 to obtain the final disparity matrix, comprising:
S51, determining a correction window N_c centered on the pixel p to be corrected, and adaptively assigning each pixel in the correction window a suitable support weight w_c according to color similarity and geometric proximity;
S52, observing the initial disparity distribution of all pixels in the correction window, counting the number of times each disparity d ∈ D occurs in the correction window, and aggregating, for each disparity value d, its number of occurrences in the correction window with the corresponding weights, the disparity corresponding to the maximum aggregation result being the final disparity d_p_final of the pixel to be corrected, and the result being stored in the final disparity matrix, wherein

$$d_{p\_final} = \arg\max_{d \in D}\left\{\sum_{q \in N_c} w_c(p,q) \times k\right\}, \qquad k = \begin{cases} 1 & \text{if } d_p(q) = d \\ 0 & \text{otherwise} \end{cases}$$

and k indicates whether the initial disparity of a pixel in the correction window equals the disparity d being counted: if so, k = 1, otherwise k = 0;
S6, generating the disparity map and outputting the result: mapping the final disparity values d_p_final obtained in S5 into the gray-level range [0, 255] with mapping ratio t to obtain a gray-scale image representing the disparity information.
2. The fast adaptive support-weight binocular vision stereo matching method according to claim 1, characterized in that the thresholds in S22 satisfy t < s.
3. The fast adaptive support-weight binocular vision stereo matching method according to claim 1, characterized in that the pixels in the correction window of S51 should, as far as possible, come from the same depth, and the adaptive support-weight square correction window satisfies this condition.
CN2013102689033A 2013-06-30 2013-06-30 Rapid self-adaption binocular vision stereo matching method capable of supporting weight Pending CN103325120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102689033A CN103325120A (en) 2013-06-30 2013-06-30 Rapid self-adaption binocular vision stereo matching method capable of supporting weight

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102689033A CN103325120A (en) 2013-06-30 2013-06-30 Rapid self-adaption binocular vision stereo matching method capable of supporting weight

Publications (1)

Publication Number Publication Date
CN103325120A true CN103325120A (en) 2013-09-25

Family

ID=49193843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102689033A Pending CN103325120A (en) 2013-06-30 2013-06-30 Rapid self-adaption binocular vision stereo matching method capable of supporting weight

Country Status (1)

Country Link
CN (1) CN103325120A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971366A (en) * 2014-04-18 2014-08-06 天津大学 Stereoscopic matching method based on double-weight aggregation
CN104123727A (en) * 2014-07-26 2014-10-29 福州大学 Stereo matching method based on self-adaptation Gaussian weighting
CN104200453A (en) * 2014-09-15 2014-12-10 西安电子科技大学 Parallax image correcting method based on image segmentation and credibility
CN104637043A (en) * 2013-11-08 2015-05-20 株式会社理光 Supporting pixel selection method and device and parallax determination method
CN104820991A (en) * 2015-05-15 2015-08-05 武汉大学 Multi-soft-constraint stereo matching method based on cost matrix
CN104915941A (en) * 2014-03-11 2015-09-16 株式会社理光 Method and apparatus for calculating parallax
CN106156748A (en) * 2016-07-22 2016-11-23 浙江零跑科技有限公司 Traffic scene participant's recognition methods based on vehicle-mounted binocular camera
CN106254850A (en) * 2016-08-23 2016-12-21 深圳市捷视飞通科技股份有限公司 The image matching method of double vision point three-dimensional video-frequency and device
TWI566203B (en) * 2013-12-16 2017-01-11 財團法人工業技術研究院 Method and system for depth refinement and data aggregation
CN107025660A (en) * 2016-02-01 2017-08-08 北京三星通信技术研究有限公司 A kind of method and apparatus for determining binocular dynamic visual sensor image parallactic
CN108154529A (en) * 2018-01-04 2018-06-12 北京大学深圳研究生院 The solid matching method and system of a kind of binocular image
CN108230273A (en) * 2018-01-05 2018-06-29 西南交通大学 A kind of artificial compound eye camera three dimensional image processing method based on geological information
CN108305269A (en) * 2018-01-04 2018-07-20 北京大学深圳研究生院 A kind of image partition method and system of binocular image
CN108381549A (en) * 2018-01-26 2018-08-10 广东三三智能科技有限公司 A kind of quick grasping means of binocular vision guided robot, device and storage medium
WO2020177061A1 (en) * 2019-03-04 2020-09-10 北京大学深圳研究生院 Binocular stereo vision matching method and system based on extremum verification

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120163704A1 (en) * 2010-12-23 2012-06-28 Electronics And Telecommunications Research Institute Apparatus and method for stereo matching
CN102572485A (en) * 2012-02-02 2012-07-11 北京大学 Self-adaptive weighted stereo matching algorithm, stereo display and collecting device and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120163704A1 (en) * 2010-12-23 2012-06-28 Electronics And Telecommunications Research Institute Apparatus and method for stereo matching
CN102572485A (en) * 2012-02-02 2012-07-11 北京大学 Self-adaptive weighted stereo matching algorithm, stereo display and collecting device and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KUK-JIN YOON ET AL.: "Adaptive Support-Weight Approach for Correspondence Search", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》, vol. 28, no. 4, 30 April 2006 (2006-04-30), pages 650 - 656 *
TAO GUAN ET AL.: "Performance enhancement of Adaptive Support-Weight approach by tuning parameters", 《2012 IEEE FIFTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE》, 18 October 2012 (2012-10-18), pages 206 - 211, XP032331182, DOI: 10.1109/ICACI.2012.6463153 *
ZHENG GU ET AL.: "Local stereo matching with adaptive support-weight, rank transform and disparity calibration", 《PATTERN RECOGNITION LETTERS》, vol. 29, no. 9, 1 July 2008 (2008-07-01), pages 1230 - 1235, XP022663891, DOI: 10.1016/j.patrec.2008.01.032 *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104637043A (en) * 2013-11-08 2015-05-20 株式会社理光 Supporting pixel selection method and device and parallax determination method
CN104637043B (en) * 2013-11-08 2017-12-05 株式会社理光 Pixel selecting method, device, parallax value is supported to determine method
TWI566203B (en) * 2013-12-16 2017-01-11 財團法人工業技術研究院 Method and system for depth refinement and data aggregation
CN104915941A (en) * 2014-03-11 2015-09-16 株式会社理光 Method and apparatus for calculating parallax
CN104915941B (en) * 2014-03-11 2017-08-04 株式会社理光 The method and apparatus for calculating parallax
CN103971366A (en) * 2014-04-18 2014-08-06 天津大学 Stereoscopic matching method based on double-weight aggregation
CN104123727B (en) * 2014-07-26 2017-02-15 福州大学 Stereo matching method based on self-adaptation Gaussian weighting
CN104123727A (en) * 2014-07-26 2014-10-29 福州大学 Stereo matching method based on self-adaptation Gaussian weighting
CN104200453A (en) * 2014-09-15 2014-12-10 西安电子科技大学 Parallax image correcting method based on image segmentation and credibility
CN104200453B (en) * 2014-09-15 2017-01-25 西安电子科技大学 Parallax image correcting method based on image segmentation and credibility
CN104820991A (en) * 2015-05-15 2015-08-05 武汉大学 Multi-soft-constraint stereo matching method based on cost matrix
CN104820991B (en) * 2015-05-15 2017-10-03 武汉大学 A kind of multiple soft-constraint solid matching method based on cost matrix
CN107025660B (en) * 2016-02-01 2020-07-10 北京三星通信技术研究有限公司 Method and device for determining image parallax of binocular dynamic vision sensor
CN107025660A (en) * 2016-02-01 2017-08-08 北京三星通信技术研究有限公司 A kind of method and apparatus for determining binocular dynamic visual sensor image parallactic
CN106156748A (en) * 2016-07-22 2016-11-23 浙江零跑科技有限公司 Traffic scene participant's recognition methods based on vehicle-mounted binocular camera
CN106254850A (en) * 2016-08-23 2016-12-21 深圳市捷视飞通科技股份有限公司 The image matching method of double vision point three-dimensional video-frequency and device
CN108154529A (en) * 2018-01-04 2018-06-12 北京大学深圳研究生院 The solid matching method and system of a kind of binocular image
CN108305269A (en) * 2018-01-04 2018-07-20 北京大学深圳研究生院 A kind of image partition method and system of binocular image
CN108154529B (en) * 2018-01-04 2021-11-23 北京大学深圳研究生院 Stereo matching method and system for binocular images
CN108305269B (en) * 2018-01-04 2022-05-10 北京大学深圳研究生院 Image segmentation method and system for binocular image
CN108230273A (en) * 2018-01-05 2018-06-29 西南交通大学 A kind of artificial compound eye camera three dimensional image processing method based on geological information
CN108230273B (en) * 2018-01-05 2020-04-07 西南交通大学 Three-dimensional image processing method of artificial compound eye camera based on geometric information
CN108381549A (en) * 2018-01-26 2018-08-10 广东三三智能科技有限公司 A kind of quick grasping means of binocular vision guided robot, device and storage medium
CN108381549B (en) * 2018-01-26 2021-12-14 广东三三智能科技有限公司 Binocular vision guide robot rapid grabbing method and device and storage medium
WO2020177061A1 (en) * 2019-03-04 2020-09-10 北京大学深圳研究生院 Binocular stereo vision matching method and system based on extremum verification

Similar Documents

Publication Publication Date Title
CN103325120A (en) Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN107767413B (en) An Image Depth Estimation Method Based on Convolutional Neural Networks
CN105744256B (en) Based on the significant objective evaluation method for quality of stereo images of collection of illustrative plates vision
CN108648161A (en) The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN103606137B (en) Keep the histogram equalization method of background and detailed information
CN105046708B (en) A kind of color correction objective evaluation method consistent with subjective perception
CN104036501B (en) A kind of objective evaluation method for quality of stereo images based on rarefaction representation
CN106651853B (en) Establishment method of 3D saliency model based on prior knowledge and depth weight
CN101610425B (en) Method for evaluating stereo image quality and device
CN107392950A (en) A kind of across yardstick cost polymerization solid matching method based on weak skin texture detection
CN101771893A (en) Video frequency sequence background modeling based virtual viewpoint rendering method
CN111027415B (en) Vehicle detection method based on polarization image
CN105898278B (en) A kind of three-dimensional video-frequency conspicuousness detection method based on binocular Multidimensional Awareness characteristic
CN103581651A (en) Method for synthesizing virtual sight points of vehicle-mounted multi-lens camera looking-around system
CN116664462B (en) Infrared and visible light image fusion method based on MS-DSC and I_CBAM
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN104506872B (en) A kind of method and device of converting plane video into stereoscopic video
CN103020933A (en) Multi-source image fusion method based on bionic visual mechanism
CN104200453A (en) Parallax image correcting method based on image segmentation and credibility
CN108460794B (en) Binocular three-dimensional infrared salient target detection method and system
CN111882516B (en) An Image Quality Assessment Method Based on Visual Saliency and Deep Neural Networks
CN104469355B (en) Based on the prediction of notable adaptive euphoropsia and the euphoropsia Enhancement Method of nonlinear mapping
CN103065320A (en) Synthetic aperture radar (SAR) image change detection method based on constant false alarm threshold value
CN104144339B (en) A kind of matter based on Human Perception is fallen with reference to objective evaluation method for quality of stereo images
CN104394405B (en) A kind of method for evaluating objective quality based on full reference picture

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130925