CN103325120A - Fast adaptive support-weight binocular vision stereo matching method - Google Patents
Fast adaptive support-weight binocular vision stereo matching method
- Publication number
- CN103325120A (application number CN2013102689033A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- parallax
- support
- window
- image
- Prior art date: 2013-06-30
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a fast adaptive support-weight binocular vision stereo matching method comprising the following steps: reading an existing binocular image pair to be matched; computing the matching cost; aggregating the matching cost with adaptive support weights; computing the initial disparity; correcting the initial disparity to obtain the final disparity matrix; and generating the disparity map and outputting the result. The invention is applicable to the technical field of stereoscopic display and improves the quality of stereo matching.
Description
Technical Field
The present invention relates to the technical field of image display, and in particular to a fast adaptive support-weight binocular vision stereo matching method.
Background Art
Vision is an important means by which human beings understand and perceive the world; about 75% of the information humans acquire about their surroundings comes through the visual system. From traditional black-and-white photographs and television to today's high-resolution color digital photographs and high-definition digital television, expectations for visual experience keep rising. Although conventional two-dimensional video already delivers high-definition planar information, humans live in a three-dimensional world, and planar video can never provide a truly immersive visual experience. Binocular stereo vision breaks the "one-eyed" limitation of conventional two-dimensional video: a computer simulates the human visual system and recovers the three-dimensional information of a scene from two or more two-dimensional views of it, so that viewers of a stereoscopic display can perceive a real three-dimensional world. Binocular stereo vision is an important research field of computer vision and comprises four steps: image acquisition, camera calibration, stereo matching, and three-dimensional reconstruction. Among these, stereo matching is the key technology, and its accuracy directly determines the quality of the three-dimensional reconstruction.
Although a large number of stereo matching methods already exist, many problems remain in practical applications. By optimization strategy, stereo matching methods fall into two categories: global methods and local methods. Global methods achieve high matching accuracy, but their computational structure is complex and hard to implement in hardware; local methods are structurally simple and hardware-friendly, but their matching accuracy is comparatively low. Since Yoon proposed the adaptive support-weight method, the matching performance of local methods has improved greatly, even surpassing some global methods. However, Yoon's adaptive support-weight method still has one major drawback: it is slow, taking longer to compute than other local algorithms. A fast adaptive support-weight method that combines low computation time with high matching performance is therefore of great significance and would help bring stereo matching technology into practical use.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the shortcomings of Yoon's adaptive support-weight method by proposing a fast adaptive support-weight method based on the extended rank transform (ERT). Taking the left view of a binocular stereo image pair as the image to be matched and the right view as the matching image, the method finds, for every pixel to be matched in the left view, the corresponding matching point in the right view, thereby obtaining the disparity map of the left view.
To solve this technical problem, the present invention adopts a technique comprising the following steps:
S1. Read the existing binocular image pair to be matched, I_l and I_r, and obtain the image size and color channel information, where I_l denotes the left view (the image to be matched) and I_r denotes the right view (the matching image).
S2. Compute the matching cost between the pixels of I_l and I_r, comprising:
S21. For the image to be matched I_l, determine a square support window N and compute the gray-level difference diff(p,q) = I(p) - I(q) between the center pixel and each support pixel within the window, where I(p) and I(q) are the gray values of the center pixel p and the support pixel q, respectively.
S22. According to the gray-level difference diff(p,q) obtained in S21, assign each pixel of the square support window N to one of five levels, where s and t are thresholds set empirically so as to suppress the influence of image noise as far as possible; a sketch of one such five-level classification follows.
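By way of illustration, the following Python sketch classifies one support window into the five levels of S22. The exact level boundaries are not reproduced in this text, so the symmetric partition of diff(p,q) at -t, -s, s and t is an assumption, as are the function name and the interior-pixel simplification (no border handling).

```python
import numpy as np

def ert_levels(img, row, col, half, s, t):
    """Extended-rank-transform levels for the square support window N
    centered at (row, col). Assumes a symmetric five-way partition of
    diff(p,q) = I(p) - I(q) at the thresholds -t, -s, s, t (s < t)."""
    win = img[row - half:row + half + 1, col - half:col + half + 1].astype(int)
    diff = int(img[row, col]) - win              # diff(p, q) for every q in N
    fuz = np.full(win.shape, 2, dtype=np.uint8)  # level 2: -s <= diff <= s
    fuz[(diff > s) & (diff <= t)] = 3            # moderately positive
    fuz[diff > t] = 4                            # strongly positive
    fuz[(diff < -s) & (diff >= -t)] = 1          # moderately negative
    fuz[diff < -t] = 0                           # strongly negative
    return fuz
```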
S23. Obtain the initial similarity measure S_d by counting: let fuz denote the numerical matrix produced by the rank transform, of the same size as the square support window N; S_d is the number of positions at which the rank matrices of the pixel to be matched and of the candidate matching pixel hold the same level.
S24. Sum, over the n×n square statistical window M centered on the pixel to be matched, the initial similarity measure obtained for each disparity value d ∈ D, and from this ERT similarity measure function obtain the matching cost between the pixel to be matched and the candidate matching pixel in the matching image, where d denotes the horizontal disparity between the pixel to be matched and the candidate matching pixel and D = {d_min, ..., d_max}. A sketch combining S23 and S24 follows.
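Continuing the sketch for S23-S24 under the same assumptions: S_d counts the equally ranked positions of the two level matrices (as the embodiment below also describes), and the matching cost sums S_d over the statistical window M. The helper names and the interior-pixel simplification are illustrative, not prescribed by the invention.

```python
def similarity(img_l, img_r, row, col, d, half, s, t):
    """S_d: number of positions at which the ERT level matrices of the
    left-view window at p and the right-view window at p - d agree."""
    fuz_l = ert_levels(img_l, row, col, half, s, t)
    fuz_r = ert_levels(img_r, row, col - d, half, s, t)
    return int(np.count_nonzero(fuz_l == fuz_r))

def matching_cost(img_l, img_r, row, col, d, half, s, t, m=1):
    """ERT matching cost: S_d summed over the (2m+1) x (2m+1)
    statistical window M centered at p (m = 1 gives the 3 x 3 window
    of the embodiment). Larger values mean a better match."""
    return sum(similarity(img_l, img_r, r, c, d, half, s, t)
               for r in range(row - m, row + m + 1)
               for c in range(col - m, col + m + 1))
```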
S3. Aggregate the matching cost with adaptive support weights, comprising:
S31. Compute the support weight w(p,q): using color similarity and geometric proximity, compute the support weight w(p,q) that each support pixel q within the matching support window lends to the pixel to be matched p, w(p,q) = f_s(Δc_pq) · f_p(Δg_pq), where f_s(Δc_pq) is the clustering strength determined by color similarity, f_p(Δg_pq) is the clustering strength determined by geometric proximity, and Δc_pq is the Euclidean distance between the pixel colors c_p = [R_p, G_p, B_p] and c_q = [R_q, G_q, B_q] in RGB color space.
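The concrete forms of f_s and f_p are not reproduced in this text; the sketch below assumes the exponential clustering strengths of Yoon's original adaptive support-weight method, with purely illustrative γ values.

```python
def support_weight(color_p, color_q, pos_p, pos_q, gamma_c=7.0, gamma_g=36.0):
    """w(p,q) = f_s(dc_pq) * f_p(dg_pq). dc_pq is the RGB Euclidean
    distance as defined above; the exponential forms and the gamma
    values are assumptions borrowed from Yoon's method."""
    dc = np.linalg.norm(np.asarray(color_p, float) - np.asarray(color_q, float))
    dg = np.linalg.norm(np.asarray(pos_p, float) - np.asarray(pos_q, float))
    return float(np.exp(-dc / gamma_c) * np.exp(-dg / gamma_g))
```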
S32. With the matching cost from S24 and the support weights w(p,q) from S31, aggregate the weighted matching costs over the support window to obtain the aggregated matching cost of each candidate disparity.
S4. Compute the initial disparity: apply the local optimization method WTA (Winner-Take-All) to the weighted aggregation results of S3; the disparity value corresponding to the maximum weighted aggregation result is the initial disparity d_p of the pixel, and the initial disparity of every pixel is stored in the initial disparity matrix. A combined sketch of S32 and S4 follows.
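A combined sketch of S32 and S4, assuming the aggregated cost E(p,d) = Σ_{q∈N(p)} w(p,q)·C(q,d) (the aggregation formula itself is not reproduced in this text) and a precomputed cost volume; because the ERT cost measures similarity, WTA selects the maximum.

```python
def wta_disparity(img_color, cost_volume, row, col, half, d_set):
    """S32 + S4 sketch: aggregate w(p,q) * C(q,d) over the support
    window for every candidate disparity, then take the disparity with
    the largest aggregate (WTA). cost_volume[r, c, i] is assumed to
    hold the ERT matching cost of pixel (r, c) at disparity d_set[i]."""
    agg = np.zeros(len(d_set))
    for r in range(row - half, row + half + 1):
        for c in range(col - half, col + half + 1):
            w = support_weight(img_color[row, col], img_color[r, c],
                               (row, col), (r, c))
            agg += w * cost_volume[r, c, :]
    return d_set[int(np.argmax(agg))]
```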
S5. Correct the initial disparity obtained in S4 to obtain the final disparity matrix, comprising:
S51. Determine the correction window N_c centered on the pixel p to be corrected and, according to color similarity and geometric proximity, adaptively assign a suitable support weight w_c to every pixel in the correction window.
S52. Examine the initial disparity distribution of all pixels within the correction window, count the occurrences of each disparity d ∈ D in the window, and aggregate each disparity value's occurrence count together with the corresponding weights; the disparity corresponding to the maximum aggregation result is the final disparity d_p_final of the pixel to be corrected and is stored in the final disparity matrix, as sketched below.
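A sketch of S51-S52: each pixel q of the correction window votes for its own initial disparity with weight w_c(p,q); reusing the exponential support_weight() above for w_c is again an assumption.

```python
def correct_disparity(img_color, init_disp, row, col, half_c):
    """Weighted-mode correction: aggregate, per disparity value, its
    occurrences within the correction window N_c together with their
    support weights, and keep the disparity with the largest total."""
    votes = {}
    for r in range(row - half_c, row + half_c + 1):
        for c in range(col - half_c, col + half_c + 1):
            w = support_weight(img_color[row, col], img_color[r, c],
                               (row, col), (r, c))
            d = int(init_disp[r, c])
            votes[d] = votes.get(d, 0.0) + w
    return max(votes, key=votes.get)   # d_p_final
```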
S6. Generate the disparity map and output the result: map the final disparity values d_p_final obtained in S5 into the gray-level range [0, 255] with mapping ratio t, obtaining a grayscale image that represents the disparity information.
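A minimal sketch of S6 (the clip is a safeguard added here; with the embodiment's t = 4 and D = {0, ..., 59}, all mapped values already lie in [0, 255]):

```python
def disparity_to_gray(final_disp, t=4):
    """S6 sketch: scale the final disparity matrix into [0, 255]."""
    return np.clip(final_disp.astype(int) * t, 0, 255).astype(np.uint8)
```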
Further, the thresholds in S22 satisfy s < t.
Further, the pixels within the correction window of S51 should as far as possible come from the same depth; the adaptive support-weight square correction window satisfies this condition.
Through a fast adaptive support-weight method based on the extended rank transform, the present invention takes the left view of a binocular stereo image pair as the image to be matched and the right view as the matching image, finds for every pixel to be matched in the left view the corresponding matching point in the right view, and thus obtains the disparity map of the left view. Compared with other methods it computes faster at high matching performance, which makes it better suited to applying stereo matching technology to practical problems.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the steps of the method according to the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to a specific embodiment.
A fast adaptive support-weight binocular vision stereo matching method aims to obtain, quickly, a high-precision dense disparity map of the image pair to be matched. This embodiment takes the Teddy standard test image pair provided by the Middlebury test platform as the experimental object, with the left view as the image to be matched and the right view as the matching image, and finds for every pixel to be matched in the left view the corresponding matching point in the right view. According to the flow chart of Fig. 1, the following steps are taken:
S1: Read the binocular image pair to be matched. Input the Teddy standard test image pair provided by the Middlebury test platform, with left view I_l and right view I_r. Reading the image pair yields information including image size and color channels.
S2: Use the extended rank transform function to compute the matching cost between the pixels of I_l and I_r.
First determine the pixel p to be matched, set the square support window size N = 25, and compute the gray-level difference diff(p,q) = I(p) - I(q) between the center pixel p and each support pixel q of the square support window, where I(p) and I(q) denote the gray values of pixels p and q, respectively.
Then, according to the value of diff(p,q), assign each pixel of the support window to one of the five levels.
Apply the rank transform to the left and right views to obtain the two rank matrices fuz_l(p) and fuz_r(p - d); counting the number of positions at which fuz_l(p) and fuz_r(p - d) hold the same level within the square support window gives the initial similarity measure S_d between the pixel to be matched p and the candidate matching pixel.
Finally, sum the initial similarity measure of each disparity value d ∈ D over the 3×3 square statistical window centered on the pixel to be matched p to obtain the matching cost between the pixel to be matched and the candidate matching pixel, where the disparity set is initialized to D = {0, 1, 2, 3, ..., 56, 57, 58, 59}.
S3: Aggregate the matching cost with support weights:
First, using color similarity and geometric proximity, compute the support weight w(p,q) of each support pixel q for the pixel to be matched p within the matching support window, as defined in S31.
Then, using a square support-weight window, aggregate the matching costs together with the corresponding support weights.
S4: Compute the initial disparity from the aggregation results. Using the local optimization method WTA, the disparity value corresponding to the maximum weighted aggregation result is the initial disparity d_p of the pixel, and the result is stored in the initial disparity matrix.
S5: Correct the initial disparity.
First, determine the correction window N_c = 21 centered on the pixel p to be corrected and, according to color similarity and geometric proximity, compute the support weight w_c(p,q) of each pixel q within the correction window with respect to p.
Then examine the initial disparity distribution of all pixels within the correction window, count the occurrences of each disparity d ∈ D in the window, and aggregate each disparity value's occurrence count together with the corresponding weights; the disparity corresponding to the maximum aggregation result is the final disparity d_p_final of the pixel to be corrected, and the final disparity results are stored in the final disparity matrix.
S6: Generate the disparity map and output the result. Map the disparity values of the final disparity matrix into the gray-level range [0, 255] with mapping ratio t = 4; the disparity values of the disparity set D map as follows:
0×4=0  1×4=4  ...  13×4=52  14×4=56
15×4=60  16×4=64  ...  28×4=112  29×4=116
30×4=120  31×4=124  ...  43×4=172  44×4=176
45×4=180  46×4=184  ...  58×4=232  59×4=236
The larger the disparity value, the closer its mapped value is to 255 and the brighter it appears in the disparity map; the smaller the disparity value, the closer its mapped value is to 0 and the darker it appears.
The present invention takes I_l as the image to be matched and I_r as the matching image, finds for every pixel of the left view the corresponding matching point in the right view, and obtains the disparity map of the left view. Table 1 gives a quantitative comparison between the results of this embodiment and those of Yoon's adaptive support-weight method. As Table 1 shows, the mismatch rate of the present invention is lower than that of Yoon's adaptive support-weight method in non-occluded regions, in depth-discontinuity regions, and over all regions, and the stereo matching time of the present invention is about 1/20 of that of Yoon's method, i.e., matching is considerably faster.
Table 1
It must be pointed out here that the above embodiment serves only to further illustrate the present invention so that those of ordinary skill in the art may understand it better. The preferred embodiment of the invention has been disclosed in writing; on reading this technical description, its potential for optimization and modification will be appreciated, and improvements that do not depart from the scope and spirit of the invention shall still fall within the protection scope of the claims of the present invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013102689033A CN103325120A (en) | 2013-06-30 | 2013-06-30 | Fast adaptive support-weight binocular vision stereo matching method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103325120A true CN103325120A (en) | 2013-09-25 |
Family
ID=49193843
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013102689033A Pending CN103325120A (en) | 2013-06-30 | 2013-06-30 | Rapid self-adaption binocular vision stereo matching method capable of supporting weight |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103325120A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120163704A1 (en) * | 2010-12-23 | 2012-06-28 | Electronics And Telecommunications Research Institute | Apparatus and method for stereo matching |
CN102572485A (en) * | 2012-02-02 | 2012-07-11 | 北京大学 | Self-adaptive weighted stereo matching algorithm, stereo display and collecting device and system |
Non-Patent Citations (3)
Title |
---|
KUK-JIN YOON ET AL.: "Adaptive Support-Weight Approach for Correspondence Search", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 4, April 2006, pages 650-656 *
TAO GUAN ET AL.: "Performance enhancement of Adaptive Support-Weight approach by tuning parameters", 2012 IEEE Fifth International Conference on Advanced Computational Intelligence, 18 October 2012, pages 206-211, DOI: 10.1109/ICACI.2012.6463153 *
ZHENG GU ET AL.: "Local stereo matching with adaptive support-weight, rank transform and disparity calibration", Pattern Recognition Letters, vol. 29, no. 9, 1 July 2008, pages 1230-1235, DOI: 10.1016/j.patrec.2008.01.032 *
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104637043A (en) * | 2013-11-08 | 2015-05-20 | 株式会社理光 | Supporting pixel selection method and device and parallax determination method |
CN104637043B (en) * | 2013-11-08 | 2017-12-05 | 株式会社理光 | Pixel selecting method, device, parallax value is supported to determine method |
TWI566203B (en) * | 2013-12-16 | 2017-01-11 | 財團法人工業技術研究院 | Method and system for depth refinement and data aggregation |
CN104915941A (en) * | 2014-03-11 | 2015-09-16 | 株式会社理光 | Method and apparatus for calculating parallax |
CN104915941B (en) * | 2014-03-11 | 2017-08-04 | 株式会社理光 | The method and apparatus for calculating parallax |
CN103971366A (en) * | 2014-04-18 | 2014-08-06 | 天津大学 | Stereoscopic matching method based on double-weight aggregation |
CN104123727B (en) * | 2014-07-26 | 2017-02-15 | 福州大学 | Stereo matching method based on self-adaptation Gaussian weighting |
CN104123727A (en) * | 2014-07-26 | 2014-10-29 | 福州大学 | Stereo matching method based on self-adaptation Gaussian weighting |
CN104200453A (en) * | 2014-09-15 | 2014-12-10 | 西安电子科技大学 | Parallax image correcting method based on image segmentation and credibility |
CN104200453B (en) * | 2014-09-15 | 2017-01-25 | 西安电子科技大学 | Parallax image correcting method based on image segmentation and credibility |
CN104820991A (en) * | 2015-05-15 | 2015-08-05 | 武汉大学 | Multi-soft-constraint stereo matching method based on cost matrix |
CN104820991B (en) * | 2015-05-15 | 2017-10-03 | 武汉大学 | A kind of multiple soft-constraint solid matching method based on cost matrix |
CN107025660B (en) * | 2016-02-01 | 2020-07-10 | 北京三星通信技术研究有限公司 | Method and device for determining image parallax of binocular dynamic vision sensor |
CN107025660A (en) * | 2016-02-01 | 2017-08-08 | 北京三星通信技术研究有限公司 | A kind of method and apparatus for determining binocular dynamic visual sensor image parallactic |
CN106156748A (en) * | 2016-07-22 | 2016-11-23 | 浙江零跑科技有限公司 | Traffic scene participant's recognition methods based on vehicle-mounted binocular camera |
CN106254850A (en) * | 2016-08-23 | 2016-12-21 | 深圳市捷视飞通科技股份有限公司 | The image matching method of double vision point three-dimensional video-frequency and device |
CN108154529A (en) * | 2018-01-04 | 2018-06-12 | 北京大学深圳研究生院 | The solid matching method and system of a kind of binocular image |
CN108305269A (en) * | 2018-01-04 | 2018-07-20 | 北京大学深圳研究生院 | A kind of image partition method and system of binocular image |
CN108154529B (en) * | 2018-01-04 | 2021-11-23 | 北京大学深圳研究生院 | Stereo matching method and system for binocular images |
CN108305269B (en) * | 2018-01-04 | 2022-05-10 | 北京大学深圳研究生院 | Image segmentation method and system for binocular image |
CN108230273A (en) * | 2018-01-05 | 2018-06-29 | 西南交通大学 | A kind of artificial compound eye camera three dimensional image processing method based on geological information |
CN108230273B (en) * | 2018-01-05 | 2020-04-07 | 西南交通大学 | Three-dimensional image processing method of artificial compound eye camera based on geometric information |
CN108381549A (en) * | 2018-01-26 | 2018-08-10 | 广东三三智能科技有限公司 | A kind of quick grasping means of binocular vision guided robot, device and storage medium |
CN108381549B (en) * | 2018-01-26 | 2021-12-14 | 广东三三智能科技有限公司 | Binocular vision guide robot rapid grabbing method and device and storage medium |
WO2020177061A1 (en) * | 2019-03-04 | 2020-09-10 | 北京大学深圳研究生院 | Binocular stereo vision matching method and system based on extremum verification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20130925 |