CN100586199C - Parallax acquisition method and device
- Publication number
- CN100586199C (application CN200810090761A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Description
Technical Field
The present invention relates to computer vision technology, and in particular to a parallax acquisition method and device.
Background Art
Stereo matching is a classic problem in computer vision. Stereo matching techniques are used to obtain three-dimensional (3D) images from stereo images. A stereo image is a pair of two-dimensional (2D) images of the same object taken from different positions on the same straight line.
For example, if a stereo image is a pair of 2D images of the same scene, a left image and a right image, captured by two cameras in front of an object, then for the same point of the object in space, the difference between its coordinates in the left image and in the right image is called the parallax. A 3D image is constructed from the parallax obtained from the stereo image by stereo matching.
The stereo matching technique of prior art 1 is introduced below. As shown in Figure 1, its technical solution mainly includes the following steps: step 101, segmenting the image into multiple regions; step 102, calculating the initial disparity values of the image pixels in each region; step 103, fitting a disparity plane to each region to obtain the initial disparity parameters of each region; step 104, optimizing the disparity parameters of each region; step 105, obtaining the disparity of each region according to the optimized disparity parameters. Steps 102, 103 and 104 of prior art 1 are described below.
Step 102: Calculate the initial disparity values of the image pixels in each region.
The initial disparity range [dmin, dmax] and the disparity step d′ are preset. Select the image region currently being processed, i.e. the current region, select the first image pixel in this region, set the disparity d of this pixel to dmin, and calculate the matching residual c(x, y, d) according to formula (r1):
where (x, y) are the image coordinates and d is the disparity.
Increase the disparity d step by step with the disparity step d′, compute c(x, y, d) each time, and keep the smallest c(x, y, d) and the corresponding d, until d ≥ dmax. The d corresponding to the smallest c(x, y, d) is then the initial disparity of this pixel.
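As an illustration of this exhaustive search, the sketch below scans the disparity range for a single pixel. Formula (r1) is not reproduced above, so the matching residual used here is a sum of absolute differences over a small window, an assumed stand-in for the cost of the original method.

```python
import numpy as np

def initial_disparity(left, right, x, y, d_min, d_max, d_step, win=2):
    """Brute-force initial disparity for pixel (x, y) of the left image.

    The matching residual is a sum of absolute differences over a
    (2*win+1) x (2*win+1) window: an illustrative stand-in for formula (r1),
    which is not reproduced in the text.
    """
    h, w = left.shape[:2]
    best_d, best_c = d_min, np.inf
    d = d_min
    while d <= d_max:
        xr = int(round(x - d))                      # matching column in the right image
        if win <= xr < w - win and win <= x < w - win and win <= y < h - win:
            patch_l = left[y - win:y + win + 1, x - win:x + win + 1].astype(np.float64)
            patch_r = right[y - win:y + win + 1, xr - win:xr + win + 1].astype(np.float64)
            c = np.abs(patch_l - patch_r).sum()     # matching residual c(x, y, d)
            if c < best_c:                          # keep the smallest residual and its d
                best_c, best_d = c, d
        d += d_step
    return best_d
```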
Step 103: Fit a disparity plane to each region to obtain the initial disparity parameters of each region.
First, the initial disparity plane of each region is fitted according to the initial disparity values of the region. In prior art 1, the initial disparity plane of each region is determined as follows. Whether each pixel is an outlier is first judged by the following method: the corresponding image pixel (x′, y′) is obtained according to the initial disparity of the reference image pixel (x, y), and the initial disparity of (x′, y′) relative to the reference image is calculated; if the two disparities are inconsistent, the pixel is judged to be an outlier.
Then the image region currently being processed is selected, and it is judged whether the number of pixels in this region is greater than a set threshold; prior art 1 only processes image regions whose number of pixels exceeds the threshold. If the number of pixels in the current region does not exceed the threshold, the next image region is selected and judged in the same way; if it does, the first image pixel in the region is selected and the initial disparity plane fitting starts.
The disparity plane of each region is modeled according to the following disparity plane model, formula (r2):
d = c1x + c2y + c3 (r2)
where (x, y) are the coordinates of an image pixel, d is the initial disparity of that pixel, and (c1, c2, c3) is the disparity plane coefficient vector, i.e. the disparity parameters.
If the pixel is an outlier, the next pixel is selected. If it is not an outlier, its information is added to the linear system matrix of formula (r3).
When all points of the region have been processed, [c1, c2, c3]^T can be solved according to formula (r3), giving the initial disparity plane of the region:
The initial disparity plane of each region is fitted in the same way; the initial disparity planes of all regions form a disparity plane set, and the disparity parameters of each region are obtained from this set.
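Formula (r3) is not reproduced above; under the assumption that it is the usual linear system of the plane model over the region's inlier pixels, a minimal least-squares sketch of the per-region fit is:

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares fit of d = c1*x + c2*y + c3 over a region's inlier pixels.

    A generic linear least-squares solve, shown only as a sketch of the idea;
    formula (r3) of the original text is not reproduced.
    """
    A = np.column_stack([xs, ys, np.ones(len(xs))])           # one row [x, y, 1] per pixel
    c, *_ = np.linalg.lstsq(A, np.asarray(ds, float), rcond=None)
    return c                                                  # [c1, c2, c3]
```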
Step 104: Optimize the disparity parameters of each region.
The method of prior art 1 for optimizing the disparity parameters of each region mainly includes three steps.
In the first step, starting from the obtained initial disparity plane, the disparity plane is optimized in a loop.
The loop works as follows: set both the new plane and the old plane to the initial disparity plane, set the loop count to 1, select the first pixel in the region currently being processed, and judge whether the pixel is an occluded pixel. If it is occluded, select the next pixel; if it is not, to strengthen the robustness of the algorithm, the weighted least-squares technique is used: the weighting coefficient is calculated according to formula (r4), and the weighted pixel information is added to the linear system matrix of formula (r3):
When all points of the region have been processed, [c1, c2, c3]^T can be solved according to formula (r5), giving the new disparity plane:
where wi is the weighting coefficient w(βi) of pixel (xi, yi).
It is judged whether the difference between the new disparity plane and the old disparity plane is smaller than a predetermined threshold. If not, the loop count is incremented, the old disparity plane coefficients are replaced by the new ones, the first pixel of the region is selected again, and the above processing is repeated until the difference between the new and old disparity planes is smaller than the threshold. If it is smaller, it is judged whether the new disparity plane is already in the disparity plane set; if not, the new disparity plane is added to the set. The next region is then selected and the above processing is repeated until all regions have been processed, yielding a new disparity plane set.
In the second step, the new disparity planes obtained for the regions are clustered into layers.
Select the first plane from the plane set and the first region for that plane, and determine, among the non-occluded pixels of the region, the number of pixels supporting that plane, i.e. the pixels satisfying formula (r6):
βi < threshold (r6)
It is judged whether the number of pixels in the region is greater than a given threshold, for example 40. If it is not, the matching cost is calculated according to formula (r7):
where S is the image region and P is the disparity plane.
If it is greater than the threshold, the matching cost is calculated according to formula (r8):
where O is the occluded part of region S, n is the number of non-occluded pixels in region S, and s is the number of pixels in region S supporting plane P.
The minimum matching cost is initialized to a large number, and it is judged whether the calculated matching cost is smaller than the minimum matching cost; if so, the calculated matching cost becomes the new minimum matching cost. The next disparity plane is selected and the above processing is repeated until all disparity planes have been processed; then the next region is selected and the processing is repeated until all regions have been processed.
It is judged whether there are joint regions, i.e. adjacent regions sharing the same disparity plane. If so, the joint region is treated as a single region, and the determination of the initial disparity plane set of each region and the subsequent processing are repeated until no joint region remains.
In the third step, the disparity plane of each region is further optimized by assigning labels, and the disparity parameters are obtained from the resulting disparity planes.
The method of prior art 1 for obtaining the label that maps each region to its optimized disparity plane is as follows: the regions and corresponding disparity planes calculated above are taken as the initial labels of the regions, the optimal disparity plane is selected, and the label efficiency function is calculated according to formula (r9):
E(f) = Edata(f) + Esmooth(f) (r9)
where f is a labeling that assigns to each region S ∈ R a corresponding disparity plane f(S) ∈ D, R is the region set of the reference image, and D is the estimated disparity plane set. Edata is calculated according to formula (r10):
Esmooth is calculated according to formula (r11):
where S and S′ are adjacent regions and uS,S′ is proportional to the length of the common boundary between S and S′. δ(f(S) ≠ f(S′)) is 1 when f(S) ≠ f(S′) and 0 when f(S) = f(S′).
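Formula (r11) itself is not reproduced above; from the description it sums uS,S′ · δ(f(S) ≠ f(S′)) over adjacent region pairs, and a minimal sketch under that assumption is:

```python
def smoothness_energy(labels, boundary_len, scale=1.0):
    """Smoothness term in the spirit of formula (r11), which is not reproduced.

    labels: dict region -> assigned disparity-plane id (the labeling f).
    boundary_len: dict (S, S') -> length of the common boundary of two
    adjacent regions; uS,S' is taken as scale * length, the proportionality
    constant being an assumption.
    """
    e = 0.0
    for (s, s_prime), length in boundary_len.items():
        if labels[s] != labels[s_prime]:     # delta(f(S) != f(S')) == 1
            e += scale * length              # uS,S' proportional to the boundary length
    return e
```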
Among the candidate labelings f′, the labeling f″ that minimizes the label efficiency function E(f′) is found, and it is judged whether E(f″) ≤ E(f). If so, f is set to f″ and the processing returns to the selection of the optimal disparity plane, looping; if not, f is taken as the optimal labeling and the processing ends. The disparity parameters are obtained from the optimal labeling.
In the process of realizing the present invention, the inventors found at least the following problems in the prior art. When the disparity parameters of a region are optimized, only the energy information of the region being processed is used, yet a change in the disparity parameters of one region also affects the energy of its adjacent regions; because the prior art considers the disparity optimization of each region in isolation, it cannot obtain sufficiently accurate disparity. Moreover, once the selection of occluded pixels is inaccurate, the clustering into layers becomes inaccurate, so the obtained optimal labels are also inaccurate, which leads to a large error in the obtained disparity.
Summary of the Invention
In one aspect, an embodiment of the present invention provides a parallax acquisition method capable of obtaining more accurate parallax.
The technical solution adopted by the embodiment of the present invention is as follows. A parallax acquisition method includes:
acquiring the matching energy of each region in a matched image according to initial parallax parameters, the matched image including at least two of the regions;
determining, among the regions, at least one operation region for a current region, the current region being the region currently being processed;
acquiring optimized parallax parameters of the current region according to the matching energies of the current region and of each operation region; and
acquiring the parallax of the current region according to the optimized parallax parameters of the current region.
In another aspect, an embodiment of the present invention provides a parallax acquisition device capable of obtaining more accurate parallax.
The technical solution adopted by the embodiment of the present invention is as follows. A parallax acquisition device includes:
a matching energy acquisition unit, configured to acquire the matching energy of each region in a matched image according to initial parallax parameters, the matched image including at least two of the regions;
an operation region determining unit, configured to determine, among the regions, at least one operation region for a current region, the current region being the region currently being processed;
an optimized parallax parameter acquisition unit, configured to acquire optimized parallax parameters according to the matching energies of the current region and of each operation region determined by the operation region determining unit; and
a parallax value acquisition unit, configured to acquire the parallax of the current region according to the optimized parallax parameters acquired by the optimized parallax parameter acquisition unit.
In the embodiments of the present invention, when the parallax of each region in the matched image is acquired, the matching energy of each region is used as the criterion function. When the initial parallax obtained for the current region is optimized, not only the matching energy of the current region but also the matching energies of the operation regions are taken into account. Because the matching energies of different regions depend on and influence one another, the optimized parallax parameters acquired for the current region are more accurate, and therefore more accurate parallax can be obtained from them.
Brief Description of the Drawings
Fig. 1 is a flowchart of the stereo matching method of prior art 1;
Fig. 2 is a flowchart of the parallax acquisition method provided by Embodiment 1 of the present invention;
Fig. 3 is a flowchart of the parallax acquisition method provided by Embodiment 2 of the present invention;
Fig. 4 shows processing results for the Tsukuba stereo image according to an embodiment of the present invention;
Fig. 5 compares the plane-fitting results of the RANSAC method and of the voting method provided by an embodiment of the present invention;
Fig. 6 is the disparity map of the Tsukuba stereo image obtained after voting-based disparity plane fitting according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of image occlusion;
Fig. 8 shows the disparity maps of the Tsukuba image after four rounds of cooperative optimization according to an embodiment of the present invention;
Fig. 9 shows the disparity results obtained by processing the standard images Tsukuba, Venus, Teddy and Cones with the parallax acquisition method provided by an embodiment of the present invention;
Fig. 10 is a structural diagram of the parallax acquisition device provided by Embodiment 1 of the present invention;
Fig. 11 is a structural diagram of the parallax acquisition device provided by Embodiment 2 of the present invention.
Detailed Description of the Embodiments
To solve the problem in the prior art that, when disparity is acquired, only the isolated energy information of the region currently being processed is used, so that the disparity obtained after optimization has a large error, embodiments of the present invention provide a parallax acquisition method and device capable of obtaining more accurate parallax.
To describe the technical solutions of the embodiments of the present invention more clearly, the embodiments are described in detail below with reference to the accompanying drawings. The following description covers only some embodiments of the present invention; persons of ordinary skill in the art can derive other implementations of the present invention from these embodiments without creative effort.
As shown in Fig. 2, a parallax acquisition method includes:
Step 201: Acquire the matching energy of each region in the matched image according to initial parallax parameters, the matched image including at least two of the regions.
Step 202: Among the regions, determine at least one operation region for the current region, the current region being the region currently being processed.
Step 203: Acquire optimized parallax parameters of the current region according to the matching energies of the current region and of each operation region.
Step 204: Acquire the parallax of the current region according to the optimized parallax parameters of the current region.
In the embodiments of the present invention, when the parallax of each region in the matched image is acquired, the matching energy of each region is used as the criterion function. When the initial parallax obtained for the current region is optimized, not only the matching energy of the current region but also the matching energies of the operation regions are taken into account. Because the matching energies of different regions depend on and influence one another, the optimized parallax parameters acquired for the current region are more accurate, and therefore more accurate parallax can be obtained from them.
As shown in Fig. 3, in the technical solution provided by the embodiment of the present invention, before step 201 of acquiring the matching energy of each region in the matched image according to the initial parallax parameters (the matched image including at least two of the regions), the method further includes:
Step 201a: Acquire the initial disparity of the pixels in each region.
Step 201b: Acquire the initial parallax parameters of each region according to the initial disparity. Each step is described in detail below.
The stereo image pair used in the embodiment of the present invention consists of a left image and a right image of the same object in the same scene, captured by two cameras in a standard configuration. In the embodiment of the present invention, the matched image is the left image and the reference image is the right image.
The embodiment of the present invention uses the mean-shift algorithm to segment the matched image into at least two regions.
Step 201a: Acquire the initial disparity of the pixels in each region.
There are many ways to acquire the initial disparity of the pixels in each region. The embodiment of the present invention may use the method described in the Background Art, or an adaptive correlation window algorithm, which is not limited here. The adaptive correlation window algorithm is prior art, and those skilled in the art can obtain the result of step 201a from the related content disclosed in the prior art.
The embodiments of the present invention are described with reference to left-right stereo image pairs captured by two cameras in a standard configuration. Fig. 4 shows the result of processing the Tsukuba stereo image pair according to an embodiment of the present invention, where Figs. 4(a) and (b) are the segmented image and the initial disparity map corresponding to the left image, respectively.
Step 201b: Acquire the initial parallax parameters of each region according to the initial disparity.
The embodiment of the present invention uses a robust voting-based disparity plane fitting method to acquire the initial parallax parameters of each region. The acquired initial disparity often contains errors; the voting-based disparity plane fitting method can effectively remove the error points, i.e. the outliers, making the fitting result more accurate and thereby further ensuring that more accurate disparity can be obtained. Fig. 5 compares the plane-fitting results of the RANSAC method and of the voting method, where the horizontal line is the fitting error rate of the voting method and the broken line is that of the RANSAC method. It can be seen that the proposed voting method not only has a low fitting error rate but also gives very stable results, whereas the RANSAC method, although its fitting error rate is also low, is unstable. Fig. 6 shows the disparity map obtained after voting-based disparity plane fitting. The experimental results show that the noise in the initial disparity map is effectively suppressed. Step 201b is described in detail below.
The disparity plane of each region is modeled according to the disparity plane model formula d = c1x + c2y + c3, where c1, c2 and c3 are the disparity plane parameters: c1 is the first parameter, c2 the second parameter and c3 the third parameter.
The step of acquiring the initial parallax parameters of each region includes:
S1: Acquire the first parameter c1. This step specifically includes:
S11: In each region, select at least one row pixel pair in every pixel row;
A row pixel pair may consist of two adjacent points or of two points some distance apart.
S12: In each region, obtain the candidate first parameters from the coordinate values and initial disparities of the row pixel pairs;
A correlation calculation is performed on each selected row pixel pair: the ratio of the disparity difference to the coordinate difference of the pair, Δd/Δx, is computed, giving one candidate value of the plane parameter c1. The same calculation is performed for all selected row pixel pairs, giving the candidate first parameters of c1.
S13: Obtain the first parameter of each region by voting over the candidate first parameters;
The candidate first parameters are voted in the one-dimensional parameter space of the first parameter c1, generating a histogram of frequency versus c1, and the histogram is filtered. The embodiment of the present invention uses Gaussian smoothing, and the value of c1 at the peak of the smoothed histogram, i.e. the value receiving the most votes, is taken as the first parameter c1.
S2: Acquire the second parameter c2. This step specifically includes:
S21: In each region, select at least one column pixel pair in every pixel column;
A column pixel pair may consist of two adjacent points or of two points some distance apart.
S22: In each region, obtain the candidate second parameters from the coordinate values and initial disparities of the column pixel pairs;
A correlation calculation is performed on each selected column pixel pair: the ratio of the disparity difference to the coordinate difference of the pair, Δd/Δy, is computed, giving one candidate value of the plane parameter c2. The same calculation is performed for all selected column pixel pairs, giving the candidate second parameters of c2.
S23: In each region, obtain the second parameter by voting over the candidate second parameters;
The candidate second parameters are voted in the one-dimensional parameter space of the second parameter c2, generating a histogram of frequency versus c2, and the histogram is filtered. The embodiment of the present invention uses Gaussian smoothing, and the value of c2 at the peak of the smoothed histogram, i.e. the value receiving the most votes, is taken as the second parameter c2.
S3: Acquire the third parameter c3. This step specifically includes:
S31: Fit the corresponding candidate third parameters with the disparity plane from the candidate first parameters and the corresponding candidate second parameters of each region;
The candidate first parameters, the corresponding candidate second parameters, and the pixel coordinates and disparities are substituted into the disparity plane model formula d = c1x + c2y + c3 to obtain the corresponding candidate third parameters.
S32: Obtain the third parameter of each region by voting over the candidate third parameters.
The candidate third parameters are voted in the one-dimensional parameter space of the third parameter, generating a histogram of frequency versus c3, and the histogram is filtered. The embodiment of the present invention uses Gaussian smoothing, and the value of c3 at the peak of the smoothed histogram, i.e. the value receiving the most votes, is taken as the third parameter c3.
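The sketch below illustrates this voting procedure for a single region. Consecutive pixels in the same row (or column) are used as the pairs, the histogram bin width and the Gaussian sigma are illustrative choices, and the candidate third parameters are formed by substituting the already-voted c1 and c2 back into the plane model; these specifics are assumptions, not details fixed by the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def vote_peak(candidates, bin_width=0.05, sigma=1.0):
    """Vote candidate values into a 1-D histogram, Gaussian-smooth it,
    and return the value at the peak (bin width and sigma are assumed)."""
    candidates = np.asarray(candidates, dtype=np.float64)
    edges = np.arange(candidates.min(), candidates.max() + 2 * bin_width, bin_width)
    hist, edges = np.histogram(candidates, bins=edges)
    smooth = gaussian_filter1d(hist.astype(np.float64), sigma)
    k = int(np.argmax(smooth))
    return 0.5 * (edges[k] + edges[k + 1])

def voting_plane_fit(xs, ys, ds):
    """Voting-based fit of d = c1*x + c2*y + c3 for one region,
    given the pixel coordinates xs, ys and their initial disparities ds."""
    xs, ys, ds = map(lambda a: np.asarray(a, dtype=np.float64), (xs, ys, ds))
    c1_cands, c2_cands = [], []
    for y in np.unique(ys):                          # row pairs give Δd/Δx candidates for c1
        m = ys == y
        order = np.argsort(xs[m])
        dx, dd = np.diff(xs[m][order]), np.diff(ds[m][order])
        c1_cands.extend(dd[dx != 0] / dx[dx != 0])
    for x in np.unique(xs):                          # column pairs give Δd/Δy candidates for c2
        m = xs == x
        order = np.argsort(ys[m])
        dy, dd = np.diff(ys[m][order]), np.diff(ds[m][order])
        c2_cands.extend(dd[dy != 0] / dy[dy != 0])
    c1 = vote_peak(c1_cands)
    c2 = vote_peak(c2_cands)
    c3 = vote_peak(ds - c1 * xs - c2 * ys)           # substitute back into the plane model
    return c1, c2, c3
```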
Step 201: Acquire the matching energy of each region in the matched image according to the initial parallax parameters, the matched image including at least two of the regions.
The matching energy includes data energy, occlusion energy and smoothing energy. The matching energy acquired during disparity optimization includes the data energy combined with at least one of the occlusion energy and the smoothing energy.
The matching energy acquired in the embodiment of the present invention includes all three terms, i.e. the matching energy of any image region is Ei = Edata + Eocclude + Esmooth, where Ei is the matching energy of region i, the subscript i denotes the i-th image region, Edata is the data energy, Eocclude is the occlusion energy, and Esmooth is the smoothing energy.
The specific method provided by the embodiment of the present invention for acquiring the matching energy of each region is as follows.
T1: Acquire the data energy of each region.
The data energy Edata is calculated by the following formula (r12):
where Vl and Vr denote the sets of visible pixels of the current region in the left and right images respectively, p ∈ Vl and q ∈ Vr are two matched corresponding pixels in the left and right images, and r, g, b denote the RGB color values of the corresponding pixels. The RGB color value of each pixel is obtained as follows: the pixel q in the reference image (right image) corresponding to the pixel p in the matched image (left image) is calculated from the estimated disparity plane parameters of the current region, and its RGB color value is read. Since the calculated disparity is a floating-point value, the pixel q generally does not fall exactly on an integer pixel position, so its RGB color value generally cannot be read directly from the image. In the embodiment of the present invention, its value is obtained by interpolating the RGB color values of the four neighboring pixels of q in the reference image. The visibility criterion is taken into account when calculating the data energy, i.e. the pixels used to calculate the data energy are required to be visible in both images of the pair.
The formula for calculating the data energy in the technical solution of the embodiment of the present invention is not limited to formula (r12); all formulas that are approximate variations of formula (r12) also belong to the technical solution of the embodiment of the present invention, for example using the sum of the absolute differences of the color components of the two pixels, or the sum of the squared differences of the color components of the two pixels.
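As an illustration, the sketch below computes a region's data energy using one of the variants just mentioned, the sum of absolute color-component differences, with the four-neighbor (bilinear) interpolation described above; formula (r12) itself is not reproduced in the text, so this per-pixel cost is an assumed stand-in.

```python
import numpy as np

def bilinear_rgb(img, x, y):
    """RGB value at a non-integer position (x, y), interpolated from the
    four neighboring pixels, as described in the text."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    ax, ay = x - x0, y - y0
    p00, p01 = img[y0, x0].astype(float), img[y0, x0 + 1].astype(float)
    p10, p11 = img[y0 + 1, x0].astype(float), img[y0 + 1, x0 + 1].astype(float)
    return (1 - ay) * ((1 - ax) * p00 + ax * p01) + ay * ((1 - ax) * p10 + ax * p11)

def data_energy(left, right, visible_pixels, plane):
    """Data energy of one region.

    visible_pixels: (x, y) pixels of the region that are visible in both
    images (the visibility criterion). plane: (c1, c2, c3) of the region.
    The per-pixel cost is the sum of absolute RGB differences, one of the
    variants the text explicitly allows."""
    c1, c2, c3 = plane
    h, w = right.shape[:2]
    e = 0.0
    for x, y in visible_pixels:
        d = c1 * x + c2 * y + c3                 # disparity from the region's plane
        xq = x - d                               # corresponding (non-integer) column in the right image
        if 0 <= xq < w - 1 and 0 <= y < h - 1:
            e += np.abs(left[y, x].astype(float) - bilinear_rgb(right, xq, y)).sum()
    return e
```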
T2: Acquire the occlusion energy of each region.
For ease of understanding, image occlusion is briefly introduced first. Occlusion is divided into left occlusion and right occlusion. As shown in Fig. 7, consider two adjacent regions A and B in the matched image (left image), with A to the right of B. When the disparity of region A at the shared boundary is greater than that of its left neighbor B, then, when A and B are mapped onto the reference image (right image) according to the current disparity results, part of B′ (region D in the figure) is occluded by A′ (left occlusion). When the disparity of region A at the shared boundary is smaller than that of its left neighbor B, a gap appears between the two mapped regions A′ and B′ in the right image (region C in the figure), which means that part of region A′ in the right image is occluded by B in the left image (right occlusion). In the embodiment of the present invention, the occlusion energy is set as follows: when left occlusion occurs, left occlusion energy is added to the corresponding left-occluded area; when right occlusion occurs, right occlusion energy is added to the corresponding right-occluded area. In practice, the occluded areas can be detected as follows: first, the relevant regions of the left image are mapped onto the right image according to the current disparity results, and the mapping is then checked on the right image; if a pixel is mapped more than once, it is recorded as left-occluded and the left occlusion energy is added; if a pixel is not mapped at all, it is recorded as right-occluded and the right occlusion energy is added.
In summary, the occlusion energy Eocclude is obtained according to the following formula (r13):
Eocclude = (|OccL| + |OccR|)·λocc (r13)
where OccL and OccR denote the numbers of left-occluded and right-occluded pixels respectively, and λocc is the configured occlusion penalty constant.
The formula for calculating the occlusion energy in the technical solution of the embodiment of the present invention is not limited to formula (r13); all formulas that are approximate variations of formula (r13) also belong to the technical solution of the embodiment of the present invention, for example setting different occlusion penalty constants for left and right occlusion, handling only left occlusion, or handling only right occlusion.
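A minimal sketch of this mapping-count detection and of formula (r13) follows. It assumes that all relevant regions of the left image are mapped before the check, and the value of the penalty constant λocc is illustrative (the text only says it is configured).

```python
import numpy as np

def occlusion_energy(width, height, region_pixels, planes, lambda_occ=15.0):
    """Occlusion energy via the mapping count described in the text.

    region_pixels: dict region_id -> list of (x, y) pixels in the left image.
    planes: dict region_id -> (c1, c2, c3) disparity plane of that region.
    Assumes every relevant region has been mapped; lambda_occ is illustrative."""
    hits = np.zeros((height, width), dtype=np.int32)   # how often each right-image pixel is hit
    for rid, pixels in region_pixels.items():
        c1, c2, c3 = planes[rid]
        for x, y in pixels:
            xq = int(round(x - (c1 * x + c2 * y + c3)))
            if 0 <= xq < width:
                hits[y, xq] += 1
    occ_left = int((hits > 1).sum())     # mapped more than once -> left occlusion
    occ_right = int((hits == 0).sum())   # never mapped -> right occlusion
    return (occ_left + occ_right) * lambda_occ         # formula (r13)
```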
T3: Acquire the smoothing energy of each region.
The smoothing energy Esmooth can be obtained according to the following formula (r14):
Here, BC denotes the set of boundary points of the current region in the matched image, N denotes the set of boundary points of the other regions adjacent to BC, p ∈ BC and q ∈ N are two neighboring pixels in the four-connected sense, d(p) and d(q) are the disparities of pixels p and q, and λS is the configured smoothing penalty, which may be a constant or a function of the color difference between the two pixels, for example twice the difference when the difference is small and once the difference when it is large. The color difference may be computed as the maximum of the absolute differences of the color components of the two pixels, the sum of those absolute differences, or the sum of the squared differences of the color components. The criterion |d(p) − d(q)| ≥ threshold 1 is used to judge whether a boundary point of the current region is a point of disparity discontinuity; threshold 1 may be any number greater than 0, such as 0.5 or 1. As an additional condition, p must not be an occlusion boundary point. Formula (r14) gives the sum of the smoothing penalties applied to the boundary points of the current region that have discontinuous disparity and do not lie on an occlusion boundary.
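The following sketch applies this rule directly; formula (r14) is not reproduced in the text, and λS is used here as a plain constant (the text also allows a function of the color difference).

```python
def region_smoothing_energy(boundary_pairs, disparity, occlusion_boundary,
                            lambda_s=10.0, threshold1=1.0):
    """Smoothing energy of the current region in the spirit of formula (r14).

    boundary_pairs: iterable of (p, q) where p is a boundary pixel of the
    current region and q is a four-connected neighbor in an adjacent region.
    disparity: dict pixel -> disparity d(.).
    occlusion_boundary: set of pixels lying on an occlusion boundary.
    lambda_s and threshold1 are illustrative values."""
    e = 0.0
    for p, q in boundary_pairs:
        if p in occlusion_boundary:                          # additional condition: skip occlusion boundaries
            continue
        if abs(disparity[p] - disparity[q]) >= threshold1:   # disparity discontinuity
            e += lambda_s
    return e
```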
Step 202: Among the regions, determine at least one operation region for the current region, the current region being the region currently being processed.
An operation region of the current region may be an image region adjacent to the current region or an image region not adjacent to it; the number of operation regions should be chosen appropriately, taking both the amount of computation and the optimization effect into account.
Step 203: Acquire the optimized parallax parameters of the current region according to the matching energy of the current region and the matching energies of the operation regions. This step specifically includes:
U1: Take the parallax parameters of each operation region, or parallax parameters generated by a weighted combination of the parallax parameters of each operation region and those of the current region, as new parallax parameters of the current region, and acquire the corresponding candidate matching energies of the current region according to the new parallax parameters;
The embodiment of the present invention takes the image regions adjacent to the current region (whose matching energy is Ei(x), i denoting the i-th region) as the preferred operation regions. U1 is explained below using the processing of the adjacent regions and the current region as an example.
The parallax parameters of each region adjacent to the current region, or new parallax parameters generated by a weighted combination of the parallax parameters of each adjacent region and those of the current region, are taken as the parallax parameters of the current region. Then, according to the new parallax parameters, the data-point, occlusion-point and smooth-point pixels of the current region and of each operation region are reacquired, and the matching energy of the current region is updated according to these pixels, giving the updated matching energy Ei′(x), i.e. the candidate matching energies of the current region.
U2: Acquire the updated matching energies of the operation regions according to the new parallax parameters;
According to the new parallax parameters, the data-point, occlusion-point and smooth-point pixels of the current region and of each operation region are reacquired, and the matching energy of each operation region is updated according to these pixels, giving the updated matching energies E′j(x), where j is the index of the selected adjacent region.
U3: Acquire the cooperative optimization energies from the candidate matching energies and the corresponding updated matching energies of the operation regions;
The embodiment of the present invention sets weighting coefficients λk and wij, with 0 ≤ λk ≤ 1 and 0 ≤ wij ≤ 1, to construct the cooperative optimization energy. The cooperative optimization energies are:
U4: Acquire the optimized parallax parameters of the current region according to the cooperative optimization energies.
The smaller the cooperative optimization energy, the more reasonable the corresponding parallax parameters. The embodiment of the present invention selects the minimum among the cooperative optimization energies, and the parallax parameters used by the current region when that minimum is obtained are taken as the optimized parallax parameters of the current region.
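A minimal sketch of one cooperative-optimization pass for a single region follows. The exact cooperative energy expression after U3 is not reproduced in the text, so the weighted form λ·Ei′ + Σj wij·Ej′ used here, the candidate set (each neighbor's plane plus an equal-weight blend with the current plane), and the coefficient values are all assumptions.

```python
def cooperative_optimize_region(i, planes, neighbors, region_energy,
                                lam=1.0, w=0.5, alpha=0.5):
    """One cooperative-optimization pass for region i (step 203, U1-U4).

    planes: dict region_id -> (c1, c2, c3).
    neighbors: dict region_id -> list of adjacent region ids (operation regions).
    region_energy: callable(region_id, planes) returning that region's matching
    energy (data + occlusion + smoothing) under the given plane assignment.
    The cooperative energy lam * E_i' + w * sum_j E_j' is an assumed form."""
    current = planes[i]
    candidates = [current]                                   # keep the current plane as a fallback
    for j in neighbors[i]:
        candidates.append(planes[j])                         # U1: a neighbor's plane as-is
        candidates.append(tuple(alpha * a + (1 - alpha) * b  # U1: weighted combination of the two planes
                                for a, b in zip(current, planes[j])))
    best_plane, best_e = current, None
    for cand in candidates:
        trial = dict(planes)
        trial[i] = cand                                      # assign the candidate plane to region i
        e_i = region_energy(i, trial)                        # candidate matching energy E_i'
        e_nb = sum(region_energy(j, trial) for j in neighbors[i])   # U2: updated neighbor energies E_j'
        e_coop = lam * e_i + w * e_nb                        # U3: cooperative energy (assumed form)
        if best_e is None or e_coop < best_e:                # U4: keep the candidate with minimum energy
            best_e, best_plane = e_coop, cand
    return best_plane
```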
Step 204: Acquire the parallax of the current region according to the optimized parallax parameters of the current region.
The parallax of the current region is obtained from the disparity plane model formula using the optimized parallax parameters of the current region.
Step 203 described the processing of a single current region. The embodiment of the present invention applies the same cooperative optimization iteratively, region by region, to all regions of an image, completing the processing of the whole image, and the iteration over the whole image can be repeated several times in the same way to obtain better disparity results. The embodiment of the present invention performed four iterations. As shown in Fig. 8, the disparity of the Tsukuba image is corrected after four rounds of cooperative optimization, where (a)-(d) are the disparity maps obtained in the first to fourth iterations and e in the figure is the sum of the cooperative energies of all regions of the whole image.
The embodiment of the present invention has been described by taking the processing of the Tsukuba image as an example, but the embodiment is not limited to any specific image. As shown in Fig. 9, which presents the results of applying the method of the embodiment of the present invention to the standard images Tsukuba, Venus, Teddy and Cones, the results obtained are very close to the true disparity, and ideal disparity results can be obtained.
In the embodiments of the present invention, when the parallax of each region in the matched image is acquired, the matching energy of each region is used as the criterion function. When the initial parallax obtained for the current region is optimized, not only the matching energy of the current region but also the matching energies of the operation regions are taken into account. Because the matching energies of different regions depend on and influence one another, the optimized parallax parameters acquired for the current region are more accurate, and therefore more accurate parallax can be obtained from them.
The embodiment of the present invention further provides a parallax acquisition device capable of obtaining more accurate parallax. As shown in Fig. 10, the device includes:
a matching energy acquisition unit, configured to acquire the matching energy of each region in a matched image according to initial parallax parameters, the matched image including at least two of the regions;
an operation region determining unit, configured to determine, among the regions, at least one operation region for a current region, the current region being the region currently being processed;
an optimized parallax parameter acquisition unit, configured to acquire optimized parallax parameters according to the matching energies of the current region and of each operation region determined by the operation region determining unit; and
a parallax value acquisition unit, configured to acquire the parallax of the current region according to the optimized parallax parameters acquired by the optimized parallax parameter acquisition unit.
In the embodiments of the present invention, when the parallax of each region in the matched image is acquired, the matching energy of each region is used as the criterion function. When the initial parallax obtained for the current region is optimized, not only the matching energy of the current region but also the matching energies of the operation regions are taken into account. Because the matching energies of different regions depend on and influence one another, the optimized parallax parameters acquired for the current region are more accurate, and therefore more accurate parallax can be obtained from them.
As shown in Fig. 11, the device provided by the embodiment of the present invention further includes:
an initial disparity acquisition unit, configured to acquire the initial disparity of the pixels in each region;
a disparity parameter acquisition unit, configured to acquire the initial parallax parameters of each region according to the initial disparity acquired by the initial disparity acquisition unit.
The parallax parameters include a first parameter, a second parameter and a third parameter, and the disparity parameter acquisition unit includes:
a first parameter acquisition module, configured to first select, in each region, at least one row pixel pair in every pixel row, then obtain the candidate first parameters in each region from the coordinate values and initial disparities of the row pixel pairs, and finally obtain the first parameter of each region by voting over the candidate first parameters;
a second parameter acquisition module, configured to first select, in each region, at least one column pixel pair in every pixel column, then obtain the candidate second parameters in each region from the coordinate values and initial disparities of the column pixel pairs, and finally obtain the second parameter of each region by voting over the candidate second parameters;
a third parameter acquisition module, configured to fit the corresponding candidate third parameters with the disparity plane from the candidate first parameters obtained by the first parameter acquisition module and the corresponding candidate second parameters obtained by the second parameter acquisition module, and to obtain the third parameter of each region by voting over the candidate third parameters.
Through the disparity parameter acquisition unit, the embodiment of the present invention uses a robust voting-based disparity plane fitting method to acquire the initial parallax parameters of each region. Since the acquired initial disparity often contains errors, the voting-based disparity plane fitting method applied by the disparity parameter acquisition unit can effectively remove the error points, i.e. the outliers, making the fitting result more accurate and thereby further ensuring that more accurate disparity can be obtained.
The optimized parallax parameter acquisition unit includes:
a matching energy processing module, configured to take the parallax parameters of each operation region, or parallax parameters generated by a weighted combination of the parallax parameters of each operation region and those of the current region, as new parallax parameters of the current region, and to acquire the corresponding candidate matching energies of the current region according to the new parallax parameters;
an operation region energy updating module, configured to acquire the updated matching energies of the operation regions according to the new parallax parameters;
a cooperative optimization energy acquisition module, configured to calculate, according to preset weighting coefficients, the cooperative optimization energies of the candidate matching energies and the corresponding matching energies of the operation regions;
an optimized parallax parameter acquisition module, configured to acquire the optimized parallax parameters of the current region according to the cooperative optimization energies.
Using the optimized parallax parameter acquisition unit, the embodiment of the present invention applies the cooperative optimization method iteratively, region by region, to all regions of an image, completing the processing of the whole image, and the iteration over the whole image can be repeated several times in the same way, so that better disparity results can be obtained.
In the embodiment of the present invention, the matching energy includes data energy, occlusion energy and smoothing energy. The matching energy acquired during disparity optimization includes the data energy combined with at least one of the occlusion energy and the smoothing energy, so there are three cases for the matching energy acquisition unit.
In the first case, when the matching energy includes data energy and occlusion energy, the matching energy acquisition unit includes:
a first pixel acquisition module, configured to acquire the data-point pixels and occlusion-point pixels in each region;
a data energy acquisition module, configured to acquire the data energy according to the data-point pixels acquired by the first pixel acquisition module;
an occlusion energy acquisition module, configured to acquire the occlusion energy according to the occlusion-point pixels acquired by the first pixel acquisition module.
In the second case, when the matching energy includes data energy and smoothing energy, the matching energy acquisition unit includes:
a second pixel acquisition module, configured to acquire the data-point pixels and smooth-point pixels in each region;
a data energy acquisition module, configured to acquire the data energy according to the data-point pixels acquired by the second pixel acquisition module;
a smoothing energy acquisition module, configured to acquire the smoothing energy according to the smooth-point pixels acquired by the second pixel acquisition module.
In the third case, when the matching energy includes data energy, occlusion energy and smoothing energy, the matching energy acquisition unit includes:
a third pixel acquisition module, configured to acquire the data-point pixels, occlusion-point pixels and smooth-point pixels in each region;
a data energy acquisition module, configured to acquire the data energy according to the data-point pixels acquired by the third pixel acquisition module;
an occlusion energy acquisition module, configured to acquire the occlusion energy according to the occlusion-point pixels acquired by the third pixel acquisition module;
a smoothing energy acquisition module, configured to acquire the smoothing energy according to the smooth-point pixels acquired by the third pixel acquisition module.
The embodiment of the present invention adopts the third case, in which the matching energy includes data energy, occlusion energy and smoothing energy; the disparity acquired during disparity optimization is then more accurate.
For the specific working methods of the units and modules in the device embodiments of the present invention, reference may be made to the method embodiments of the present invention, which are not repeated here.
In the embodiments of the present invention, when the parallax of each region in the matched image is acquired, the matching energy of each region is used as the criterion function. When the initial parallax obtained for the current region is optimized, not only the matching energy of the current region but also the matching energies of the operation regions are taken into account. Because the matching energies of different regions depend on and influence one another, the optimized parallax parameters acquired for the current region are more accurate, and therefore more accurate parallax can be obtained from them.
Persons of ordinary skill in the art can understand that all or part of the steps of the above embodiments can be implemented by hardware instructed by a program. The software corresponding to the embodiments may be stored in a computer-readable storage medium.
Of course, many other embodiments of the present invention are possible. Without departing from the spirit and essence of the embodiments of the present invention, those skilled in the art can make various corresponding changes and modifications according to the embodiments of the present invention, but all such changes and modifications shall fall within the protection scope of the claims appended to the embodiments of the present invention.
Claims (19)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN200810090761A | 2008-03-30 | 2008-03-30 | Parallax acquisition method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN101262619A | 2008-09-10 |
| CN100586199C | 2010-01-27 |