WO2016065579A1 - Method and system for global disparity estimation - Google Patents

Method and system for global disparity estimation

Info

Publication number
WO2016065579A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
image
matching
points
pixel
Prior art date
Application number
PCT/CN2014/089924
Other languages
English (en)
Chinese (zh)
Inventor
彭祎
王荣刚
王振宇
高文
董胜富
王文敏
赵洋
Original Assignee
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大学深圳研究生院 filed Critical 北京大学深圳研究生院
Priority to PCT/CN2014/089924 priority Critical patent/WO2016065579A1/fr
Publication of WO2016065579A1 publication Critical patent/WO2016065579A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • the present application relates to the field of stereo matching image processing, and in particular, to a global disparity estimation method and system.
  • With single-view video, the user can only passively view the images captured by the camera and cannot view the scene from other perspectives, while multi-view video allows the user to view from multiple viewpoints, enhancing interactivity and the 3D sensory effect; it has broad application prospects in stereoscopic TV, video conferencing, automatic navigation, virtual reality and other fields.
  • However, the stronger interactivity and sensory effects also increase the amount of video data, which increases the burden of video storage and transmission; how to solve this problem has become a research hotspot.
  • Stereo matching, also called disparity estimation, estimates the geometric relationship between pixels in corresponding images based on multi-view image data (generally binocular) acquired by front-end cameras.
  • With stereo matching, the information of a corresponding viewpoint can be obtained from the information of one viewpoint together with its depth (disparity) information, thereby reducing the amount of original data and facilitating the transmission and storage of multi-view video.
  • Stereo matching methods can be roughly divided into local stereo matching algorithms and global stereo matching algorithms (see Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms [J]. International Journal of Computer Vision, 2002, 47(1-3): 7-42.).
  • The local stereo matching algorithm is fast but not accurate, and the lack of accuracy is not conducive to practical application.
  • The global stereo matching algorithm obtains the disparity result by optimizing a global energy function; it has higher accuracy but is slower.
  • Some improved global stereo matching algorithms achieve speeds comparable to local stereo matching algorithms, such as the fast belief propagation algorithm (see Pedro F. Felzenszwalb, Daniel P. Huttenlocher. Efficient Belief Propagation for Early Vision. International Journal of Computer Vision, October 2006, Volume 70, Issue 1, pp 41-54).
  • As an important link in multi-view video, stereo matching has received wide attention, and a large number of stereo matching algorithms have emerged.
  • However, problems remain with stereo matching, especially its correctness and stability, which need to be further improved.
  • the present application provides a global disparity estimation method, including:
  • the first viewpoint image is an image of the target acquired from the first viewpoint
  • the second viewpoint image is an image of the target acquired from the second viewpoint
  • the current pixel point is likewise taken as the origin, and a pixel-by-pixel search is performed along the positive and negative directions of the second axis, taking each pixel as a search point, until a point that does not satisfy the preset constraint condition is found, all searched points that satisfy the constraint condition being taken as third matching points; each third matching point is then taken as the origin, and a pixel-by-pixel search is performed along the positive and negative directions of the first axis until a point that does not satisfy the constraint condition is found, all searched points that satisfy the constraint condition being taken as fourth matching points; the third matching points and the fourth matching points are taken as the second matching space of the current pixel point;
  • the constraint condition includes a linear constraint condition and a spatial constraint condition based on sampling points, the linear constraint condition being a constraint on the Euclidean distance in color between the current pixel point and the search point, and the spatial constraint condition being a constraint on the Euclidean distance in color between the search point and the sampling point, the first axis and the second axis being perpendicular to each other;
  • the constraint conditions are defined in terms of the following quantities: l_1 is the distance from the current pixel point p to the search point q; l_2 is the distance from p to the sampling point e_i; O_lab(p, q) is the Euclidean distance in color between p and q; O_lab(q, e_i) is the Euclidean distance in color between the search point q and the sampling point e_i; k_1, k_2, k_3, k_4, w_1, w_2 are user-defined parameters with k_1 > k_2, k_4 > k_3 and w_2 > w_1.
  • the preset rule is that the distance between each sampling point and each of its four adjacent sampling points (up, down, left and right) is a preset distance.
  • the method further includes: performing epipolar rectification on the first viewpoint image and the second viewpoint image.
  • marking the occlusion region in the image is further included, specifically: in each row of each block of the first viewpoint image, take the first reliable point L(p) from the left end and, according to the disparity d_p of L(p), calculate its corresponding point R(p − d_p) in the second viewpoint image; in the second viewpoint image, starting from the point R(p − d_p − 1), search leftwards for the first reliable point R(q), find its disparity d_q, and calculate the point L(q + d_q) in the first viewpoint image to which R(q) corresponds; the points horizontally between the two points L(p) and L(q + d_q) are the occlusion points.
  • the initial disparity is calculated using the fast belief propagation global algorithm based on the sum of the matching costs of all points in the first matching space and the sum of the matching costs of all points in the second matching space.
  • performing image segmentation on the first view image and the second view image includes:
  • Merging image blocks according to color: an image block whose number of pixels is smaller than a preset value is merged with the image block closest to it in color among its adjacent image blocks; and/or, when two adjacent image blocks are judged to be close in color and the sum of the numbers of pixel points of the two image blocks is less than the preset value, the two image blocks are merged;
  • Merging image blocks according to disparity: an image block whose number of reliable points is smaller than a preset value is merged with the image block closest to it in color among its adjacent image blocks, the reliable points being obtained according to the initial disparity of each pixel in the original image; and/or, it is determined whether the disparity change between two adjacent image blocks is smooth, and if so, the two image blocks are merged.
  • the first view image and the second view image are each divided into a plurality of image blocks; specifically, each image is divided into a plurality of image blocks based on superpixel color blocks.
  • determining whether the disparity change of two adjacent image blocks is smooth comprises the boundary comparison described in the detailed description below.
  • the present application also provides a global disparity estimation system, including:
  • An image reading module configured to read in the first viewpoint image and the second viewpoint image, the first viewpoint image is an image of the target acquired from the first viewpoint, and the second viewpoint image is an image of the target acquired from the second viewpoint;
  • a matching space calculation module configured to select sampling points on the first viewpoint image according to a preset rule, sequentially select a pixel point on the first viewpoint image as the current pixel point, and, taking the current pixel point as the origin, search pixel by pixel along the positive and negative directions of the first axis, taking each pixel as a search point, until a point that does not satisfy the preset constraint condition is found, all searched points that satisfy the constraint condition being taken as first matching points;
  • each first matching point is then taken as the origin, and a pixel-by-pixel search is performed along the positive and negative directions of the second axis, taking each pixel as a search point, until a point that does not satisfy the preset constraint condition is found; all searched points that satisfy the constraint condition are taken as second matching points; the first matching points and the second matching points are taken as the first matching space of the current pixel point;
  • the current pixel point is likewise taken as the origin for a pixel-by-pixel search along the positive and negative directions of the second axis until a point that does not satisfy the preset constraint condition is found, all searched points satisfying the constraint condition being taken as third matching points; each third matching point is then taken as the origin for a pixel-by-pixel search along the positive and negative directions of the first axis until a point that does not satisfy the constraint condition is found, all searched points satisfying the constraint condition being taken as fourth matching points; the third matching points and the fourth matching points are taken as the second matching space of the current pixel point;
  • the constraint condition includes a linear constraint condition and a spatial constraint condition based on sampling points, the linear constraint condition being a constraint on the Euclidean distance in color between the current pixel point and the search point, and the spatial constraint condition being a constraint on the Euclidean distance in color between the search point and the sampling point, the first axis and the second axis being perpendicular to each other;
  • a matching cost calculation module configured to calculate a sum of matching costs of all points in the first matching space, and calculate a sum of matching costs of all points in the second matching space;
  • An initial disparity calculation module configured to calculate an initial disparity according to the sum of the matching costs of all points in the first matching space and the sum of the matching costs of all points in the second matching space, and select reliable points;
  • An image blocking module configured to perform image segmentation on the first view image and the second view image
  • a final disparity calculation module configured to calculate, based on the image segmentation and the initial disparity of the reliable points, the final disparity of each pixel point in the first view image and the second view image.
  • The constraint conditions adopted include a linear constraint condition and a spatial constraint condition based on sampling points: the linear constraint condition is a constraint on the Euclidean distance in color between the current pixel point and the search point, and the spatial constraint condition is a constraint on the Euclidean distance in color between the search point and the sampling point. Because these two constraints are used at the same time, the calculated matching space is closer to the edges of objects in the image; therefore, the accuracy of the matching space calculation can be improved, which in turn guarantees the accuracy of the final disparity calculation.
  • FIG. 1 is a schematic flowchart of a global disparity estimation method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of selecting sampling points in a matching space calculation method according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of calculation of a first matching space in a matching space calculation method according to an embodiment of the present application
  • FIG. 4 is a schematic block diagram of a global disparity estimation system according to an embodiment of the present application.
  • FIG. 5 is a test result of the global disparity estimation method provided by the embodiment of the present application on the Middlebury test platform.
  • this embodiment provides a global disparity estimation method, which includes the following steps:
  • the first viewpoint image and the second viewpoint image are read, the first viewpoint image is an image of a target acquired from the first viewpoint, and the second viewpoint image is an image of a target acquired from the second viewpoint.
  • the first viewpoint image is a left viewpoint image (hereinafter referred to as a left diagram)
  • the second viewpoint image is a right viewpoint image (hereinafter referred to as a right diagram) as an example.
  • the left and right images may be images in a binocular sequence taken by a binocular camera, or two images taken by a monocular camera with a certain horizontal displacement between them.
  • in this embodiment the left and right images are both color images; in some embodiments they may also be grayscale images.
  • the left and right images read in are images that have already been epipolar-rectified, i.e. the epipolar lines of the two images are horizontal and parallel, as required for the subsequent matching cost calculation; if the two input images have not been epipolar-rectified, epipolar rectification must also be performed on the left and right images.
  • the matching space of the pixel points in the image is first calculated.
  • the matching space includes the first matching space and the second matching space, and the calculation method is as follows:
  • sampling points e are selected in the space of the left image. Specifically, the distance between each sampling point and each of its four adjacent sampling points (up, down, left and right) is a preset distance d, so that all the sampling points form a grid, as shown in FIG. 2.
  • the sampling points may be selected in other manners, that is, the preset rules for selecting sampling points may be determined according to actual needs.
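  • For illustration, a minimal Python sketch of the grid selection rule described above follows; the function name and the numpy usage are assumptions of this sketch, not of the patent.

```python
import numpy as np

def grid_sample_points(height, width, d):
    """Hypothetical helper: pick sampling points on a regular grid so that
    each point lies a preset distance d from its four neighbours, as in the
    rule described above."""
    ys = np.arange(d // 2, height, d)   # row coordinates of the grid
    xs = np.arange(d // 2, width, d)    # column coordinates of the grid
    return [(y, x) for y in ys for x in xs]
```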
  • the constraint condition comprises a linear constraint condition and a spatial constraint condition based on the sample point
  • the linear constraint condition is a constraint on the Euclidean distance in color between the current pixel point and the search point, and the spatial constraint condition is a constraint on the Euclidean distance in color between the search point and the sampling point.
  • for the calculation of the matching space, an arm is extended from each pixel in the two directions of the X-axis (first axis) and the Y-axis (second axis) according to the color difference.
  • the pixel points are sequentially selected as the current pixel point p, and the point p is used as the origin.
  • the positive and negative directions along the X-axis are searched pixel by pixel, taking each pixel as a search point, until a point that does not satisfy the preset constraint condition is found; all searched points that satisfy the constraint condition are taken as first matching points. Each first matching point is then taken as the origin, and the positive and negative directions along the Y-axis are searched pixel by pixel in the same way, the searched points satisfying the constraint condition being taken as second matching points; the first and second matching points form the first matching space S_1 of the point p.
  • similarly, the point p is taken as the origin, and a pixel-by-pixel search is performed along the positive and negative directions of the Y-axis, taking each pixel as a search point, until a point that does not satisfy the preset constraint condition is found; all searched points that satisfy the constraint condition are taken as third matching points. Each third matching point is then taken as the origin, and the positive and negative directions of the X-axis are searched pixel by pixel until a point that does not satisfy the preset constraint condition is found; all searched points that satisfy the constraint condition are taken as fourth matching points. The third matching points and the fourth matching points are taken as the second matching space S_2 of the point p.
  • the points that meet the constraint condition are thus searched in the positive direction of the X-axis, the negative direction of the X-axis, the positive direction of the Y-axis and the negative direction of the Y-axis, i.e. the right arm, left arm, upper arm and lower arm shown in FIG. 3.
  • the constraint conditions are as follows: l_1 is the distance from the pixel point p (the current pixel point) to the search point q; l_2 is the distance from p to the sampling point e_i, the sampling point e_i being selected by the condition k_3·l_1 < l_2 < k_4·l_1; O_lab(p, q) is the Euclidean distance between p and q in Lab color; O_lab(q, e_i) is the Euclidean distance between q and e_i in Lab color; k_1, k_2, k_3, k_4, w_1, w_2 are user-defined parameters with k_1 > k_2, k_4 > k_3 and w_2 > w_1.
  • It should be noted that suitable k_3 and k_4 values are set so that the index i of the sampling point e_i is unique, i.e. the condition determines a unique sampling point.
  • Conditions 1 and 2 belong to the linear constraint; condition 3 belongs to the spatial constraint based on sampling points.
  • the introduced spatial constraint is mainly used to improve the points in boundary regions of objects in the image, so that the calculated matching space is closer to the edges of objects, and the stability of the algorithm is enhanced by referring to more reasonable color information. Therefore, under the premise of the linear constraint, combining the spatial constraint based on sampling points better guarantees the accuracy and stability of stereo matching. In other embodiments, the above constraints may be appropriately changed according to actual needs.
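  • A minimal Python sketch of the arm construction under these constraints is given below. The exact inequalities of the constraint conditions are not reproduced in this text, so the length-dependent color thresholds and the handling of the sampling-point check are assumptions; all function names are illustrative.

```python
import numpy as np

def color_dist(img_lab, a, b):
    """Euclidean distance in Lab colour between pixels a=(y, x) and b=(y, x)."""
    return float(np.linalg.norm(img_lab[a].astype(np.float64) -
                                img_lab[b].astype(np.float64)))

def extend_arm(img_lab, p, step, samples, k1, k2, k3, k4, w1, w2):
    """Extend one arm from pixel p along direction step=(dy, dx), collecting
    matching points until a constraint fails. Assumed forms:
    - linear constraint: arm length capped at k1 steps, with a loose colour
      threshold w2 for short arms (<= k2 steps) and a strict one w1 beyond;
    - spatial constraint: the sampling point e_i chosen by k3*l1 < l2 < k4*l1
      (l2 = distance from p to e_i) must stay colour-consistent with q."""
    h, w = img_lab.shape[:2]
    arm, q = [], p
    while True:
        q = (q[0] + step[0], q[1] + step[1])
        if not (0 <= q[0] < h and 0 <= q[1] < w):
            break
        l1 = abs(q[0] - p[0]) + abs(q[1] - p[1])      # steps from p to q
        if l1 > k1:
            break                                      # linear: length cap
        tau = w2 if l1 <= k2 else w1                   # linear: colour bound
        if color_dist(img_lab, p, q) > tau:
            break
        cands = [e for e in samples                    # spatial: pick e_i
                 if k3 * l1 < abs(e[0] - p[0]) + abs(e[1] - p[1]) < k4 * l1]
        if cands and min(color_dist(img_lab, q, e) for e in cands) > w2:
            break
        arm.append(q)
    return arm
```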
  • A step of calculating the matching cost of each point follows.
  • For a point L_p in the left image, matching is performed within a specified range Ω of the right image, and the matching cost between L_p and all points in that range is calculated.
  • The range Ω is the search range, that is, the range of disparity values.
  • The search range lies on the same scan line (epipolar line) as the point L_p; since the left and right images have been epipolar-rectified and their epipolar lines are horizontal and parallel, the search range Ω is a horizontal line segment.
  • Each point w in the first matching space S_1 of the point L_p is matched against the point R_{w+d} in the right image; the matching cost of each pair of points is obtained by the hybrid cost function, and the final matching cost C_1 is the sum of the matching costs of all point pairs.
  • The sum of matching costs C_2 is calculated in the same way using the second matching space S_2 of the point L_p.
  • the matching cost function of each point pair consists of three parts: a grayscale census transform, an absolute difference (AD) in color space, and a bidirectional gradient. The specific calculation of each part is as follows:
  • The census transform operates on grayscale images, so the color image is first converted into a grayscale image. The grayscale value of point p in the original image is denoted GS(p). Within a 7×9 window centered at p, a census value x(p, q), a single bit comparing the grayscale values of q and p, is computed for every point q in the window other than p.
  • The values x(p, q) are concatenated into a binary string B(p) according to the relative positions of p and q.
  • For a pair of corresponding pixels, two corresponding bit strings are obtained, and the difference between them is measured by their Hamming distance, which gives the census cost value, where d denotes the disparity between the corresponding pixels.
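  • The following sketch illustrates the census transform and its Hamming-distance cost as described above. The bit convention (1 when GS(q) < GS(p)) and the wrap-around border handling via np.roll are simplifying assumptions of this sketch.

```python
import numpy as np

def census_transform(gray, win_h=7, win_w=9):
    """Census transform with a 7x9 window: for each pixel p, build the bit
    string B(p) of comparisons x(p, q) against every other point q in the
    window (62 bits for a 7x9 window)."""
    bits = []
    for dy in range(-(win_h // 2), win_h // 2 + 1):
        for dx in range(-(win_w // 2), win_w // 2 + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
            bits.append(neighbour < gray)             # x(p, q) for offset q
    return np.stack(bits, axis=-1)                    # H x W x 62 booleans

def census_cost(bits_l, bits_r, y, x, d):
    """Census cost: Hamming distance between B(p) in the left image and the
    bit string of the corresponding point (column x - d) in the right."""
    return int(np.count_nonzero(bits_l[y, x] != bits_r[y, x - d]))
```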
  • The absolute difference (AD) is a common measure of the similarity of two points. Here the AD value in color space is used: the cost value is the Euclidean distance between the RGB color of point p in the left image and the RGB color of the point in the right image corresponding to p at disparity d.
  • The gradient is selected as the third cost term; a bidirectional gradient, i.e. the gradient in both the horizontal and vertical directions, is employed. In the gradient cost, N_x and N_y denote the derivative (gradient) in the x and y directions respectively, I_L(p) is the gray value of the point being computed in the left image, I_R(p − d) is the gray value of the corresponding point in the right image, and d is the disparity between the two points.
  • The final cost function is a weighted mixture of the above three terms, as shown in equation (6), where α, β and γ are weights representing the contribution of each term to the final cost value, and C_census is the h(p, d) value of the corresponding point from equation (3).
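  • A sketch of this weighted mixture follows, assuming a plain linear combination α·C_census + β·C_AD + γ·C_grad of the three terms described above; the argument layout and names are illustrative.

```python
import numpy as np

def hybrid_cost(img_l, img_r, grad_l, grad_r, bits_l, bits_r,
                y, x, d, alpha, beta, gamma):
    """Weighted mixture of the three cost terms. grad_l/grad_r are (Nx, Ny)
    gradient-image pairs; bits_l/bits_r come from census_transform."""
    c_census = np.count_nonzero(bits_l[y, x] != bits_r[y, x - d])
    # AD term: Euclidean distance between the two RGB colours
    c_ad = float(np.linalg.norm(img_l[y, x].astype(np.float64) -
                                img_r[y, x - d].astype(np.float64)))
    # bidirectional gradient term: horizontal plus vertical differences
    c_grad = (abs(grad_l[0][y, x] - grad_r[0][y, x - d]) +
              abs(grad_l[1][y, x] - grad_r[1][y, x - d]))
    return alpha * c_census + beta * c_ad + gamma * c_grad
```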
  • The initial disparity is then computed globally from C_1 and C_2 using the fast belief propagation algorithm, improving the accuracy and stability of stereo matching.
  • the specific calculation method is as follows:
  • N(p) is a set of four points of up, down, left, and right adjacent to point p.
  • the local matching cost when point p takes disparity d_p is denoted D_p(d_p).
  • N(p)\q is the set of the four points adjacent to p (up, down, left, right) with the point q removed.
  • the optimal disparity d*_p of point p (i.e. the initial disparity) is obtained by minimizing the energy function E over the range of disparity values.
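  • A minimal min-sum belief propagation sketch for this energy minimization is shown below. The smoothness term is assumed to be a linear penalty λ·|d_p − d_q| (the text does not give its exact form), and the multi-scale accelerations of the fast algorithm, as well as proper border handling, are omitted.

```python
import numpy as np

def fast_bp(data_cost, lam, n_iter=5):
    """Min-sum belief propagation sketch. data_cost: (H, W, D) array of
    local costs D_p(d_p); lam: weight of the assumed linear smoothness
    penalty. Borders wrap via np.roll for brevity."""
    H, W, D = data_cost.shape
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    opp = [1, 0, 3, 2]                          # opposite direction index
    msgs = np.zeros((4, H, W, D))
    for _ in range(n_iter):
        belief = data_cost + msgs.sum(axis=0)
        new = np.empty_like(msgs)
        for k, (dy, dx) in enumerate(dirs):
            h = belief - msgs[opp[k]]           # exclude msg from the target
            m = h.copy()                        # min-convolve with lam*|.|
            for d in range(1, D):
                m[..., d] = np.minimum(m[..., d], m[..., d - 1] + lam)
            for d in range(D - 2, -1, -1):
                m[..., d] = np.minimum(m[..., d], m[..., d + 1] + lam)
            m -= m.min(axis=-1, keepdims=True)  # normalise for stability
            new[k] = np.roll(m, (dy, dx), axis=(0, 1))
        msgs = new
    return (data_cost + msgs.sum(axis=0)).argmin(axis=-1)  # d*_p per pixel
```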
  • The consistency of the left and right disparity maps is used to further filter the reliable points: with d_L(p) denoting the disparity of point p in the left image, p is checked against the disparity of its corresponding point in the right image.
  • Match(p) = 1 indicates that p is reliable, and Match(p) = 0 indicates that p is unreliable.
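  • A sketch of this left-right consistency check, assuming the usual form |d_L(p) − d_R(p − d_L(p))| ≤ tol with an illustrative tolerance tol:

```python
import numpy as np

def left_right_check(disp_l, disp_r, tol=1):
    """Match(p) = True (reliable) when d_L(p) agrees with d_R(p - d_L(p))
    within tol; out-of-range columns are clipped for simplicity."""
    H, W = disp_l.shape
    ys = np.arange(H)[:, None]
    xr = np.clip(np.arange(W)[None, :] - disp_l, 0, W - 1).astype(int)
    return np.abs(disp_l - disp_r[ys, xr]) <= tol     # True = reliable
```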
  • a step of blocking the image is also included.
  • the image is first divided into a number of image blocks of very small size: the image is segmented into image blocks based on superpixel color blocks, and the blocks are then merged based on color and on disparity, respectively.
  • Superpixel-based color segmentation takes a number of (usually many) superpixels in space and then uses spatial information and color information to assign each pixel to the superpixel closest to it; each superpixel forms a block with the pixels closest to it, so the number of generated blocks equals the number of superpixels. With enough superpixel points this segmentation follows object boundaries well, but the excessive number of blocks generated in that case negatively affects the computation, hence the merging steps below.
  • For each block s, the number of pixel points is denoted p(s) and the number of reliable points is denoted r(s).
  • S40, merging image blocks according to disparity: an image block whose number of reliable points is smaller than the preset value is merged with the image block closest to it in color among its adjacent image blocks, the reliable points being selected according to the initial disparity of each pixel in the original image; and/or, it is determined whether the disparity change between two adjacent image blocks is smooth, and if so, the two image blocks are merged.
  • The smoothness judgment compares disparities across the block boundary: V_S(i) and V_Sk(i) denote the disparities of the i-th pair of boundary points of the current image block S and its adjacent image block S_k, where i ∈ W_{S,Sk}, W_{S,Sk} is the index set of all point pairs at the boundary of block S and block S_k, and a and b are preset pixel widths used in forming the boundary point pairs; when max over i of |V_S(i) − V_Sk(i)| < j, the disparity change between S and S_k is judged to be smooth, where j is a preset value.
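  • A sketch of this smoothness test, taking the boundary point pairs W_{S,Sk} as given:

```python
def parallax_smooth(disp, boundary_pairs, j):
    """The disparity change between blocks S and S_k is smooth when
    max_i |V_S(i) - V_Sk(i)| < j over the boundary point pairs, given
    here as a list of ((y, x), (y, x)) coordinate pairs."""
    return max(abs(disp[a] - disp[b]) for a, b in boundary_pairs) < j
```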
  • With this image blocking method for global disparity estimation, not only is color information used for blocking, but disparity information is also introduced, which can further improve the accuracy of the final calculated disparity.
  • Since the left and right images are views of the scene from different perspectives, some parts visible in the left image are not present in the right image, and some parts visible in the right image are not present in the left image; these parts are occlusion areas. Because such regions exist in only one image, the disparities computed for them by the preceding method are basically wrong, and these errors would affect the final estimation result. It is therefore necessary to use the color blocks to find the occlusion areas and mark them as unreliable points, so as to improve the final accuracy.
  • In the left image, the occlusion area of a color block exists in the portion where the right end of each block adjoins other blocks, while the portion adjoining at the left end is unoccluded.
  • In the right image, the occlusion area exists in the portion where the left end of each block adjoins other blocks, while the portion adjoining at the right end is unoccluded.
  • Marking the occlusion region in the image proceeds as follows: in each row of each block of the left image, take the first reliable point L(p) from the left end and, according to its disparity d_p, calculate the corresponding point R(p − d_p) in the right image; in the right image, search leftwards from the point R(p − d_p − 1) for the first reliable point R(q), find its disparity d_q, and calculate the point L(q + d_q) in the left image to which R(q) corresponds; the points between the two points L(p) and L(q + d_q) are the occlusion points.
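  • The simplified sketch below applies this occlusion rule to a single image row; block bookkeeping and edge cases are omitted, and all names are illustrative.

```python
def mark_occlusions(reliable_l, disp_l, reliable_r, disp_r):
    """Mark occluded columns in one row: from the first reliable left-image
    point L(p), jump to R(p - d_p), scan left in the right image for the
    first reliable point R(q), map it back to L(q + d_q), and mark the
    columns strictly between the two left-image points as occluded."""
    W = len(reliable_l)
    occluded = [False] * W
    p = next((x for x in range(W) if reliable_l[x]), None)
    if p is None:
        return occluded                  # no reliable point in this row
    q = p - disp_l[p] - 1                # start at R(p - d_p - 1)
    while q >= 0 and not reliable_r[q]:
        q -= 1                           # scan left for first reliable R(q)
    if q >= 0:
        back = q + disp_r[q]             # L(q + d_q) in the left image
        for x in range(min(back, p) + 1, max(back, p)):
            occluded[x] = True           # points in between are occluded
    return occluded
```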
  • The embodiment further includes the step of performing median filtering on the existing reliable points based on the color blocks, screening out some of the reliable points again. That is, when S20 is executed after S30, S20 can use the information from S30 when performing the further screening of reliable points. It should be noted that the steps in FIG. 1 do not imply a strict execution order; the order of execution may be determined according to specific needs.
  • To calculate the final disparity of a pixel p, the gradients along the X-axis and the Y-axis are estimated first.
  • The estimation selects reliable points within p's color block that lie on the same X-axis (row) as p, calculates the gradients formed by these points, and takes the median value as the X-axis gradient derivationX(p) estimated at p.
  • derivationY(p) is obtained in the same way in the Y direction.
  • For each reliable point p_i of the block, a disparity d(p_i) can be extrapolated to p using these gradients; the value obtained by taking the median of all d(p_i) and rounding it is the final disparity d(p) of point p.
  • For reliable points, the rounded median of all d(p_i) is additionally compared with the point's own disparity d(p); if the two are not equal, the point is filtered out.
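  • The sketch below follows this procedure for one pixel. Taking the pairwise gradients between consecutive reliable points on p's row and column is an assumed reading, since the text does not specify which point pairs form the gradients.

```python
import numpy as np

def final_disparity(p, block_pts, disp):
    """Final disparity of pixel p = (py, px) from the reliable points of its
    colour block; disp maps (y, x) -> initial disparity."""
    if not block_pts:
        return 0                          # no reliable points to use
    py, px = p
    row = sorted((x, disp[(y, x)]) for y, x in block_pts if y == py)
    col = sorted((y, disp[(y, x)]) for y, x in block_pts if x == px)
    # derivationX(p): median of gradients between consecutive row points
    gx = float(np.median([(d2 - d1) / (x2 - x1) for (x1, d1), (x2, d2)
                          in zip(row, row[1:])])) if len(row) > 1 else 0.0
    # derivationY(p): same construction in the Y direction
    gy = float(np.median([(d2 - d1) / (y2 - y1) for (y1, d1), (y2, d2)
                          in zip(col, col[1:])])) if len(col) > 1 else 0.0
    # extrapolate d(p_i) from every reliable point, take the rounded median
    cands = [disp[(y, x)] + gx * (px - x) + gy * (py - y)
             for y, x in block_pts]
    return int(round(float(np.median(cands))))
```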
  • The final disparity can also be obtained by any of the prior-art methods.
  • This embodiment further provides a global disparity estimation system, which includes an image reading module 1000, a matching space calculation module 1001, a matching cost calculation module 1002, an initial disparity calculation module 1003, an image blocking module 1004, and a final disparity calculation module 1005.
  • the image reading module 1000 is configured to read in a first viewpoint image that is an image of a target acquired from a first viewpoint, and a second viewpoint image that is an image of a target acquired from a second viewpoint.
  • the matching space calculation module 1001 is configured to select sampling points on the first viewpoint image according to a preset rule, sequentially select a pixel point on the first viewpoint image as the current pixel point, and, taking the current pixel point as the origin, search pixel by pixel along the positive and negative directions of the first axis, taking each pixel as a search point, until a point that does not satisfy the preset constraint condition is found, all searched points satisfying the constraint condition being taken as first matching points; each first matching point is then taken as the origin for a pixel-by-pixel search along the positive and negative directions of the second axis until a point that does not satisfy the preset constraint condition is found, all searched points satisfying the constraint condition being taken as second matching points; the first matching points and the second matching points are taken as the first matching space of the current pixel point.
  • the matching space calculation module 1001 is further configured to take the current pixel point as the origin and search pixel by pixel along the positive and negative directions of the second axis until a point that does not satisfy the preset constraint condition is found, all searched points satisfying the constraint condition being taken as third matching points; each third matching point is then taken as the origin for a pixel-by-pixel search along the positive and negative directions of the first axis until a point that does not satisfy the preset constraint condition is found, all searched points satisfying the constraint condition being taken as fourth matching points; the third matching points and the fourth matching points are taken as the second matching space of the current pixel point.
  • Constraints include linear constraints and spatial constraints based on sample points.
  • the linear constraint is the constraint on the Euclidean distance in color between the current pixel point and the search point.
  • the spatial constraint is the constraint on the Euclidean distance in color between the search point and the sampling point.
  • the first axis and the second axis are perpendicular to each other.
  • the matching cost calculation module 1002 is configured to calculate a sum of matching costs of all points in the first matching space, and calculate a sum of matching costs of all points in the second matching space.
  • the initial disparity calculation module 1003 is configured to calculate an initial disparity according to the sum of the matching costs of all points in the first matching space and the sum of the matching costs of all points in the second matching space, and to select reliable points.
  • the image blocking module 1004 performs image segmentation on the original images according to any one of the above embodiments.
  • the final disparity calculation module 1005 is configured to calculate the final disparity of each pixel in the first view image based on the image segmentation and the initial disparity of the reliable points.
  • the global disparity estimation system provided by this embodiment corresponds to the global disparity estimation method described above, and the working principle is not described herein again.
  • FIG. 5 is an experimental result diagram of the global disparity estimation method provided by the embodiment of the present application on the Middlebury data set.
  • The test results on the Middlebury test platform show that the results obtained by the global disparity estimation method provided by the embodiment of the present application (row 2 of the results) are superior to most current methods.
  • “nonocc”, “all” and “disc” are used as evaluation indexes, and the error-rate threshold is set to 1.0, that is, a pixel whose disparity differs from the ground truth by more than 1.0 is marked as an error point.
  • In summary, the global disparity estimation method and system provided by the present application obtain the hybrid cost of pixel points from a robust hybrid cost function and aggregate the single-point costs over an improved aggregation space; a fast belief propagation global algorithm then performs the global cost optimization; finally, image segmentation designed specifically for disparity estimation and occlusion-point marking are applied, so the accuracy of the final disparity calculation can be greatly improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Method and system for global disparity estimation. When matching spaces are calculated, a sampling point is selected on an image according to a preset rule, and a first matching space and a second matching space are then calculated according to a constraint condition. The constraint condition adopted comprises a linear constraint condition and a spatial constraint condition based on the sampling point, the linear constraint condition being a constraint on the Euclidean distance in color between a current pixel point and a search point, and the spatial constraint condition being a constraint on the Euclidean distance in color between the search point and the sampling point. Since the two constraint conditions are adopted simultaneously, the calculated matching space is closer to the edge of an object in the image; consequently, the calculation accuracy of the matching space can be improved, and the calculation accuracy of the final disparity can be guaranteed.
PCT/CN2014/089924 2014-10-30 2014-10-30 Procédé et système d'estimation globale de disparité WO2016065579A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/089924 WO2016065579A1 (fr) 2014-10-30 2014-10-30 Procédé et système d'estimation globale de disparité

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/089924 WO2016065579A1 (fr) 2014-10-30 2014-10-30 Procédé et système d'estimation globale de disparité

Publications (1)

Publication Number Publication Date
WO2016065579A1 true WO2016065579A1 (fr) 2016-05-06

Family

ID=55856397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/089924 WO2016065579A1 (fr) 2014-10-30 2014-10-30 Procédé et système d'estimation globale de disparité

Country Status (1)

Country Link
WO (1) WO2016065579A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275730A (zh) * 2020-01-13 2020-06-12 平安国际智慧城市科技股份有限公司 地图区域的确定方法、装置、设备及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101790103A (zh) * 2009-01-22 2010-07-28 华为技术有限公司 一种视差计算方法及装置
CN102930530A (zh) * 2012-09-26 2013-02-13 苏州工业职业技术学院 一种双视点图像的立体匹配方法
WO2013173106A1 (fr) * 2012-05-18 2013-11-21 The Regents Of The University Of California Procédé d'estimation de disparité vidéo au moyen de fils indépendants et codec

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101790103A (zh) * 2009-01-22 2010-07-28 华为技术有限公司 一种视差计算方法及装置
WO2013173106A1 (fr) * 2012-05-18 2013-11-21 The Regents Of The University Of California Procédé d'estimation de disparité vidéo au moyen de fils indépendants et codec
CN102930530A (zh) * 2012-09-26 2013-02-13 苏州工业职业技术学院 一种双视点图像的立体匹配方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUAN, LI ET AL.: "Initial Disparity Estimation Algorithm Based on Weighted Matching Cost", COMPUTER TECHNOLOGY AND DEVELOPMENT, vol. 21, no. 10, 31 October 2011 (2011-10-31) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275730A (zh) * 2020-01-13 2020-06-12 平安国际智慧城市科技股份有限公司 地图区域的确定方法、装置、设备及存储介质

Similar Documents

Publication Publication Date Title
Chen et al. Improved saliency detection in RGB-D images using two-phase depth estimation and selective deep fusion
Li et al. PMSC: PatchMatch-based superpixel cut for accurate stereo matching
CN108648161B (zh) 非对称核卷积神经网络的双目视觉障碍物检测系统及方法
US9237326B2 (en) Imaging system and method
WO2016176840A1 (fr) Procédé et dispositif de post-traitement de carte de profondeur/disparité
CN104331890B (zh) 一种全局视差估计方法和系统
CN105513064A (zh) 一种基于图像分割和自适应权重的立体匹配方法
US11995858B2 (en) Method, apparatus and electronic device for stereo matching
CN110189294B (zh) 基于深度可信度分析的rgb-d图像显著性检测方法
KR101869605B1 (ko) 평면정보를 이용한 3차원 공간 모델링 및 데이터 경량화 방법
CN108629809B (zh) 一种精确高效的立体匹配方法
CN104318576B (zh) 一种超像素级别的图像全局匹配方法
CN106997478B (zh) 基于显著中心先验的rgb-d图像显著目标检测方法
Hu et al. Stereo matching using weighted dynamic programming on a single-direction four-connected tree
CN107610148B (zh) 一种基于双目立体视觉系统的前景分割方法
Ni et al. Second-order semi-global stereo matching algorithm based on slanted plane iterative optimization
CN104408710B (zh) 一种全局视差估计方法和系统
CN107578419B (zh) 一种基于一致性轮廓提取的立体图像分割方法
CN117726747A (zh) 补全弱纹理场景的三维重建方法、装置、存储介质和设备
WO2016065579A1 (fr) Procédé et système d'estimation globale de disparité
CN114331919B (zh) 深度恢复方法、电子设备及存储介质
WO2016065578A1 (fr) Procédé et système d'estimation de disparité globale
CN109544619A (zh) 一种基于图割的双目视觉立体匹配方法及系统
CN110298782B (zh) 一种rgb显著性到rgbd显著性的转换方法
CN113344988B (zh) 立体匹配方法、终端及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14904630

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14904630

Country of ref document: EP

Kind code of ref document: A1