WO2016176840A1 - Depth map/disparity map post-processing method and apparatus - Google Patents
Depth map/disparity map post-processing method and apparatus
- Publication number
- WO2016176840A1 (PCT/CN2015/078382)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- point
- processed
- edge
- hole
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/20—Image enhancement or restoration by the use of local operators
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
- G06T5/77—
- G06T7/11—Image analysis; Segmentation; Region-based segmentation
- G06T7/13—Image analysis; Segmentation; Edge detection
- G06T7/136—Image analysis; Segmentation; Edge detection involving thresholding
- G06T7/50—Image analysis; Depth or shape recovery
- G06T2200/04—Indexing scheme involving 3D image data
- G06T2207/10024—Image acquisition modality; Color image
- G06T2207/10028—Image acquisition modality; Range image; Depth image; 3D point clouds
- G06T2207/20032—Special algorithmic details; Filtering details; Median filtering
- G06T2207/20192—Special algorithmic details; Image enhancement details; Edge enhancement; Edge preservation
Definitions
- the present application relates to the field of three-dimensional image processing technologies, and in particular, to a post-processing method and apparatus for a depth map/disparity map.
- A contact three-dimensional scanner measures the three-dimensional coordinates and other information of an object by physically touching it in order to obtain its depth. This method is very accurate, but because it touches the object it can easily damage it, and it takes a long time, so it is rarely used. The other approach is non-contact scanning.
- Non-contact methods can measure the depth information of an object without touching it, and can be divided into active scanning and passive scanning.
- Active scanning measures depth information by actively emitting signals or energy,
- while passive scanning does not need to emit energy; it computes depth information using only the information in the images.
- Common active scanning methods include time-of-flight ranging with a laser range finder, triangulation, and structured-light methods realized by projected patterns.
- Common passive scanning methods include stereo matching and shape-from-shading, which are realized mainly by algorithms.
- a depth map corresponding to the measured scene is generated.
- The image is a grayscale image, and the depth of an object is represented by the darkness of its gray shade.
- the quality of the depth map will have a huge impact on the later application.
- The obtained depth map has many defects, such as black hole points and irregular object edges.
- Noise in the image is generally removed by filtering. Compared with active scanning, stereo matching in passive scanning has one more viewpoint, so the information of the two viewpoints can be used for correction.
- Depth maps are generally checked by left-right consistency detection between the two depth maps: inconsistent areas are detected, and those areas are then filtered.
- Although depth map (or disparity map) post-processing in stereo matching is more refined than in active scanning, black holes and ragged edges still remain.
- Depth information, as a key technology in many frontier fields and new applications, has received extensive attention, and a large number of methods for obtaining it have emerged.
- the quality of depth maps is limited by technology.
- Some methods have begun to study the post-processing of depth maps.
- The processed images still contain black hole points and irregular object edges, which seriously affect subsequent applications. Therefore, the post-processing of the depth map is an urgent problem to be solved and perfected.
- the present application provides a post-processing method for a depth map/disparity map, including:
- the image to be processed being a depth map or a disparity map
- Performing image segmentation on the color image includes: performing superpixel segmentation on the color image; dividing the grayscale range into preset intervals and, for each superpixel, statistically obtaining a histogram of all pixels falling within the intervals; and determining whether the ratio of the number of pixels contained in the interval with the largest distribution value to the total number of pixels in the current superpixel is less than a first threshold, and if so, further segmenting the current superpixel by a color-block-based method;
- the present application provides a post-processing device for a depth map/disparity map, including:
- An input module configured to input an image to be processed, where the image to be processed is a depth map or a disparity map;
- an irregular edge detection module comprising an edge extraction unit, an image blocking unit, and an irregular edge detection unit; the edge extraction unit is configured to perform edge extraction on the image to be processed to obtain edge information; the image blocking unit is configured to perform image segmentation on the color image corresponding to the image to be processed to obtain block information; and the irregular edge detection unit is configured to obtain the irregular edge region of the image to be processed according to the edge information and the block information;
- when the image blocking unit performs image segmentation on the color image, specifically: the image blocking unit performs superpixel segmentation on the color image; divides the grayscale range into preset intervals and, for each superpixel, statistically obtains a histogram of all pixels falling within the intervals; and determines whether the ratio of the number of pixels contained in the interval with the largest distribution value to the total number of pixels in the current superpixel is less than the first threshold, and if so, further segments the current superpixel by a color-block-based method;
- An irregular edge repair module for repairing the irregular edge region.
- FIG. 1 is a block diagram of a post-processing device for a depth map/disparity map according to an embodiment of the present application
- FIG. 2 is a schematic flowchart of a post-processing method of a depth map/disparity map according to an embodiment of the present application
- FIG. 3 is a schematic diagram of hole filling in an embodiment of the present application.
- FIG. 4 is a schematic diagram of a detection process of an irregular edge region in an embodiment of the present application.
- FIG. 5 is a schematic diagram of a repair process of an irregular edge region in an embodiment of the present application.
- FIG. 6 is a schematic diagram of comparison before and after repair of an irregular edge region in an embodiment of the present application.
- Figure 7 is an image of the Middlebury experimental test
- FIG. 9 shows before-and-after comparison results of applying the method provided by this embodiment to global stereo matching algorithms
- FIG. 10 is a before-and-after comparison diagram of applying the method provided by this embodiment to a Kinect depth map.
- The post-processing method and apparatus for the depth map/disparity map compensate for shortcomings in depth map/disparity map optimization; a new depth map/disparity map post-processing optimization method is proposed to improve
- the quality of both the disparity map obtained by stereo matching and the depth map obtained by active scanning.
- With the post-processing method and apparatus for the depth map/disparity map provided by the present application, the common problem regions and error points in a depth map/disparity map can be better corrected. Compared with existing disparity map post-processing methods, it can find and resolve more problem areas, also supports depth maps obtained by a monocular camera, has wider applicability, and can greatly improve the quality of the disparity map/depth map.
- The post-processing apparatus for the depth map/disparity map provided by this embodiment includes an input module 10, a pre-processing module 20, a hole detection module 30, a hole-filling module 40, an irregular edge detection module 50, and an irregular edge repair module 60.
- The irregular edge detection module 50 includes an edge extraction unit 501, an image blocking unit 502, and an irregular edge detection unit 503.
- FIG. 2 is a schematic flowchart of a post-processing method of a depth map/disparity map provided by the embodiment, and the method includes the following steps:
- Step 1.1 The input module 10 inputs an image to be processed, and the image to be processed may be a depth map or a disparity map.
- Step 1.2 When the input image to be processed is a depth map, the pre-processing module 20 first pre-processes the depth map and converts it into unified disparity data. Since both the depth map and the disparity map are grayscale images and are simply inversely related in their grayscale representation, pre-processing a depth map means inverting it. Note, however, that because the depth map itself may contain many black hole points, a naive inversion would turn these "black holes" white and seriously interfere with subsequent disparity processing, so hole points are not inverted.
- the preprocessing of the depth map is as follows:
- D(p) denotes the gray value of point p in the depth map, and
- d(p) denotes the gray value of point p in the converted disparity data (hereinafter collectively referred to as the disparity map).
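The pre-processing step above can be sketched as follows. This is a minimal illustration, assuming an 8-bit grayscale depth map in which a value of 0 marks a hole point and the inversion is simply 255 − D(p); the exact inversion formula is not spelled out in this excerpt.

```python
import numpy as np

def depth_to_disparity(depth_map):
    """Convert a depth map to disparity data by grayscale inversion,
    leaving hole points (zero-valued pixels) untouched so that
    black holes do not turn white."""
    depth = np.asarray(depth_map, dtype=np.uint8)
    disparity = depth.copy()
    non_hole = depth > 0                     # hole points keep their zero value
    disparity[non_hole] = 255 - depth[non_hole]
    return disparity
```

With all-zero input the output stays all-zero, so the black holes survive the conversion unchanged.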
- Step 1.3 The hole detection module 30 performs hole detection on the preprocessed image to be processed.
- the current information to be processed is the disparity data.
- This embodiment first processes the black hole points remaining in it.
- Although the traditional disparity map post-processing technique fills the "zero-value disparity" points in the disparity map obtained by stereo matching, many obvious black hole points remain; the disparity values of these points may not be zero, so they are not filled, yet they are still erroneous disparity points.
- A pixel is tested against the threshold d < λ·d_max, where λ and d_max are the penalty coefficient and the maximum disparity value, respectively; if its disparity is less than the threshold, it is considered a low-trust point, otherwise a high-trust point.
- Note that the "disparity value" and the "gray value" of a pixel mentioned in this embodiment can be considered the same concept, because the disparity value of a pixel in the image is characterized by its gray value.
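The calibration of hole points can be sketched as below. This follows the low-trust test d < λ·d_max from the text, combined with the neighborhood comparison described in the claims (a low-trust point whose disparity is smaller than its neighbors' by the threshold is a hole); the values of λ and d_max here are purely illustrative, and only the vertical neighbors are checked.

```python
import numpy as np

def detect_holes(disp, lam=0.25, d_max=64):
    """Hole calibration sketch: pixels with disparity below
    lam * d_max are low-trust points; a low-trust point whose
    disparity is smaller than both vertical neighbours' by the
    same threshold is marked as a hole point."""
    disp = np.asarray(disp, dtype=float)
    t = lam * d_max
    holes = np.zeros(disp.shape, dtype=bool)
    for y in range(1, disp.shape[0] - 1):
        for x in range(disp.shape[1]):
            if disp[y, x] < t:                      # low-trust point
                if (disp[y - 1, x] - disp[y, x] > t and
                        disp[y + 1, x] - disp[y, x] > t):
                    holes[y, x] = True              # hole point
    return holes
```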
- Step 1.4 The hole filling module 40 fills the calibrated hole.
- The traditional filling method fills a hole directly with the smaller disparity points around it, i.e. with background points (small disparity), based on the assumption that hole points (here, zero-value points) appear in the background (as shown in Figure 3(a)).
- However, when the hole is at the edge of the image (as shown in Fig. 3(b)),
- it should not be filled with the smaller-value point. This embodiment therefore treats the two cases differently; the filling rule is given by the following formula:
- d*(p) denotes the disparity value of point p after filling, and
- p1 and p2 are points in the neighborhood around point p (for example, the points above and below p); one direction is taken as an example.
- The formula above means: when all points in the surrounding neighborhood of the current hole point are detected to be non-hole points, the point with the smallest disparity value in the neighborhood is used to fill the current hole point; when a hole point is detected in the surrounding neighborhood, the point with the largest disparity value in the neighborhood is used to fill it.
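The two-case filling rule above can be sketched for a single hole point, taking the vertical direction as the text does; this is an illustration of the rule rather than the full scan over the image.

```python
def fill_hole(disp, is_hole, y, x):
    """Filling rule from the formula above, one (vertical)
    direction: if both neighbours p1, p2 are non-hole points,
    fill with the smaller disparity (background assumption);
    if either neighbour is itself a hole, fill with the larger
    disparity instead."""
    p1, p2 = disp[y - 1][x], disp[y + 1][x]
    if is_hole[y - 1][x] or is_hole[y + 1][x]:
        return max(p1, p2)
    return min(p1, p2)
```

In practice this rule would be applied to every calibrated hole point in turn until all are filled.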
- Step 1.5 Besides hole points, the other common problem areas in disparity maps and depth maps are irregular object contour edges, mainly manifested as protruding areas of "convex disparity" and indented areas of "concave disparity". As shown in FIG. 4, the block areas indicated by S1 and S2 are irregular areas; these areas are collectively referred to as irregular edges. For the detection of irregular areas, this embodiment combines edge information with block information.
- The edge extraction unit 501 of the irregular edge detection module 50 extracts the edge information of the disparity map, and the image blocking unit 502 of the irregular edge detection module 50 blocks the original color image to obtain the block information.
- For edge extraction this embodiment uses the Canny operator; for color image segmentation this embodiment proposes a new superpixel-based blocking method, namely "adaptive superpixels".
- Compared with other segmentation methods, superpixel blocking has lower computational complexity, which strongly affects the speed of the entire post-processing, so this embodiment adopts a superpixel-based blocking method. At the same time, conventional superpixel segmentation is inaccurate in some regions because its scale is relatively fixed; this embodiment therefore proposes adaptively changing the segmentation granularity.
- Performing image segmentation on the color image includes: performing superpixel segmentation on the color image; dividing the grayscale range into preset intervals and, for each superpixel, statistically obtaining a histogram of all pixels falling within the intervals; and determining whether the ratio of the number of pixels contained in the interval with the largest distribution value to the total number of pixels in the current superpixel is less than the first threshold, and if so, further segmenting the current superpixel by a color-block-based method. The details are as follows:
- The proportion of the principal gray component within each superpixel is compared to judge whether the superpixel is accurate.
- The process can be described as follows: divide the grayscale range into five intervals, (0–50), (50–80), (80–150), (150–230), and (230–255); for each superpixel, count the interval into which each pixel falls and generate a histogram of 5 columns, each column representing one of the intervals above. Define the number of pixels in the interval with the largest distribution value as n_max and the total number of pixels in the entire superpixel as n_all; a superpixel whose ratio is smaller than the first threshold θ, i.e. n_max/n_all < θ, is recorded as a superpixel with insufficient segmentation strength.
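The adaptive-superpixel test above can be sketched as follows; the threshold θ = 0.5 is an illustrative value, not one stated in the text.

```python
import numpy as np

def needs_further_split(gray_values, theta=0.5):
    """Adaptive-superpixel test: histogram the superpixel's gray
    values into the five preset intervals and flag the superpixel
    as under-segmented when the dominant interval holds less than
    a fraction theta (the first threshold) of all pixels,
    i.e. n_max / n_all < theta."""
    bins = [0, 50, 80, 150, 230, 256]        # the five preset intervals
    hist, _ = np.histogram(gray_values, bins=bins)
    n_max, n_all = hist.max(), len(gray_values)
    return n_max / n_all < theta
```

A superpixel flagged by this test would then be split further by the color-block-based method.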
- With the edge information of the disparity map and the block information of the color map available, the irregular edge detection unit 503 of the irregular edge detection module 50 uses both to detect irregular edge regions. If there is no problem at an edge of the disparity map, the edge should coincide with an edge of the block graph; if it does not, the edge is considered problematic. As shown in Figure 4, when an edge passes through a block obtained by adaptive superpixel segmentation, the edge is judged to be an irregular edge. Thereafter, if the edge belongs to a convex area, the error region is on the foreground side; if it belongs to a concave area, the error region is on the background side.
- To determine which side of the irregular edge should be marked, this embodiment uses a square window. For a point on the irregular edge, a square window is constructed centered on it; the irregular edge divides the window into two parts of different areas, and the error region lies on the side with the smaller area, which can then be marked as the irregular edge region.
- When the edge is a vertical or horizontal straight line,
- the two parts of the square window may have equal areas. In this case, the square window is simply enlarged until the two parts differ in area, and the side is then determined by the method above.
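The square-window rule above can be sketched as follows; the labels 0/1 marking the two sides of the edge inside the window are an assumed input representation.

```python
import numpy as np

def mark_error_side(window_labels):
    """Square-window rule: the irregular edge splits the window
    into two parts (labelled 0 and 1 per pixel); the error region
    lies on the side with the smaller area. Returns a boolean mask
    of that side, or None when the areas are equal, in which case
    the caller enlarges the window and retries."""
    labels = np.asarray(window_labels)
    area1 = int((labels == 1).sum())
    area0 = labels.size - area1
    if area0 == area1:
        return None                  # enlarge the window and try again
    smaller = 1 if area1 < area0 else 0
    return labels == smaller
```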
- Step 1.6 After the irregularities of the edges in the disparity map are marked, the untidy edge repair module 60 repairs them.
- The error regions are repaired using a weighted median filtering method.
- The principle of median filtering is to replace the value of the center point with the median of all points within a certain range.
- Weighted median filtering builds on traditional median filtering by treating the points in the range differently, for example assigning different weights according to color or distance.
- The filter kernel of the weighted median filter used in this embodiment is the guided filter coefficient (see Rhemann C, Hosni A, Bleyer M, et al.), which takes the standard form
- W_{p,q}(I) = (1/|w|²) ∑_{k:(p,q)∈w_k} (1 + (I_p − u)ᵀ (∑ + εU)⁻¹ (I_q − u)), where
- I_p and I_q are the pixels in the square window,
- |w| is the total number of pixels in the square window,
- I is the guide image,
- ε is the smoothing coefficient, and
- U is the corresponding identity matrix.
- In one configuration, I_p and I_q are six-dimensional vectors, u is a six-dimensional mean vector, and ∑ is a 6×6 cross-correlation matrix;
- in another, I_p and I_q are three-dimensional vectors (R, G, B), u is a three-dimensional mean vector, and ∑ is a 3×3 cross-correlation matrix.
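The weighted-median step itself, independent of which kernel supplies the weights, can be sketched as below; in the embodiment the weights would come from the guided filter coefficients, whereas here they are an arbitrary input.

```python
import numpy as np

def weighted_median(values, weights):
    """Core of weighted median filtering: sort the window values,
    accumulate their weights, and return the first value at which
    the cumulative weight reaches half of the total. With all
    weights equal this reduces to the ordinary median."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return float(v[np.searchsorted(cdf, cdf[-1] / 2.0)])
```

Because a heavily weighted value can dominate the result, outliers in the error region are replaced by well-supported neighboring disparities while edges are preserved.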
- FIG. 6(a) and FIG. 6(b) are schematic diagrams before and after irregular edge repair, respectively.
- After the irregular edge regions in Fig. 6(a) (the convex area and the concave area marked by the boxes) are corrected by the edge repair method provided in this embodiment,
- Fig. 6(b) presents neat edges.
- In other embodiments, the hole detection and filling steps may be omitted, with only the irregular edge region detection and repair steps performed on the image, or the hole points may be detected and filled using prior-art methods.
- An experiment is set up for testing, divided into a test on disparity maps and a test on depth maps.
- For the disparity map, the standard Middlebury data set is used for verification with different stereo matching algorithms.
- For the depth map, images captured by Kinect, a widely used depth acquisition device, are used as the test images.
- Middlebury (http://vision.middlebury.edu/stereo/) provides a professional test platform for stereo matching and corresponding test data.
- The selected test images are shown in Fig. 7, where the first row is the left image, the second row the right image, and the third row the standard disparity map; from left to right are different image pairs.
- the performance of the post-processing optimization technology provided by this embodiment is verified by applying a plurality of different stereo matching algorithms.
- The test algorithms comprise local algorithms and global algorithms; the results are shown in FIG. 8 and FIG. 9, respectively. Here nonocc, all, and disc are three different evaluation indicators,
- denoting non-occluded areas, all areas, and depth-discontinuous areas, respectively.
- The quality of the disparity map is thus measured from three different angles; the ordinate indicates the average error rate, where lower is better.
- the method 1 to method 5 in FIG. 8 are Box Filter, Guided Filter, Cross Region, Information Permeability, and DTAggr methods, respectively.
- Method 1 and Method 2 in Figure 9 are the Graph Cut and Belief Propagation methods, respectively. As the figures show, the method provided by this embodiment was tested on various stereo matching algorithms, and each obtained quality improvements of different degrees, demonstrating its effectiveness.
- A depth map captured by Kinect is selected for the test; the processing result is shown in FIG. 10, with the depth map converted to disparity data for ease of observation. From left to right are the original color map, the unprocessed depth map, and the depth map processed by the present invention. As the figure shows, whether in hole areas or in irregular edge areas, the processing of this embodiment's method brings substantial improvement, demonstrating its effectiveness on monocular depth map data.
Abstract
Description
Claims (12)
- A post-processing method for a depth map/disparity map, characterized by comprising: inputting an image to be processed, the image to be processed being a depth map or a disparity map; performing edge extraction on the image to be processed to obtain edge information; performing image blocking on the color image corresponding to the image to be processed to obtain block information, wherein performing image blocking on the color image comprises: performing superpixel segmentation on the color image; dividing the grayscale range into preset intervals and, for each superpixel, statistically obtaining a histogram of all pixels falling within the intervals; determining whether the ratio of the number of pixels contained in the interval with the largest distribution value to the total number of pixels in the current superpixel is less than a first threshold, and if so, further segmenting the current superpixel by a color-block-based method; obtaining the irregular edge region of the image to be processed according to the edge information and the block information; and repairing the irregular edge region.
- The method according to claim 1, characterized in that, when the image to be processed is a depth map, after the depth map is input, the method further comprises a step of first inverting the non-hole points in the depth map.
- The method according to claim 1, characterized in that obtaining the irregular edge region of the image to be processed according to the edge information and the block information specifically comprises: determining an irregular edge according to the edge information and the block information, constructing a square window centered on any point on the irregular edge, and taking the part with the smaller area after the irregular edge divides the square window as the irregular edge region.
- The method according to claim 3, characterized in that repairing the irregular edge region specifically comprises: repairing the irregular edge region by a weighted median filtering method that uses guided filter coefficients as the filter kernel coefficients.
- The method according to any one of claims 1 to 5, characterized in that, before performing edge extraction on the image to be processed, the method further comprises steps of hole detection and hole filling on the image to be processed, the steps comprising: defining pixels whose disparity value is less than a second threshold as low-trust points; for a low-trust point, determining the point to be a hole point when its disparity value is smaller than those of the points in its surrounding neighborhood by the second threshold; detecting all hole points in the image to be processed in this way; when the points in the surrounding neighborhood of the current hole point are all non-hole points, filling the current hole point with the point having the smallest disparity value in the surrounding neighborhood; when a hole point exists in the surrounding neighborhood of the current hole point, filling the current hole point with the point having the largest disparity value in the surrounding neighborhood; and filling all hole points in the image to be processed in this way.
- A post-processing apparatus for a depth map/disparity map, characterized by comprising: an input module configured to input an image to be processed, the image to be processed being a depth map or a disparity map; an irregular edge detection module comprising an edge extraction unit, an image blocking unit, and an irregular edge detection unit, wherein the edge extraction unit is configured to perform edge extraction on the image to be processed to obtain edge information, the image blocking unit is configured to perform image blocking on the color image corresponding to the image to be processed to obtain block information, and the irregular edge detection unit is configured to obtain the irregular edge region of the image to be processed according to the edge information and the block information; wherein, when the image blocking unit performs image blocking on the color image, specifically: the image blocking unit performs superpixel segmentation on the color image; divides the grayscale range into preset intervals and, for each superpixel, statistically obtains a histogram of all pixels falling within the intervals; and determines whether the ratio of the number of pixels contained in the interval with the largest distribution value to the total number of pixels in the current superpixel is less than a first threshold, and if so, further segments the current superpixel by a color-block-based method; and an irregular edge repair module configured to repair the irregular edge region.
- The apparatus according to claim 7, characterized by further comprising a pre-processing module configured to, when the image to be processed is a depth map, invert the non-hole points in the depth map input by the input module.
- The apparatus according to claim 7, characterized in that, when the irregular edge detection unit obtains the irregular edge region of the image to be processed according to the edge information and the block information, the irregular edge detection unit determines an irregular edge according to the edge information and the block information, constructs a square window centered on any point on the irregular edge, and takes the part with the smaller area after the irregular edge divides the square window as the irregular edge region.
- The apparatus according to claim 9, characterized in that, when the irregular edge repair module repairs the irregular edge region, the irregular edge repair module repairs the irregular edge region by a weighted median filtering method that uses guided filter coefficients as the filter kernel coefficients.
- The apparatus according to any one of claims 7 to 11, characterized by further comprising: a hole detection module configured to define pixels whose disparity value is less than a second threshold as low-trust points, to determine a low-trust point to be a hole point when its disparity value is smaller than those of the points in its surrounding neighborhood by the second threshold, and to detect all hole points in the image to be processed in this way; and a hole-filling module configured to fill the current hole point with the point having the smallest disparity value in the surrounding neighborhood when the points in the surrounding neighborhood of the current hole point are all non-hole points, to fill the current hole point with the point having the largest disparity value in the surrounding neighborhood when a hole point exists in the surrounding neighborhood, and to fill all hole points in the image to be processed in this way.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580000247.6A CN105517677B (zh) | 2015-05-06 | 2015-05-06 | 深度图/视差图的后处理方法和装置 |
US15/565,877 US10424075B2 (en) | 2015-05-06 | 2015-05-06 | Depth/disparity map post-processing method and device |
PCT/CN2015/078382 WO2016176840A1 (zh) | 2015-05-06 | 2015-05-06 | 深度图/视差图的后处理方法和装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/078382 WO2016176840A1 (zh) | 2015-05-06 | 2015-05-06 | 深度图/视差图的后处理方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016176840A1 true WO2016176840A1 (zh) | 2016-11-10 |
Family
ID=55724960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/078382 WO2016176840A1 (zh) | 2015-05-06 | 2015-05-06 | 深度图/视差图的后处理方法和装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10424075B2 (zh) |
CN (1) | CN105517677B (zh) |
WO (1) | WO2016176840A1 (zh) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107993239A (zh) * | 2017-12-25 | 2018-05-04 | 北京邮电大学 | 一种计算单目图像的深度次序的方法和装置 |
GB2563596A (en) * | 2017-06-19 | 2018-12-26 | Shortbite Ltd | System and method for modeling a three dimensional space based on a two dimensional image |
CN109684932A (zh) * | 2018-11-30 | 2019-04-26 | 华南农业大学 | 一种基于双目视觉的托盘位姿识别方法 |
US20190362511A1 (en) * | 2018-05-23 | 2019-11-28 | Apple Inc. | Efficient scene depth map enhancement for low power devices |
CN111223059A (zh) * | 2020-01-04 | 2020-06-02 | 西安交通大学 | 一种基于引导滤波器的鲁棒深度图结构重建和去噪方法 |
CN111292367A (zh) * | 2020-02-18 | 2020-06-16 | 青岛联合创智科技有限公司 | 一种基线可变的双目相机深度图生成方法 |
CN111833393A (zh) * | 2020-07-05 | 2020-10-27 | 桂林电子科技大学 | 一种基于边缘信息的双目立体匹配方法 |
CN112053394A (zh) * | 2020-07-14 | 2020-12-08 | 北京迈格威科技有限公司 | 图像处理方法、装置、电子设备及存储介质 |
WO2022160587A1 (zh) * | 2021-01-26 | 2022-08-04 | 深圳市商汤科技有限公司 | 深度检测方法、装置、电子设备、存储介质及程序产品 |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10462445B2 (en) * | 2016-07-19 | 2019-10-29 | Fotonation Limited | Systems and methods for estimating and refining depth maps |
US10839535B2 (en) | 2016-07-19 | 2020-11-17 | Fotonation Limited | Systems and methods for providing depth map information |
CN106231292B (zh) * | 2016-09-07 | 2017-08-25 | 深圳超多维科技有限公司 | 一种立体虚拟现实直播方法、装置及设备 |
CN106341676B (zh) * | 2016-09-29 | 2017-06-16 | 济南大学 | 基于超像素的深度图像预处理和深度空洞填充方法 |
TWI595771B (zh) | 2016-10-20 | 2017-08-11 | 聚晶半導體股份有限公司 | 影像深度資訊的優化方法與影像處理裝置 |
KR102351542B1 (ko) * | 2017-06-23 | 2022-01-17 | 삼성전자주식회사 | 시차 보상 기능을 갖는 애플리케이션 프로세서, 및 이를 구비하는 디지털 촬영 장치 |
CN108537798B (zh) * | 2017-11-29 | 2021-05-18 | 浙江工业大学 | 一种快速超像素分割方法 |
US10645357B2 (en) * | 2018-03-01 | 2020-05-05 | Motorola Mobility Llc | Selectively applying color to an image |
US11501543B2 (en) * | 2018-03-26 | 2022-11-15 | Videonetics Technology Private Limited | System and method for automatic real-time localization of license plate of vehicle from plurality of images of the vehicle |
CN108596040A (zh) * | 2018-03-29 | 2018-09-28 | 中山大学 | 一种基于双目视觉的串联通道融合行人检测方法 |
US10621730B2 (en) * | 2018-05-22 | 2020-04-14 | Sony Corporation | Missing feet recovery of a human object from an image sequence based on ground plane detection |
US10878590B2 (en) * | 2018-05-25 | 2020-12-29 | Microsoft Technology Licensing, Llc | Fusing disparity proposals in stereo matching |
CN109636732B (zh) * | 2018-10-24 | 2023-06-23 | 深圳先进技术研究院 | 一种深度图像的空洞修复方法以及图像处理装置 |
CN109522833A (zh) * | 2018-11-06 | 2019-03-26 | 深圳市爱培科技术股份有限公司 | 一种用于道路检测的双目视觉立体匹配方法及系统 |
CN111383185B (zh) * | 2018-12-29 | 2023-09-22 | 海信集团有限公司 | 一种基于稠密视差图的孔洞填充方法及车载设备 |
CN112541920A (zh) * | 2019-09-23 | 2021-03-23 | 大连民族大学 | 基于多通道式的图像超像素目标行人分割方法 |
CN110675346B (zh) * | 2019-09-26 | 2023-05-30 | 武汉科技大学 | 适用于Kinect的图像采集与深度图增强方法及装置 |
CN110796600B (zh) * | 2019-10-29 | 2023-08-11 | Oppo广东移动通信有限公司 | 一种图像超分重建方法、图像超分重建装置及电子设备 |
KR20210056540A (ko) | 2019-11-11 | 2021-05-20 | 삼성전자주식회사 | 디스패리티 이미지를 생성하는 알고리즘 갱신 방법 및 장치 |
CN111127535B (zh) * | 2019-11-22 | 2023-06-20 | 北京华捷艾米科技有限公司 | 一种手部深度图像的处理方法及装置 |
CN111243000A (zh) * | 2020-01-13 | 2020-06-05 | 北京工业大学 | 多约束代价计算与聚合的立体匹配方法 |
CN111784703B (zh) * | 2020-06-17 | 2023-07-14 | 泰康保险集团股份有限公司 | 一种图像分割方法、装置、电子设备和存储介质 |
CN113838075B (zh) * | 2020-06-23 | 2024-01-09 | 南宁富联富桂精密工业有限公司 | 单目测距方法、装置及计算机可读存储介质 |
CN112016441B (zh) * | 2020-08-26 | 2023-10-13 | 大连海事大学 | 基于Radon变换多特征融合的Sentinel-1图像海岸带养殖池提取方法 |
US20220076502A1 (en) * | 2020-09-08 | 2022-03-10 | XRSpace CO., LTD. | Method for adjusting skin tone of avatar and avatar skin tone adjusting system |
CN112529773B (zh) * | 2020-12-17 | 2024-02-02 | 豪威科技(武汉)有限公司 | QPD image post-processing method and QPD camera |
CN113160297A (zh) * | 2021-04-25 | 2021-07-23 | Oppo广东移动通信有限公司 | Image depth estimation method and apparatus, electronic device, and computer-readable storage medium |
CN113516699A (zh) * | 2021-05-18 | 2021-10-19 | 哈尔滨理工大学 | Stereo matching system based on superpixel segmentation |
CN113792583A (zh) * | 2021-08-03 | 2021-12-14 | 北京中科慧眼科技有限公司 | Obstacle detection method and system based on drivable area, and intelligent terminal |
CN113345015A (zh) * | 2021-08-05 | 2021-09-03 | 浙江华睿科技股份有限公司 | Parcel position detection method, apparatus, device, and readable storage medium |
CN114866758B (zh) * | 2022-05-31 | 2024-02-23 | 星宸科技股份有限公司 | Disparity image filling method and image processing apparatus |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140003711A1 (en) * | 2012-06-29 | 2014-01-02 | Hong Kong Applied Science And Technology Research Institute Co. Ltd. | Foreground extraction and depth initialization for multi-view baseline images |
CN103942756A (zh) * | 2014-03-13 | 2014-07-23 | 华中科技大学 | Depth map post-processing filtering method |
CN104537627A (zh) * | 2015-01-08 | 2015-04-22 | 北京交通大学 | Post-processing method for depth images |
US20150110391A1 (en) * | 2013-10-21 | 2015-04-23 | Nokia Corporation | Method and apparatus for scene segmentation from focal stack images |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4037659C2 (de) * | 1990-11-27 | 1998-04-09 | Dbt Gmbh | Trough section for chain scraper conveyors, in particular for mining use |
US20010035502A1 (en) * | 2000-03-13 | 2001-11-01 | Satoshi Arakawa | Radiation image storage panel and cassette |
WO2006041812A2 (en) * | 2004-10-05 | 2006-04-20 | Threeflow, Inc. | Method of producing improved lenticular images |
US8029139B2 (en) * | 2008-01-29 | 2011-10-04 | Eastman Kodak Company | 2D/3D switchable color display apparatus with narrow band emitters |
US9380292B2 (en) * | 2009-07-31 | 2016-06-28 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene |
US8571314B2 (en) * | 2010-09-02 | 2013-10-29 | Samsung Electronics Co., Ltd. | Three-dimensional display system with depth map mechanism and method of operation thereof |
US9123115B2 (en) * | 2010-11-23 | 2015-09-01 | Qualcomm Incorporated | Depth estimation based on global motion and optical flow |
US9087375B2 (en) * | 2011-03-28 | 2015-07-21 | Sony Corporation | Image processing device, image processing method, and program |
EP2786580B1 (en) * | 2011-11-30 | 2015-12-16 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Spatio-temporal disparity-map smoothing by joint multilateral filtering |
US8989515B2 (en) * | 2012-01-12 | 2015-03-24 | Kofax, Inc. | Systems and methods for mobile image capture and processing |
KR102033618B1 (ko) * | 2012-12-18 | 2019-10-17 | 엘지디스플레이 주식회사 | Display device and driving method thereof |
US9519972B2 (en) * | 2013-03-13 | 2016-12-13 | Kip Peli P1 Lp | Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies |
JP6136537B2 (ja) * | 2013-04-26 | 2017-05-31 | オムロン株式会社 | Image processing device, image processing method, image processing control program, and recording medium |
JP2015156607A (ja) * | 2014-02-21 | 2015-08-27 | ソニー株式会社 | Image processing device, image processing method, and electronic apparatus |
US10089740B2 (en) * | 2014-03-07 | 2018-10-02 | Fotonation Limited | System and methods for depth regularization and semiautomatic interactive matting using RGB-D images |
EP3086552A1 (en) * | 2015-04-20 | 2016-10-26 | Thomson Licensing | Method and apparatus for image colorization |
2015
- 2015-05-06 WO PCT/CN2015/078382 patent/WO2016176840A1/zh active Application Filing
- 2015-05-06 CN CN201580000247.6A patent/CN105517677B/zh active Active
- 2015-05-06 US US15/565,877 patent/US10424075B2/en active Active
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2563596B (en) * | 2017-06-19 | 2021-06-09 | Shortbite Ltd | System and method for modeling a three dimensional space based on a two dimensional image |
GB2563596A (en) * | 2017-06-19 | 2018-12-26 | Shortbite Ltd | System and method for modeling a three dimensional space based on a two dimensional image |
CN107993239B (zh) * | 2017-12-25 | 2022-04-12 | 北京邮电大学 | Method and apparatus for computing the depth order of a monocular image |
CN107993239A (zh) * | 2017-12-25 | 2018-05-04 | 北京邮电大学 | Method and apparatus for computing the depth order of a monocular image |
US20190362511A1 (en) * | 2018-05-23 | 2019-11-28 | Apple Inc. | Efficient scene depth map enhancement for low power devices |
US10755426B2 (en) * | 2018-05-23 | 2020-08-25 | Apple Inc. | Efficient scene depth map enhancement for low power devices |
CN109684932A (zh) * | 2018-11-30 | 2019-04-26 | 华南农业大学 | Pallet pose recognition method based on binocular vision |
CN109684932B (zh) * | 2018-11-30 | 2023-05-23 | 华南农业大学 | Pallet pose recognition method based on binocular vision |
CN111223059A (zh) * | 2020-01-04 | 2020-06-02 | 西安交通大学 | Robust depth map structure reconstruction and denoising method based on guided filtering |
CN111223059B (zh) * | 2020-01-04 | 2022-02-11 | 西安交通大学 | Robust depth map structure reconstruction and denoising method based on guided filtering |
CN111292367A (zh) * | 2020-02-18 | 2020-06-16 | 青岛联合创智科技有限公司 | Depth map generation method for a binocular camera with variable baseline |
CN111292367B (zh) * | 2020-02-18 | 2023-04-07 | 青岛联合创智科技有限公司 | Depth map generation method for a binocular camera with variable baseline |
CN111833393A (zh) * | 2020-07-05 | 2020-10-27 | 桂林电子科技大学 | Binocular stereo matching method based on edge information |
CN112053394A (zh) * | 2020-07-14 | 2020-12-08 | 北京迈格威科技有限公司 | Image processing method and apparatus, electronic device, and storage medium |
WO2022160587A1 (zh) * | 2021-01-26 | 2022-08-04 | 深圳市商汤科技有限公司 | Depth detection method and apparatus, electronic device, storage medium, and program product |
Also Published As
Publication number | Publication date |
---|---|
CN105517677B (zh) | 2018-10-12 |
US10424075B2 (en) | 2019-09-24 |
US20180061068A1 (en) | 2018-03-01 |
CN105517677A (zh) | 2016-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016176840A1 (zh) | Post-processing method and apparatus for depth maps/disparity maps | |
US9171372B2 (en) | Depth estimation based on global motion | |
US9123115B2 (en) | Depth estimation based on global motion and optical flow | |
KR101055411B1 (ko) | Method and apparatus for generating stereoscopic images | |
AU2022203854A1 (en) | Methods and systems for large-scale determination of RGBD camera poses | |
TWI489418B (zh) | Parallax Estimation Depth Generation | |
CN109345502B (zh) | Stereoscopic image quality assessment method based on extraction of stereo structure information from disparity maps | |
KR100793076B1 (ko) | Edge-adaptive stereo/multi-view image matching apparatus and method | |
EP3311361B1 (en) | Method and apparatus for determining a depth map for an image | |
KR100745691B1 (ko) | Binocular or multi-view stereo matching apparatus and method using occlusion region detection | |
KR20110014067A (ko) | Method and system for converting stereo content | |
WO2012020558A1 (ja) | Image processing device, image processing method, display device, display method, and program | |
CN111105452B (zh) | High- and low-resolution fusion stereo matching method based on binocular vision | |
CN110120012A (zh) | Video stitching method based on synchronized keyframe extraction from binocular cameras | |
Muddala et al. | Depth-based inpainting for disocclusion filling | |
Chen et al. | Depth map generation based on depth from focus | |
US20230162338A1 (en) | Virtual viewpoint synthesis method, electronic apparatus, and computer readable medium | |
Jorissen et al. | Multi-view wide baseline depth estimation robust to sparse input sampling | |
Akimov et al. | Single-image depth map estimation using blur information | |
Wang et al. | Quality assessment for DIBR-synthesized images with local and global distortions | |
Devernay et al. | Focus mismatch detection in stereoscopic content | |
US20130108149A1 (en) | Processing Method for a Pair of Stereo Images | |
CN112767317B (zh) | Grating film detection method for glasses-free 3D displays | |
Chen et al. | Research on safe distance measuring method of front vehicle in foggy environment | |
CN112991419B (zh) | Disparity data generation method and apparatus, computer device, and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15891092 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15565877 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.04.2018) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15891092 Country of ref document: EP Kind code of ref document: A1 |