WO2016176840A1 - Depth/disparity map post-processing method and device - Google Patents

Depth/disparity map post-processing method and device

Info

Publication number
WO2016176840A1
WO2016176840A1 (PCT/CN2015/078382, CN2015078382W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
point
processed
edge
hole
Prior art date
Application number
PCT/CN2015/078382
Other languages
English (en)
French (fr)
Inventor
焦剑波
王荣刚
王振宇
王文敏
高文
Original Assignee
Peking University Shenzhen Graduate School (北京大学深圳研究生院)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School (北京大学深圳研究生院)
Priority to CN201580000247.6A (CN105517677B)
Priority to US15/565,877 (US10424075B2)
Priority to PCT/CN2015/078382 (WO2016176840A1)
Publication of WO2016176840A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/77
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/50 Depth or shape recovery
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme involving 3D image data
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Definitions

  • The present application relates to the field of three-dimensional image processing technologies, and in particular to a post-processing method and apparatus for a depth map/disparity map.
  • A contact three-dimensional scanner measures information such as the three-dimensional coordinates of an object by physically touching it and derives the depth from these measurements. This method is very accurate, but because it contacts the object it can easily damage it, and it takes a long time, so it is rarely used.
  • The other method, non-contact three-dimensional scanning, can measure the depth information of an object without touching it, and can be divided into active scanning and passive scanning.
  • Active scanning measures depth information by actively emitting signals or energy.
  • Passive scanning emits no energy; depth information is computed from image information alone.
  • Common active scanning methods include time-of-flight and triangulation ranging with laser rangefinders, as well as structured-light methods realized with projected patterns.
  • Common passive scanning methods include stereo matching and shading-based shape recovery, which are realized mainly through algorithmic computation.
  • Both active and passive scanning generate a depth map corresponding to the measured scene.
  • The depth map is a grayscale image in which the gray level represents how near or far an object is.
  • The quality of the depth map has a huge impact on later applications.
  • However, depth maps obtained with current mainstream methods have many defects, such as black hole points and irregular object edges.
  • For depth maps obtained by active scanning, noise is generally removed only by filtering; compared with active scanning, stereo matching in passive scanning has one extra viewpoint, so the information of the two viewpoints can be used to correct the depth map.
  • Depth maps are generally corrected by consistency checking between the left and right depth maps: inconsistent regions are detected and then filtered.
  • Although depth map (or disparity map) post-processing in stereo matching is more refined than in active scanning, black holes and irregular edges still remain.
  • Depth information, as a key technology in many frontier fields and new applications, has received extensive attention, and a large number of methods exist for obtaining it.
  • However, the quality of depth maps is limited by current technology.
  • Some methods have begun to study the post-processing of depth maps.
  • The processed images still contain black hole points and irregular object contours, which seriously affect subsequent applications; post-processing of depth maps is therefore an urgent problem to be solved and perfected.
  • The present application provides a post-processing method for a depth map/disparity map, including: inputting an image to be processed, the image to be processed being a depth map or a disparity map.
  • Performing image blocking on the color image includes: performing superpixel segmentation on the color image; dividing the grayscale range into a preset number of intervals and, for each superpixel, computing a histogram of the intervals into which all of its pixels fall; and determining whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than a first threshold, and if so, further segmenting the current superpixel with a color-block-based method.
  • The present application also provides a post-processing device for a depth map/disparity map, including:
  • an input module configured to input an image to be processed, where the image to be processed is a depth map or a disparity map;
  • an irregular edge detection module, comprising an edge extraction unit, an image blocking unit, and an irregular edge detection unit; the edge extraction unit is configured to perform edge extraction on the image to be processed to obtain edge information; the image blocking unit is configured to perform image blocking on the color image corresponding to the image to be processed to obtain block information; and the irregular edge detection unit is configured to obtain the irregular edge regions of the image to be processed from the edge information and the block information;
  • wherein, when the image blocking unit performs image blocking on the color image, the image blocking unit performs superpixel segmentation on the color image; divides the grayscale range into a preset number of intervals and, for each superpixel, computes a histogram of the intervals into which all of its pixels fall; and determines whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than the first threshold, and if so, further segments the current superpixel with the color-block-based method; and
  • an irregular edge repair module for repairing the irregular edge regions.
  • FIG. 1 is a block diagram of a depth map/disparity map post-processing device according to an embodiment of the present application.
  • FIG. 2 is a flowchart of a depth map/disparity map post-processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of hole filling in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the detection procedure for irregular edge regions in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the repair procedure for irregular edge regions in an embodiment of the present application.
  • FIG. 6 is a before/after comparison of irregular edge region repair in an embodiment of the present application.
  • FIG. 7 shows the Middlebury test images.
  • FIG. 8 shows before/after comparison results of applying the method provided by this embodiment to local stereo matching algorithms.
  • FIG. 9 shows before/after comparison results of applying the method provided by this embodiment to global stereo matching algorithms.
  • FIG. 10 shows before/after comparison results of applying the method provided by this embodiment to Kinect depth maps.
  • The post-processing method and device for the depth map/disparity map make up for the shortcomings of current depth map/disparity map optimization and propose a new post-processing optimization method that improves the quality both of the disparity map obtained by stereo matching and of the depth map obtained by active scanning.
  • With the post-processing method and device provided by the present application, the problem regions and error points commonly found in depth maps/disparity maps can be corrected well. Compared with existing disparity map post-processing methods, the method can find and solve more problem regions, supports depth maps obtained with a monocular camera, has wider applicability, and can greatly improve the quality of the disparity/depth map.
  • The post-processing device provided by the embodiment includes an input module 10, a preprocessing module 20, a hole detection module 30, a hole filling module 40, an irregular edge detection module 50, and an irregular edge repair module 60.
  • The irregular edge detection module 50 includes an edge extraction unit 501, an image blocking unit 502, and an irregular edge detection unit 503.
  • FIG. 2 is a flowchart of the depth map/disparity map post-processing method provided by the embodiment; the method includes the following steps.
  • Step 1.1: The input module 10 inputs the image to be processed, which may be a depth map or a disparity map.
  • Step 1.2: When the input image to be processed is a depth map, the preprocessing module 20 first preprocesses the depth map and converts it into unified disparity data. Since depth maps and disparity maps are both grayscale images whose gray levels are inversely proportional to each other, preprocessing a depth map amounts to inverting it. Note, however, that a depth map may itself contain many black hole points; simple inversion would turn these "black holes" white and seriously interfere with the subsequent disparity processing, so hole points are not inverted.
  • The preprocessing inverts the gray value of every non-hole point, where D(p) represents the gray value of point p in the depth map and d(p) represents the gray value of point p in the converted disparity data (hereinafter collectively referred to as the disparity map).
  • Step 1.3: The hole detection module 30 performs hole detection on the preprocessed image to be processed.
  • After preprocessing, all of the information to be processed is disparity data.
  • This embodiment first handles the black hole points remaining in it.
  • Although traditional disparity map post-processing fills the "zero-value disparity" points of the disparity map obtained by stereo matching, many obvious black hole points still remain; the disparity values of these points may be nonzero, so they were not filled, yet they still belong to the erroneous disparity points.
  • A point is classified by whether its disparity value is smaller than a sufficiently small threshold d_λ = λ*d_max, where λ and d_max are the penalty coefficient and the maximum disparity value, respectively; if it is smaller than the threshold, it is considered a low-confidence point, otherwise a high-confidence point.
  • The "disparity value" and the "gray value" of a pixel mentioned in this embodiment can be considered the same concept, because the disparity value of a pixel in the image is represented by its gray value.
  • Step 1.4: The hole filling module 40 fills the calibrated hole points.
  • The traditional filling method fills a hole directly with the surrounding points of smaller disparity, i.e., with background points (small disparity), based on the assumption that hole points (here, zero-value points) appear in the background (as shown in FIG. 3(a)).
  • When the hole point is at an edge in the image (as shown in FIG. 3(b)), filling with the smaller-valued point cannot repair the hole; this embodiment therefore treats the two cases differently.
  • d*(p) represents the disparity value of point p after filling.
  • p1 and p2 are points in the neighborhood around point p (for example, the points above and below p); one direction is taken as an example.
  • The filling rule means: when all points in the surrounding neighborhood of the current hole point are detected to be non-hole points, the point with the smallest disparity value in the neighborhood is used to fill the current hole point; when hole points are detected in the surrounding neighborhood of the current hole point, the point with the largest disparity value in the neighborhood is used to fill it.
  • Step 1.5: Besides hole points, the common problem regions in disparity maps and depth maps also include irregular regions along the contour edges of objects, mainly protruding regions of "convex disparity" and sunken regions of "concave disparity". As shown in FIG. 4, the boxed regions S1 and S2 are such irregular regions, collectively called irregular edge regions. For the detection of irregular regions, this embodiment combines edge information with block information.
  • The edge extraction unit 501 of the irregular edge detection module 50 extracts the edge information of the disparity map, and the image blocking unit 502 of the irregular edge detection module 50 blocks the original color image to obtain the block information.
  • For edge extraction, this embodiment uses the Canny operator; for color image blocking, this embodiment proposes a new superpixel-based blocking method, "adaptive superpixels".
  • Unlike traditional color blocking methods such as Mean Shift, the superpixel blocking method has lower computational complexity, which strongly affects the speed of the entire post-processing stage, so this embodiment adopts a superpixel-based blocking method. At the same time, conventional superpixel segmentation is inaccurate in some regions because its scale is relatively fixed; this embodiment therefore adaptively changes the segmentation accuracy.
  • Performing image blocking on the color image includes: performing superpixel segmentation on the color image; dividing the grayscale range into a preset number of intervals and, for each superpixel, computing a histogram of the intervals into which all of its pixels fall; and determining whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than the first threshold, and if so, further segmenting the current superpixel with the color-block-based method. The details are as follows.
  • Whether a superpixel is segmented accurately is judged by comparing the proportion of the principal component in each superpixel.
  • The process can be described as: divide the grayscale range into five intervals, (0 to 50), (50 to 80), (80 to 150), (150 to 230), and (230 to 255); for each superpixel, count the interval into which each of its pixels falls and generate a histogram, i.e., a distribution of five columns, one per interval; define the number of pixels in the interval with the largest histogram value as n_max and the total number of pixels in the whole superpixel as n_all; a superpixel whose ratio is smaller than the first threshold ρ, i.e., n_max/n_all < ρ, is recorded as insufficiently segmented.
  • With the edge information of the disparity map and the block information of the color image available, the irregular edge detection unit 503 of the irregular edge detection module 50 uses both to detect irregular edge regions. If an edge of the disparity map has no problem, it should coincide with a block boundary of the block map; if it does not, the edge is considered problematic. As shown in FIG. 4, when an edge crosses a block produced by adaptive superpixel segmentation, the edge is judged to be an irregular edge. Then, if the edge belongs to the edge of a convex region, the error region is on the foreground side; if it belongs to the edge of a concave region, the error region is on the background side.
  • This embodiment uses a square window to decide which side of the irregular edge to mark. For a point on the irregular edge, a square window is constructed centered on it; the irregular edge divides the window into two parts of unequal area, the error region lies on the side of the smaller part, and the error region (irregular edge region) can then be marked.
  • In the special case where the edge is a vertical or horizontal straight line, the two parts of the square window may have equal areas; in this case the size of the square window is simply increased until the two parts differ in area, after which the above rule is applied again.
  • Step 1.6: After the irregular edge regions in the disparity map are marked, the irregular edge repair module 60 repairs them.
  • The error regions are repaired using a weighted median filtering method.
  • The principle of median filtering is to replace the value of the center point with the median of all points within a certain range, completing the filtering.
  • Weighted median filtering builds on traditional median filtering by treating the points within the range differently, for example assigning different weights according to color or distance.
  • The filter kernel of the weighted median filter used in this embodiment is the guided filter coefficient (see Rhemann C, Hosni A, Bleyer M, et al. Fast cost-volume filtering for visual correspondence and beyond. CVPR 2011: 3017-3024).
  • In the kernel formula, p and q are pixels in the square window, |w| is the total number of pixels in the square window, I is the guide image, ε is the smoothing coefficient, and U is the corresponding identity matrix.
  • When the image to be processed is a disparity map, I_p and I_q are six-dimensional vectors, u is a six-dimensional mean vector, and Σ is a 6*6 cross-correlation matrix.
  • When the image to be processed is a depth map, I_p and I_q are three-dimensional vectors (R, G, B), u is a three-dimensional mean vector, and Σ is a 3*3 cross-correlation matrix.
  • FIG. 6(a) and FIG. 6(b) are schematic diagrams before and after irregular edge repair, respectively. In FIG. 6(a), the box-marked regions are convex irregular edge regions and the other marked regions are concave irregular edge regions; after correction with the edge repair method provided by this embodiment, FIG. 6(b) presents neat edges.
  • In some embodiments, when the image contains few hole points, the hole detection and filling steps may be omitted and only the irregular edge region detection and repair steps performed on the image; alternatively, hole detection and filling may be carried out with prior-art methods.
  • To verify performance, experiments were set up, divided into tests on disparity maps and tests on depth maps.
  • For disparity maps, the Middlebury standard data set is used and verified with different stereo matching algorithms.
  • For depth maps, Kinect, a currently common depth acquisition device, supplies the test images.
  • Middlebury (http://vision.middlebury.edu/stereo/) provides a professional test platform for stereo matching and the corresponding test data.
  • The selected test images are shown in FIG. 7: the first row contains the left images, the second row the right images, and the third row the ground-truth disparity maps; from left to right are the different image pairs.
  • The performance of the post-processing optimization provided by this embodiment is verified by applying it to a variety of different stereo matching algorithms.
  • The tested algorithms include local and global algorithms; the results are shown in FIG. 8 and FIG. 9, respectively. Here nonocc, all, and disc are three different evaluation metrics,
  • denoting non-occluded regions, all regions, and depth-discontinuity regions, respectively.
  • They measure the quality of the disparity map from three different angles; the ordinate is the average error rate (lower is better) and the abscissa lists the different algorithms.
  • Methods 1 to 5 in FIG. 8 are the Box Filter, Guided Filter, Cross Region, Information Permeability, and DTAggr methods, respectively.
  • Methods 1 and 2 in FIG. 9 are the Graph Cut and Belief Propagation methods, respectively. The figures show that the method provided by this embodiment, tested on a variety of stereo matching algorithms, yields quality improvements of varying degrees, demonstrating its effectiveness.
  • Depth maps captured by Kinect were selected for testing; the results are shown in FIG. 10 (for ease of observation, the depth maps have been converted to disparity data). From left to right are the original color image, the unprocessed depth map, and the depth map processed by the present invention. The figure shows that both the hole regions and the irregular edges are improved to a large extent by the method of this embodiment, demonstrating its effectiveness on monocular depth map data.

Abstract

A post-processing method and device for a depth map/disparity map. When detecting irregular edge regions, edge information is combined with block information. When performing image blocking on the color image, the color image is first segmented into superpixels; the grayscale range is divided into a preset number of intervals, and for each superpixel a histogram of the intervals into which all of its pixels fall is computed; whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than a first threshold is then determined, and if so, the current superpixel is further segmented with a color-block-based method. This improves the accuracy of color image segmentation while maintaining processing speed, thereby improving the accuracy of irregular edge region detection.

Description

Depth/disparity map post-processing method and device

TECHNICAL FIELD

The present application relates to the field of three-dimensional image processing technologies, and in particular to a post-processing method and device for a depth map/disparity map.

BACKGROUND

With the development of technology and the continuous growth of people's needs, acquiring information about the external world has become increasingly important. From the earliest black-and-white photographs to color photographs, and on to video capable of recording temporal information, the means of recording and presenting the world people live in have kept improving. The 3D technologies that have emerged in recent years have changed the way humans perceive the world even more profoundly: applications such as 3D movies, glasses-free 3D television, virtual reality, and augmented reality have greatly enriched people's lives and made some scientific research more convenient. The key difference between these applications and earlier ones is the added "depth" information, which creates a three-dimensional visual experience and a stronger sense of immersion. Research on depth information has therefore become a hot topic.

Many methods already exist for acquiring depth information, broadly divided into contact and non-contact three-dimensional scanning. A contact three-dimensional scanner measures information such as the three-dimensional coordinates of an object by physically touching it and derives the depth from these measurements; this method is very accurate, but because it contacts the object it can easily damage it, and it takes a long time, so it is rarely used. The other method, non-contact three-dimensional scanning, measures the depth of an object without touching it and can be further divided into active and passive scanning: active scanning measures depth information by actively emitting signals or energy, whereas passive scanning emits no energy and computes depth solely from image information. Common active scanning methods include time-of-flight and triangulation ranging with laser rangefinders, as well as structured-light methods realized with projected patterns; common passive scanning methods include stereo matching and shading-based shape recovery, which are realized mainly through algorithmic computation.

Whether scanning is active or passive, it generates a depth map of the measured scene: a grayscale image whose gray levels represent how near or far objects are. As noted above, the quality of the depth map has a huge impact on later applications; however, the depth maps produced by today's mainstream acquisition methods have many defects, such as black hole points and irregular object edges. For depth maps obtained by active scanning, noise is generally removed only by filtering. Compared with active scanning, stereo matching in passive scanning has one extra viewpoint, so the information of the two viewpoints can be used to correct the depth map, typically by consistency checking between the left and right depth maps: inconsistent regions are detected and then filtered. Although the depth map (or disparity map) post-processing in stereo matching is more refined than in active scanning, black holes and irregular edges still remain.

In summary, depth information, as a key technology of many frontier fields and new applications, has received wide attention, and a large number of methods exist for acquiring it; yet the quality of depth maps is limited by current technology and suffers from many problems. Some methods have begun to study depth map post-processing, but the processed images still contain black hole points and irregular object contours, which seriously affect subsequent applications. Post-processing of depth maps is therefore an urgent problem to be solved and perfected.
SUMMARY

According to a first aspect, the present application provides a post-processing method for a depth map/disparity map, including:

inputting an image to be processed, the image to be processed being a depth map or a disparity map;

performing edge extraction on the image to be processed to obtain edge information, and performing image blocking on the color image corresponding to the image to be processed to obtain block information;

wherein performing image blocking on the color image includes: performing superpixel segmentation on the color image; dividing the grayscale range into a preset number of intervals and, for each superpixel, computing a histogram of the intervals into which all of its pixels fall; and determining whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than a first threshold, and if so, further segmenting the current superpixel with a color-block-based method;

obtaining the irregular edge regions of the image to be processed from the edge information and the block information; and

repairing the irregular edge regions.

According to a second aspect, the present application provides a post-processing device for a depth map/disparity map, including:

an input module configured to input an image to be processed, the image to be processed being a depth map or a disparity map;

an irregular edge detection module, which includes an edge extraction unit, an image blocking unit, and an irregular edge detection unit; the edge extraction unit is configured to perform edge extraction on the image to be processed to obtain edge information; the image blocking unit is configured to perform image blocking on the color image corresponding to the image to be processed to obtain block information; and the irregular edge detection unit is configured to obtain the irregular edge regions of the image to be processed from the edge information and the block information;

wherein, when the image blocking unit performs image blocking on the color image, the image blocking unit performs superpixel segmentation on the color image; divides the grayscale range into a preset number of intervals and, for each superpixel, computes a histogram of the intervals into which all of its pixels fall; and determines whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than the first threshold, and if so, further segments the current superpixel with the color-block-based method; and

an irregular edge repair module configured to repair the irregular edge regions.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a depth map/disparity map post-processing device in an embodiment of the present application;
FIG. 2 is a flowchart of a depth map/disparity map post-processing method in an embodiment of the present application;
FIG. 3 is a schematic diagram of hole filling in an embodiment of the present application;
FIG. 4 is a schematic diagram of the detection procedure for irregular edge regions in an embodiment of the present application;
FIG. 5 is a schematic diagram of the repair procedure for irregular edge regions in an embodiment of the present application;
FIG. 6 is a before/after comparison of irregular edge region repair in an embodiment of the present application;
FIG. 7 shows the Middlebury test images;
FIG. 8 shows before/after comparison results of applying the method provided by this embodiment to local stereo matching algorithms;
FIG. 9 shows before/after comparison results of applying the method provided by this embodiment to global stereo matching algorithms;
FIG. 10 shows before/after comparison results of applying the method provided by this embodiment to Kinect depth maps.
DETAILED DESCRIPTION

The post-processing method and device for depth maps/disparity maps provided by the embodiments of this application make up for the shortcomings of current depth map/disparity map optimization and propose a new post-processing optimization method that improves the quality both of disparity maps obtained by stereo matching and of depth maps obtained by active scanning.

With the post-processing method and device provided by this application, the problem regions and error points commonly found in depth maps/disparity maps can be corrected well. Compared with existing disparity map post-processing methods, the method can find and solve more problem regions, supports depth maps obtained with a monocular camera, has wider applicability, and can improve the quality of disparity/depth maps to a large extent.

The present application is described in further detail below through specific embodiments with reference to the accompanying drawings.

Referring to FIG. 1, the depth map/disparity map post-processing device provided by this embodiment includes an input module 10, a preprocessing module 20, a hole detection module 30, a hole filling module 40, an irregular edge detection module 50, and an irregular edge repair module 60. The irregular edge detection module 50 includes an edge extraction unit 501, an image blocking unit 502, and an irregular edge detection unit 503.

Referring to FIG. 2, which is a flowchart of the depth map/disparity map post-processing method provided by this embodiment, the method includes the following steps.

Step 1.1: The input module 10 inputs the image to be processed, which may be a depth map or a disparity map.
Step 1.2: When the input image to be processed is a depth map, the preprocessing module 20 first preprocesses the depth map and converts it into unified disparity data. Since depth maps and disparity maps are both grayscale images whose gray levels are inversely proportional to each other, preprocessing a depth map amounts to "inverting" it. Note, however, that a depth map may itself contain many black hole points; simple inversion would turn these "black holes" white and seriously interfere with the subsequent disparity processing, so hole points are not inverted. The preprocessing of the depth map is given by:
$$d(p) = \begin{cases} 255 - D(p), & D(p) \neq 0 \\ D(p), & D(p) = 0 \end{cases}$$
where D(p) represents the gray value of point p in the depth map, and d(p) represents the gray value of point p in the converted disparity data (hereinafter collectively referred to as the disparity map).
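As a minimal sketch of this preprocessing step (assuming an 8-bit depth map whose hole points are the zero-valued pixels; the function name and the 255 gray ceiling are illustrative choices, not prescribed by the application):

```python
import numpy as np

def depth_to_disparity(depth: np.ndarray) -> np.ndarray:
    """Invert an 8-bit depth map into disparity data, leaving hole points untouched."""
    disparity = depth.copy()
    non_hole = depth != 0                        # hole points stay black instead of flipping to white
    disparity[non_hole] = 255 - depth[non_hole]
    return disparity
```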
Step 1.3: The hole detection module 30 performs hole detection on the preprocessed image to be processed.

After preprocessing, all of the information to be processed is disparity data. For the post-processing optimization of the disparity map, this embodiment first handles the black hole points remaining in it. Although traditional disparity map post-processing fills the "zero-value disparity" points of the disparity map obtained by stereo matching, many obvious black hole points still remain; the disparity values of these points may be nonzero, so they were not filled, yet they still belong to the erroneous disparity points.

To detect these hole points, this embodiment first divides all points into "high-confidence points" and "low-confidence points" according to whether a point's disparity value is smaller than a sufficiently small threshold d_λ, where d_λ = λ*d_max, with λ and d_max being the penalty coefficient and the maximum disparity value, respectively; a point below the threshold is considered a low-confidence point, otherwise a high-confidence point. After the points are classified by confidence, a low-confidence point that is clearly smaller than the points in its surrounding neighborhood is calibrated as a "hole point", as follows:
$$\mathrm{Hole}(p) = \begin{cases} 1, & d(q) - d(p) > d_{\lambda} \ \text{for } q \in N(p) \\ 0, & \text{otherwise} \end{cases}$$
where Hole(p) = 1 indicates that point p is a hole point, 0 indicates a non-hole point, and q denotes a point in the neighborhood N(p) around p.

Note that the "disparity value" and the "gray value" of a pixel mentioned in this embodiment can be considered the same concept, because the disparity value of a pixel in the image is represented by its gray value.
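A sketch of this detection, reading "clearly smaller than its neighborhood" as a margin of d_λ against the four axial neighbors and taking λ = 0.05; both readings are assumptions of this illustration:

```python
import numpy as np

def detect_holes(d: np.ndarray, lam: float = 0.05, d_max: int = 255) -> np.ndarray:
    """Flag low-confidence points that sit clearly below all four neighbors as hole points."""
    di = d.astype(np.int32)
    d_lambda = lam * d_max                       # d_lambda = lambda * d_max
    low_conf = di < d_lambda                     # low-confidence points
    pad = np.pad(di, 1, mode="edge")
    up, down = pad[:-2, 1:-1], pad[2:, 1:-1]
    left, right = pad[1:-1, :-2], pad[1:-1, 2:]
    clearly_smaller = ((up - di > d_lambda) & (down - di > d_lambda) &
                       (left - di > d_lambda) & (right - di > d_lambda))
    return low_conf & clearly_smaller
```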
Step 1.4: The hole filling module 40 fills the calibrated hole points. The traditional filling method fills a hole directly with the surrounding points of smaller disparity, i.e., with background points (small disparity), based on the assumption that hole points (here, zero-value points) appear in the background (as shown in FIG. 3(a)). However, when the hole point is at an edge in the image (as shown in FIG. 3(b)), filling with the smaller-valued point cannot repair the hole. This embodiment therefore treats the two cases differently, with the filling method given by:
$$d^{*}(p) = \begin{cases} \min\big(d(p_1), d(p_2)\big), & \mathrm{Hole}(p_1) = \mathrm{Hole}(p_2) = 0 \\ \max\big(d(p_1), d(p_2)\big), & \text{otherwise} \end{cases}$$
where d*(p) represents the disparity value of point p after filling, and p1 and p2 are points in the neighborhood around p (for example, the points above and below p); one direction is taken here as an example. The formula means: when all points in the surrounding neighborhood of the current hole point are detected to be non-hole points, the point with the smallest disparity value in the neighborhood is used to fill the current hole point; when hole points are detected in the surrounding neighborhood of the current hole point, the point with the largest disparity value in the neighborhood is used to fill it.
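A sketch of the fill rule, using only the vertical neighbor pair as the text's one-direction example; border clamping is an implementation choice of this illustration:

```python
import numpy as np

def fill_holes(d: np.ndarray, holes: np.ndarray) -> np.ndarray:
    """Fill calibrated hole points from the neighbors p1 (above) and p2 (below)."""
    out = d.copy()
    for y, x in zip(*np.nonzero(holes)):
        p1 = (max(y - 1, 0), x)
        p2 = (min(y + 1, d.shape[0] - 1), x)
        if not holes[p1] and not holes[p2]:
            out[y, x] = min(d[p1], d[p2])    # clean neighborhood: fill with the background value
        else:
            out[y, x] = max(d[p1], d[p2])    # neighborhood still has holes: fill with the larger value
    return out
```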
Step 1.5: Besides hole points, the other common problem regions in disparity maps and depth maps are irregular regions along the contour edges of objects, mainly protruding regions of "convex disparity" and sunken regions of "concave disparity". As shown in FIG. 4, the boxed regions S1 and S2 are such irregular regions, collectively called irregular edge regions. To detect them, this embodiment combines edge information with block information: the edge extraction unit 501 of the irregular edge detection module 50 extracts the edge information of the disparity map, and the image blocking unit 502 of the irregular edge detection module 50 blocks the original color image to obtain the block information. For edge extraction, this embodiment uses the Canny operator. For the blocking of the color image, this embodiment proposes a new superpixel-based blocking method, "adaptive superpixels". Unlike traditional color blocking methods such as Mean Shift, the superpixel blocking method has lower computational complexity, which strongly affects the speed of the entire post-processing stage, so this embodiment adopts a superpixel-based blocking method; at the same time, conventional superpixel segmentation is inaccurate in some regions because its scale is relatively fixed, so this embodiment adaptively changes the segmentation accuracy.

In this embodiment, performing image blocking on the color image includes: performing superpixel segmentation on the color image; dividing the grayscale range into a preset number of intervals and, for each superpixel, computing a histogram of the intervals into which all of its pixels fall; and determining whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than the first threshold, and if so, further segmenting the current superpixel with the color-block-based method. The details are as follows.

First, after the color image is segmented into superpixels, the accuracy of every resulting superpixel is evaluated. This embodiment judges a superpixel by the proportion of its principal component. The process can be described as: divide the grayscale range into five intervals, (0 to 50), (50 to 80), (80 to 150), (150 to 230), and (230 to 255); for each superpixel, count the interval into which each of its pixels falls and generate a histogram, i.e., a distribution of five columns, one per interval. Define the number of pixels in the interval with the largest histogram value as n_max, and let n_all be the total number of pixels in the whole superpixel; a superpixel whose ratio is smaller than the first threshold ρ, i.e., n_max/n_all < ρ, is recorded as insufficiently segmented. Following this principal-component idea, when the proportion of the principal component in a superpixel is too low, the superpixel is considered not segmented accurately enough, and the Mean Shift blocking method is then used to segment it further. Using superpixels and Mean Shift together improves speed while guaranteeing accuracy.
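A sketch of the principal-component check (the labels are assumed to come from any superpixel algorithm such as SLIC, and ρ = 0.6 is an illustrative value; the interval bounds are the five listed above):

```python
import numpy as np

GRAY_BINS = [0, 50, 80, 150, 230, 256]           # the five gray-level intervals from the text

def under_segmented(gray: np.ndarray, labels: np.ndarray, rho: float = 0.6) -> list:
    """Return labels of superpixels whose dominant interval holds too small a share of
    pixels (n_max / n_all < rho), i.e. candidates for further Mean Shift segmentation."""
    weak = []
    for sp in np.unique(labels):
        pixels = gray[labels == sp]
        hist, _ = np.histogram(pixels, bins=GRAY_BINS)
        if hist.max() / pixels.size < rho:
            weak.append(int(sp))
    return weak
```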
At this point, both the edge information of the disparity map and the block information of the color image are available, and the irregular edge detection unit 503 of the irregular edge detection module 50 uses the two together to detect irregular edge regions. If an edge of the disparity map has no problem, it should coincide with a block boundary of the block map; if it does not, the edge is considered problematic. As shown in FIG. 4, when an edge crosses a block produced by adaptive superpixel segmentation, the edge is judged to be an irregular edge. Then, if the edge belongs to the edge of a convex region, the error region is on the foreground side; if it belongs to the edge of a concave region, the error region is on the background side. To make the search convenient for a computer, this embodiment uses a square window to decide which side of the irregular edge to mark: for a point on the irregular edge, a square window is constructed centered on it; the irregular edge divides the window into two parts of unequal area, the error region lies on the side of the smaller part, and the error region (the irregular edge region) can then be marked. In the special case where the edge is a vertical or horizontal straight line, the two parts of the square window may have equal areas; in this case the size of the square window is simply increased until the two parts differ in area, after which the above rule is applied again.
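A sketch of the square-window test, assuming a boolean edge mask (True on edge pixels), a point far enough from the image border for a full window, and scipy's connected-component labeling to obtain the two parts split by the edge; all three are choices of this illustration:

```python
import numpy as np
from scipy.ndimage import label

def error_side_mask(edge_mask: np.ndarray, y: int, x: int, half: int = 5) -> np.ndarray:
    """Return a window-local mask of the error side: the smaller of the two regions
    into which the irregular edge splits a square window centered on (y, x)."""
    while True:
        win = edge_mask[y - half:y + half + 1, x - half:x + half + 1]
        parts, n = label(~win)                  # connected non-edge regions in the window
        if n >= 2:
            sizes = np.bincount(parts.ravel())[1:]
            a, b = np.argsort(sizes)[-2:]       # the two regions the edge separates
            if sizes[a] != sizes[b]:
                return parts == (a if sizes[a] < sizes[b] else b) + 1
        half += 1                               # equal areas (axis-aligned edge): grow the window
```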
Step 1.6: After the irregular edge regions in the disparity map have been marked, the irregular edge repair module 60 repairs them; this embodiment repairs these error regions with a method based on weighted median filtering. The principle of median filtering is to replace the value of the center point with the median of all points within a certain range, completing the filtering. Weighted median filtering builds on traditional median filtering by treating the points within the range differently, for example assigning different weights according to color or distance. The filter kernel of the weighted median filter used in this embodiment is the guided filter coefficient; the guided filter (see Rhemann C, Hosni A, Bleyer M, et al. Fast cost-volume filtering for visual correspondence and beyond[C]//Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on. IEEE, 2011: 3017-3024.) keeps the filtered image as consistent as possible with the guide image, especially around edge details. To keep the disparity edges close to the original color image while making full use of the binocular information, this embodiment uses the binocular image pair as the guide image. The filter kernel coefficients are computed as:
$$W_{p,q}(I) = \frac{1}{|w|^{2}} \sum_{k:(p,q)\in w_k} \Big(1 + (I_p - u_k)^{\top}\,(\Sigma_k + \varepsilon U)^{-1}\,(I_q - u_k)\Big)$$
where p and q are pixels in the square window, |w| is the total number of pixels in the square window, I is the guide image, ε is the smoothing coefficient, and U is the corresponding identity matrix. When the image to be processed is a disparity map, the binocular image pair is used as the guide image: I_p and I_q are six-dimensional vectors, u is a six-dimensional mean vector, and Σ is a 6*6 cross-correlation matrix. When the image to be processed is a depth map, the monocular image is used as the guide image: I_p and I_q are three-dimensional vectors (R, G, B), u is a three-dimensional mean vector, and Σ is a 3*3 cross-correlation matrix.

The whole weighted median filtering process is shown in FIG. 5. The disparity map to be processed is first split into layers by disparity level; this three-dimensional "disparity volume" is then filtered and finally recomposed into a single disparity map, completing the repair of the irregular edge regions. Finally, simple median filtering removes residual noise.
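A compact sketch of the weighted-median step under two simplifying assumptions of this illustration: the kernel sum over all windows containing (p, q) is approximated by the single window centered on p, and negative kernel weights are clipped to zero so the weighted median stays well defined.

```python
import numpy as np

def guided_weights(guide_win: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """Kernel weights of the window center against every pixel of an (h, w, c) guide window."""
    h, w, c = guide_win.shape
    flat = guide_win.reshape(-1, c).astype(np.float64)
    u = flat.mean(axis=0)                                 # mean vector u
    cov = np.cov(flat, rowvar=False) + eps * np.eye(c)    # Sigma + eps * U
    centered = flat - u
    center = centered[(h // 2) * w + w // 2]              # I_p - u for the center pixel p
    return 1.0 + centered @ np.linalg.solve(cov, center)

def weighted_median(values: np.ndarray, weights: np.ndarray) -> float:
    """Value at which the cumulative (clipped) weight first reaches half the total."""
    wts = np.clip(weights, 0.0, None)
    order = np.argsort(values)
    csum = np.cumsum(wts[order])
    return float(values[order][np.searchsorted(csum, csum[-1] / 2.0)])
```

For each pixel of a marked region, the disparity values of its window would then be replaced by weighted_median(d_win.ravel(), guided_weights(guide_win)), with the guide window taken from the stacked binocular pair (c = 6) for a disparity map or from the color image (c = 3) for a depth map.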
Referring to FIG. 6(a) and FIG. 6(b), which are schematic diagrams before and after irregular edge repair, respectively: in FIG. 6(a) the box-marked regions are convex irregular edge regions and the other marked regions are concave irregular edge regions; after correction with the edge repair method provided by this embodiment, FIG. 6(b) presents neat edges.

After the above steps, the depth map/disparity map that originally contained many problems and error regions has been repaired, and its quality is further improved.

In addition, it should be noted that in some embodiments, when the image contains few hole points, the hole detection and filling steps may be skipped and only the irregular edge region detection and repair steps performed on the image; alternatively, hole detection and filling may be carried out with prior-art methods.
To verify the performance of the post-processing method and device provided by this embodiment, experiments were set up, divided into tests on disparity maps and tests on depth maps. For disparity maps, the Middlebury standard data set is used and verified with different stereo matching algorithms; for depth maps, Kinect, a currently common depth acquisition device, supplies the test images.

Middlebury (http://vision.middlebury.edu/stereo/) provides a professional test platform for the stereo matching field and the corresponding test data. The selected test images are shown in FIG. 7: the first row contains the left images, the second row the right images, and the third row the ground-truth disparity maps; from left to right are the different image pairs. The performance of the post-processing optimization provided by this embodiment is verified by applying it to a variety of stereo matching algorithms, both local and global; the results are shown in FIG. 8 and FIG. 9, respectively. Here nonocc, all, and disc are three evaluation metrics denoting non-occluded regions, all regions, and depth-discontinuity regions, measuring the quality of the disparity map from three different angles; the ordinate is the average error rate (lower is better) and the abscissa lists the different algorithms. Methods 1 to 5 in FIG. 8 are the Box Filter, Guided Filter, Cross Region, Information Permeability, and DTAggr methods; methods 1 and 2 in FIG. 9 are the Graph Cut and Belief Propagation methods. The figures show that the method provided by this embodiment, tested on multiple stereo matching algorithms, yields quality improvements of varying degrees, demonstrating its effectiveness.

In addition, to verify the effect of this embodiment on depth maps, depth maps captured by Kinect were selected for testing; the results are shown in FIG. 10 (for ease of observation, the depth maps have been converted to disparity data). From left to right are the original color image, the unprocessed depth map, and the depth map processed by the present invention. The figure shows that both the hole regions and the irregular edges are improved to a large extent by the method of this embodiment, demonstrating its effectiveness on monocular depth map data.

The different tests on depth maps and disparity maps, and on monocular and binocular data, fully demonstrate the effectiveness and broad applicability of the depth map/disparity map post-processing optimization method provided by this embodiment.

Those skilled in the art can understand that all or part of the steps of the methods in the above embodiments can be completed by instructing the relevant hardware through a program, which can be stored in a computer-readable storage medium; the storage medium may include read-only memory, random access memory, magnetic disk, or optical disc.

The above content is a further detailed description of the present application in combination with specific embodiments, and the specific implementation of the present application shall not be considered limited to these descriptions. For those of ordinary skill in the art to which this application belongs, several simple deductions or substitutions can also be made without departing from the inventive concept of the present application.

Claims (12)

  1. A post-processing method for a depth map/disparity map, comprising:
    inputting an image to be processed, the image to be processed being a depth map or a disparity map;
    performing edge extraction on the image to be processed to obtain edge information, and performing image blocking on a color image corresponding to the image to be processed to obtain block information;
    wherein performing image blocking on the color image comprises: performing superpixel segmentation on the color image; dividing the grayscale range into a preset number of intervals and, for each superpixel, computing a histogram of the intervals into which all of its pixels fall; and determining whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than a first threshold, and if so, further segmenting the current superpixel with a color-block-based method;
    obtaining an irregular edge region of the image to be processed from the edge information and the block information; and
    repairing the irregular edge region.
  2. The method of claim 1, wherein when the image to be processed is a depth map, the method further comprises, after the depth map is input, a step of first inverting the non-hole points in the depth map.
  3. The method of claim 1, wherein obtaining the irregular edge region of the image to be processed from the edge information and the block information comprises: determining an irregular edge from the edge information and the block information, constructing a square window centered on any point on the irregular edge, and taking the part of smaller area into which the irregular edge divides the square window as the irregular edge region.
  4. The method of claim 3, wherein repairing the irregular edge region comprises: repairing the irregular edge region with a weighted median filtering method that uses guided filter coefficients as the filter kernel coefficients.
  5. The method of claim 4, wherein the guided filter coefficients are:
    $$W_{p,q}(I) = \frac{1}{|w|^{2}} \sum_{k:(p,q)\in w_k} \Big(1 + (I_p - u_k)^{\top}\,(\Sigma_k + \varepsilon U)^{-1}\,(I_q - u_k)\Big)$$
    where p and q are pixels in the square window, |w| is the total number of pixels in the square window, I is the guide image, ε is the smoothing coefficient, and U is the corresponding identity matrix;
    when the image to be processed is a disparity map, the binocular image pair is used as the guide image, I_p and I_q are six-dimensional vectors, u is a six-dimensional mean vector, and Σ is a 6*6 cross-correlation matrix;
    when the image to be processed is a depth map, the monocular image is used as the guide image, I_p and I_q are three-dimensional vectors, u is a three-dimensional mean vector, and Σ is a 3*3 cross-correlation matrix.
  6. The method of any one of claims 1 to 5, further comprising, before performing edge extraction on the image to be processed, steps of hole point detection and filling on the image to be processed, the steps comprising:
    defining pixel points whose disparity value is smaller than a second threshold as low-confidence points;
    for a low-confidence point, determining the point to be a hole point when its disparity value is smaller than the disparity values of the points in its surrounding neighborhood by more than the second threshold, and detecting all hole points in the image to be processed in this manner; and
    when all points in the surrounding neighborhood of a current hole point are detected to be non-hole points, filling the current hole point with the point having the smallest disparity value in the surrounding neighborhood; when hole points are detected in the surrounding neighborhood of the current hole point, filling the current hole point with the point having the largest disparity value in the surrounding neighborhood; and filling all hole points in the image to be processed in this manner.
  7. A post-processing device for a depth map/disparity map, comprising:
    an input module configured to input an image to be processed, the image to be processed being a depth map or a disparity map;
    an irregular edge detection module, which includes an edge extraction unit, an image blocking unit, and an irregular edge detection unit; the edge extraction unit is configured to perform edge extraction on the image to be processed to obtain edge information; the image blocking unit is configured to perform image blocking on a color image corresponding to the image to be processed to obtain block information; and the irregular edge detection unit is configured to obtain an irregular edge region of the image to be processed from the edge information and the block information;
    wherein, when the image blocking unit performs image blocking on the color image, the image blocking unit performs superpixel segmentation on the color image; divides the grayscale range into a preset number of intervals and, for each superpixel, computes a histogram of the intervals into which all of its pixels fall; and determines whether the ratio of the number of pixels in the interval with the largest histogram value to the total number of pixels in the current superpixel is smaller than a first threshold, and if so, further segments the current superpixel with a color-block-based method; and
    an irregular edge repair module configured to repair the irregular edge region.
  8. The device of claim 7, further comprising a preprocessing module configured to invert the non-hole points in the depth map input by the input module when the image to be processed is a depth map.
  9. The device of claim 7, wherein, when the irregular edge detection unit obtains the irregular edge region of the image to be processed from the edge information and the block information, the irregular edge detection unit determines an irregular edge from the edge information and the block information, constructs a square window centered on any point on the irregular edge, and takes the part of smaller area into which the irregular edge divides the square window as the irregular edge region.
  10. The device of claim 9, wherein, when the irregular edge repair module repairs the irregular edge region, the irregular edge repair module repairs the irregular edge region with a weighted median filtering method that uses guided filter coefficients as the filter kernel coefficients.
  11. The device of claim 10, wherein the guided filter coefficients are:
    $$W_{p,q}(I) = \frac{1}{|w|^{2}} \sum_{k:(p,q)\in w_k} \Big(1 + (I_p - u_k)^{\top}\,(\Sigma_k + \varepsilon U)^{-1}\,(I_q - u_k)\Big)$$
    where p and q are pixels in the square window, |w| is the total number of pixels in the square window, I is the guide image, ε is the smoothing coefficient, and U is the corresponding identity matrix;
    when the image to be processed is a disparity map, the binocular image pair is used as the guide image, I_p and I_q are six-dimensional vectors, u is a six-dimensional mean vector, and Σ is a 6*6 cross-correlation matrix;
    when the image to be processed is a depth map, the monocular image is used as the guide image, I_p and I_q are three-dimensional vectors, u is a three-dimensional mean vector, and Σ is a 3*3 cross-correlation matrix.
  12. The device of any one of claims 7 to 11, further comprising:
    a hole detection module configured to define pixel points whose disparity value is smaller than a second threshold as low-confidence points; for a low-confidence point, determine the point to be a hole point when its disparity value is smaller than the disparity values of the points in its surrounding neighborhood by more than the second threshold; and detect all hole points in the image to be processed in this manner; and
    a hole filling module configured to, when all points in the surrounding neighborhood of a current hole point are detected to be non-hole points, fill the current hole point with the point having the smallest disparity value in the surrounding neighborhood; when hole points are detected in the surrounding neighborhood of the current hole point, fill the current hole point with the point having the largest disparity value in the surrounding neighborhood; and fill all hole points in the image to be processed in this manner.
PCT/CN2015/078382 2015-05-06 2015-05-06 Depth/disparity map post-processing method and device WO2016176840A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201580000247.6A priority Critical patent/CN105517677B/zh Depth/disparity map post-processing method and device
US15/565,877 US10424075B2 (en) 2015-05-06 2015-05-06 Depth/disparity map post-processing method and device
PCT/CN2015/078382 WO2016176840A1 (zh) 2015-05-06 2015-05-06 深度图/视差图的后处理方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/078382 WO2016176840A1 (zh) 2015-05-06 2015-05-06 深度图/视差图的后处理方法和装置

Publications (1)

Publication Number Publication Date
WO2016176840A1 (zh)

Family

ID=55724960

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/078382 WO2016176840A1 (zh) 2015-05-06 2015-05-06 深度图/视差图的后处理方法和装置

Country Status (3)

Country Link
US (1) US10424075B2 (zh)
CN (1) CN105517677B (zh)
WO (1) WO2016176840A1 (zh)


Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10462445B2 (en) * 2016-07-19 2019-10-29 Fotonation Limited Systems and methods for estimating and refining depth maps
US10839535B2 (en) 2016-07-19 2020-11-17 Fotonation Limited Systems and methods for providing depth map information
CN106231292B (zh) * 2016-09-07 2017-08-25 深圳超多维科技有限公司 一种立体虚拟现实直播方法、装置及设备
CN106341676B (zh) * 2016-09-29 2017-06-16 济南大学 基于超像素的深度图像预处理和深度空洞填充方法
TWI595771B (zh) 2016-10-20 2017-08-11 聚晶半導體股份有限公司 影像深度資訊的優化方法與影像處理裝置
KR102351542B1 (ko) * 2017-06-23 2022-01-17 삼성전자주식회사 시차 보상 기능을 갖는 애플리케이션 프로세서, 및 이를 구비하는 디지털 촬영 장치
CN108537798B (zh) * 2017-11-29 2021-05-18 浙江工业大学 一种快速超像素分割方法
US10645357B2 (en) * 2018-03-01 2020-05-05 Motorola Mobility Llc Selectively applying color to an image
US11501543B2 (en) * 2018-03-26 2022-11-15 Videonetics Technology Private Limited System and method for automatic real-time localization of license plate of vehicle from plurality of images of the vehicle
CN108596040A (zh) * 2018-03-29 2018-09-28 中山大学 一种基于双目视觉的串联通道融合行人检测方法
US10621730B2 (en) * 2018-05-22 2020-04-14 Sony Corporation Missing feet recovery of a human object from an image sequence based on ground plane detection
US10878590B2 (en) * 2018-05-25 2020-12-29 Microsoft Technology Licensing, Llc Fusing disparity proposals in stereo matching
CN109636732B (zh) * 2018-10-24 2023-06-23 深圳先进技术研究院 一种深度图像的空洞修复方法以及图像处理装置
CN109522833A (zh) * 2018-11-06 2019-03-26 深圳市爱培科技术股份有限公司 一种用于道路检测的双目视觉立体匹配方法及系统
CN111383185B (zh) * 2018-12-29 2023-09-22 海信集团有限公司 一种基于稠密视差图的孔洞填充方法及车载设备
CN112541920A (zh) * 2019-09-23 2021-03-23 大连民族大学 基于多通道式的图像超像素目标行人分割方法
CN110675346B (zh) * 2019-09-26 2023-05-30 武汉科技大学 适用于Kinect的图像采集与深度图增强方法及装置
CN110796600B (zh) * 2019-10-29 2023-08-11 Oppo广东移动通信有限公司 一种图像超分重建方法、图像超分重建装置及电子设备
KR20210056540A (ko) 2019-11-11 2021-05-20 삼성전자주식회사 디스패리티 이미지를 생성하는 알고리즘 갱신 방법 및 장치
CN111127535B (zh) * 2019-11-22 2023-06-20 北京华捷艾米科技有限公司 一种手部深度图像的处理方法及装置
CN111243000A (zh) * 2020-01-13 2020-06-05 北京工业大学 多约束代价计算与聚合的立体匹配方法
CN111784703B (zh) * 2020-06-17 2023-07-14 泰康保险集团股份有限公司 一种图像分割方法、装置、电子设备和存储介质
CN113838075B (zh) * 2020-06-23 2024-01-09 南宁富联富桂精密工业有限公司 单目测距方法、装置及计算机可读存储介质
CN112016441B (zh) * 2020-08-26 2023-10-13 大连海事大学 基于Radon变换多特征融合的Sentinel-1图像海岸带养殖池提取方法
US20220076502A1 (en) * 2020-09-08 2022-03-10 XRSpace CO., LTD. Method for adjusting skin tone of avatar and avatar skin tone adjusting system
CN112529773B (zh) * 2020-12-17 2024-02-02 豪威科技(武汉)有限公司 Qpd图像后处理方法及qpd相机
CN113160297A (zh) * 2021-04-25 2021-07-23 Oppo广东移动通信有限公司 图像深度估计方法和装置、电子设备、计算机可读存储介质
CN113516699A (zh) * 2021-05-18 2021-10-19 哈尔滨理工大学 一种基于超像素分割的立体匹配系统
CN113792583A (zh) * 2021-08-03 2021-12-14 北京中科慧眼科技有限公司 基于可行驶区域的障碍物检测方法、系统和智能终端
CN113345015A (zh) * 2021-08-05 2021-09-03 浙江华睿科技股份有限公司 一种包裹位置检测方法、装置、设备及可读存储介质
CN114866758B (zh) * 2022-05-31 2024-02-23 星宸科技股份有限公司 视差图像填补方法以及图像处理装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140003711A1 (en) * 2012-06-29 2014-01-02 Hong Kong Applied Science And Technology Research Institute Co. Ltd. Foreground extraction and depth initialization for multi-view baseline images
CN103942756A (zh) * 2014-03-13 2014-07-23 华中科技大学 一种深度图后处理滤波的方法
CN104537627A (zh) * 2015-01-08 2015-04-22 北京交通大学 一种深度图像的后处理方法
US20150110391A1 (en) * 2013-10-21 2015-04-23 Nokia Corporation Method and apparatus for scene segmentation from focal stack images

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4037659C2 (de) * 1990-11-27 1998-04-09 Dbt Gmbh Rinnenschuß für Kettenkratzförderer, insbesondere für den Bergbaueinsatz
US20010035502A1 (en) * 2000-03-13 2001-11-01 Satoshi Arakawa Radiation image storage panel and cassette
WO2006041812A2 (en) * 2004-10-05 2006-04-20 Threeflow, Inc. Method of producing improved lenticular images
US8029139B2 (en) * 2008-01-29 2011-10-04 Eastman Kodak Company 2D/3D switchable color display apparatus with narrow band emitters
US9380292B2 (en) * 2009-07-31 2016-06-28 3Dmedia Corporation Methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene
US8571314B2 (en) * 2010-09-02 2013-10-29 Samsung Electronics Co., Ltd. Three-dimensional display system with depth map mechanism and method of operation thereof
US9123115B2 (en) * 2010-11-23 2015-09-01 Qualcomm Incorporated Depth estimation based on global motion and optical flow
US9087375B2 (en) * 2011-03-28 2015-07-21 Sony Corporation Image processing device, image processing method, and program
EP2786580B1 (en) * 2011-11-30 2015-12-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Spatio-temporal disparity-map smoothing by joint multilateral filtering
US8989515B2 (en) * 2012-01-12 2015-03-24 Kofax, Inc. Systems and methods for mobile image capture and processing
KR102033618B1 (ko) * 2012-12-18 2019-10-17 엘지디스플레이 주식회사 표시장치와 이의 구동방법
US9519972B2 (en) * 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
JP6136537B2 (ja) * 2013-04-26 2017-05-31 オムロン株式会社 画像処理装置、画像処理方法、画像処理制御プログラム、および記録媒体
JP2015156607A (ja) * 2014-02-21 2015-08-27 ソニー株式会社 画像処理装置、画像処理装置、及び電子機器
US10089740B2 (en) * 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
EP3086552A1 (en) * 2015-04-20 2016-10-26 Thomson Licensing Method and apparatus for image colorization


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2563596B (en) * 2017-06-19 2021-06-09 Shortbite Ltd System and method for modeling a three dimensional space based on a two dimensional image
GB2563596A (en) * 2017-06-19 2018-12-26 Shortbite Ltd System and method for modeling a three dimensional space based on a two dimensional image
CN107993239B (zh) * 2017-12-25 2022-04-12 北京邮电大学 一种计算单目图像的深度次序的方法和装置
CN107993239A (zh) * 2017-12-25 2018-05-04 北京邮电大学 一种计算单目图像的深度次序的方法和装置
US20190362511A1 (en) * 2018-05-23 2019-11-28 Apple Inc. Efficient scene depth map enhancement for low power devices
US10755426B2 (en) * 2018-05-23 2020-08-25 Apple Inc. Efficient scene depth map enhancement for low power devices
CN109684932A (zh) * 2018-11-30 2019-04-26 华南农业大学 一种基于双目视觉的托盘位姿识别方法
CN109684932B (zh) * 2018-11-30 2023-05-23 华南农业大学 一种基于双目视觉的托盘位姿识别方法
CN111223059A (zh) * 2020-01-04 2020-06-02 西安交通大学 一种基于引导滤波器的鲁棒深度图结构重建和去噪方法
CN111223059B (zh) * 2020-01-04 2022-02-11 西安交通大学 一种基于引导滤波器的鲁棒深度图结构重建和去噪方法
CN111292367A (zh) * 2020-02-18 2020-06-16 青岛联合创智科技有限公司 一种基线可变的双目相机深度图生成方法
CN111292367B (zh) * 2020-02-18 2023-04-07 青岛联合创智科技有限公司 一种基线可变的双目相机深度图生成方法
CN111833393A (zh) * 2020-07-05 2020-10-27 桂林电子科技大学 一种基于边缘信息的双目立体匹配方法
CN112053394A (zh) * 2020-07-14 2020-12-08 北京迈格威科技有限公司 图像处理方法、装置、电子设备及存储介质
WO2022160587A1 (zh) * 2021-01-26 2022-08-04 深圳市商汤科技有限公司 深度检测方法、装置、电子设备、存储介质及程序产品

Also Published As

Publication number Publication date
CN105517677B (zh) 2018-10-12
US10424075B2 (en) 2019-09-24
US20180061068A1 (en) 2018-03-01
CN105517677A (zh) 2016-04-20

Similar Documents

Publication Publication Date Title
WO2016176840A1 (zh) Depth/disparity map post-processing method and device
US9171372B2 (en) Depth estimation based on global motion
US9123115B2 (en) Depth estimation based on global motion and optical flow
KR101055411B1 (ko) Method and apparatus for generating stereoscopic images
AU2022203854A1 (en) Methods and systems for large-scale determination of RGBD camera poses
TWI489418B (zh) Parallax Estimation Depth Generation
CN109345502B (zh) Stereo image quality assessment method based on extraction of stereo structure information from disparity maps
KR100793076B1 (ko) Edge-adaptive stereo/multi-view image matching apparatus and method
EP3311361B1 (en) Method and apparatus for determining a depth map for an image
KR100745691B1 (ko) Binocular or multi-view stereo matching apparatus and method using occlusion region detection
KR20110014067A (ko) Method and system for converting stereo content
WO2012020558A1 (ja) Image processing device, image processing method, display device, display method, and program
CN111105452B (zh) High/low-resolution fusion stereo matching method based on binocular vision
CN110120012A (zh) Video stitching method based on synchronized key-frame extraction from binocular cameras
Muddala et al. Depth-based inpainting for disocclusion filling
Chen et al. Depth map generation based on depth from focus
US20230162338A1 (en) Virtual viewpoint synthesis method, electronic apparatus, and computer readable medium
Jorissen et al. Multi-view wide baseline depth estimation robust to sparse input sampling
Akimov et al. Single-image depth map estimation using blur information
Wang et al. Quality assessment for DIBR-synthesized images with local and global distortions
Devernay et al. Focus mismatch detection in stereoscopic content
US20130108149A1 (en) Processing Method for a Pair of Stereo Images
CN112767317B (zh) Grating film detection method for glasses-free 3D displays
Chen et al. Research on safe distance measuring method of front vehicle in foggy environment
CN112991419B (zh) Disparity data generation method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15891092

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15565877

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.04.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15891092

Country of ref document: EP

Kind code of ref document: A1