CN102831582A - Method for enhancing depth image of Microsoft somatosensory device - Google Patents

Method for enhancing depth image of Microsoft somatosensory device

Info

Publication number
CN102831582A
Authority
CN
China
Prior art keywords
depth image
pixels
edge
pixel
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102653728A
Other languages
Chinese (zh)
Other versions
CN102831582B (en)
Inventor
李树涛 (Li Shutao)
陈理 (Chen Li)
卢婷 (Lu Ting)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN201210265372.8A priority Critical patent/CN102831582B/en
Publication of CN102831582A publication Critical patent/CN102831582A/en
Application granted granted Critical
Publication of CN102831582B publication Critical patent/CN102831582B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for enhancing the depth image of a Microsoft somatosensory device (Kinect). The method comprises the following steps: performing edge detection on the color image and the depth image respectively and, taking the two edge images as input, using a region growing method to obtain the region where error pixels are located; removing the depth values of the error pixels; constructing smooth regions around invalid pixels with the region growing method; estimating the depth values of the invalid pixels in the smooth regions by bilateral filtering; and estimating the depth values of the remaining invalid pixels by bilateral filtering to obtain the enhanced depth image. The invention points out for the first time that the mismatch between the edges of the depth image and the edges of the corresponding color image is caused by error pixels, and further proposes a method for detecting these error pixels. The invention effectively fills the holes in Kinect depth images, solves the edge mismatch problem well, and greatly improves the quality of Kinect depth images.


Description

A depth image enhancement method for a Microsoft somatosensory device

Technical Field

The invention relates to a depth image enhancement method, and more specifically to a depth image enhancement method for a Microsoft somatosensory device (Kinect).

Background Art

Kinect is an inexpensive depth image acquisition device released by Microsoft. It simultaneously produces 640×480 color and depth images at 30 fps. Owing to its low cost and real-time operation, Kinect was quickly adopted in interactive venues such as hospitals, libraries, and lecture halls after its release.

Due to limitations of its measurement principle, the Kinect depth image contains holes near object edges and on surfaces with poor reflectivity, and the edges of the depth image often do not match the edges of the corresponding color image.

To solve the hole filling problem, researchers have tried several filling approaches. Traditional methods fall mainly into pixel-based methods and point-cloud-based methods. The idea of the pixel-based methods is to treat the depth image as an ordinary grayscale image and regard the holes as regions to be repaired, so that hole filling becomes a traditional image inpainting problem. These methods mainly use the color information as guidance and estimate the depth values of invalid points with inpainting techniques such as interpolation, fast inpainting, and belief propagation. However, since the edges of the depth image do not match the edges of the color image, the depth information near object edges is unreliable and the estimated depth values are often inaccurate.

The idea of the point-cloud-based methods is to treat the depth image as data describing object surfaces, so that hole filling becomes a surface completion problem. These methods first convert the depth image into point cloud data and reconstruct a 3D surface from the point cloud, and then use properties of the surface structure (such as shape similarity and the relationship between surface normal vectors) to find the patches that best match the holes. These methods alleviate the inaccurate depth estimates of the first class of methods, but do not solve the problem completely. Moreover, they require reconstructing a 3D surface, which adds unnecessary computation for applications that do not need 3D reconstruction.

For the mismatch between depth image edges and color image edges, existing methods mainly exploit the information in the depth image sequence and filter over a relatively long temporal window to obtain stable depth edges. This requires motion estimation between adjacent frames; because of image noise and other factors, the motion estimation of an image sequence is not very accurate, and the computational cost is also high.

Summary of the Invention

To solve the above problems in Kinect depth images, the present invention provides a depth image enhancement method for a Microsoft somatosensory device. The invention can be widely applied in practical Kinect systems as a preprocessing step for Kinect depth data.

The technical solution adopted by the present invention to solve the above technical problems comprises the following steps:

1) Perform edge detection on the Kinect color image and depth image respectively to obtain the color image edges and the depth image edges;

2) Taking the two edge images as input, use the region growing method to obtain the region between the two edge images, i.e., the region where the error pixels are located;

3) Remove the depth values of the error pixels;

4) Construct smooth regions around the invalid pixels with the region growing method;

5) Estimate the depth values of the invalid pixels in the smooth regions with the bilateral filtering method;

6) Estimate the depth values of the remaining invalid pixels with the bilateral filtering method, obtaining a hole-free depth image whose edges are consistent with the color image edges.

In the above depth image enhancement method for a Microsoft somatosensory device, step 1) is:

The color image and the depth image captured from the Kinect are converted to 8-bit grayscale images respectively, and edge detection is then performed on the two 8-bit grayscale images with the Canny edge detection algorithm, where the upper threshold and the lower threshold of the Canny edge detection are 200 and 100, respectively.

In the above depth image enhancement method for a Microsoft somatosensory device, step 2) comprises the following steps:

a) Construct regions along the color image edges and the depth image edges respectively with the region growing method, forming mask image mask1 and mask image mask2.

The method of constructing regions along the depth image edges is: perform region growing with all pixels on the depth image edges as seeds, until a color image edge is met or a specified distance is reached.

The method of constructing regions along the color image edges is: perform region growing with all pixels on the color image edges as seeds, until a depth image edge is met or a specified distance is reached.

b) Perform a morphological dilation operation on the depth edge image to obtain mask image mask3.

c) Combine mask image mask1 and mask image mask2 with a pixel-wise AND operation to obtain mask image mask4, and then combine mask image mask4 and mask image mask3 with a pixel-wise OR operation to obtain mask image mask5, which is the result of the error pixel detection; non-zero pixels represent error pixels.

In the above depth image enhancement method for a Microsoft somatosensory device, step 4) is: perform region growing in a 5×5 window centered on each invalid pixel P_i, and construct a smooth region around it.

In the above depth image enhancement method for a Microsoft somatosensory device, the bilateral filtering method in step 5) is:

$$D_i^E = \frac{\sum_{j \in \Omega,\, D_j \neq 0,\, C_i - C_j < T} G_s(i-j)\, G_c(C_i - C_j)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0,\, C_i - C_j < T} G_s(i-j)\, G_c(C_i - C_j)} \qquad (1)$$

where Ω is the smooth region around P_i; D_i^E is the estimated depth value of pixel P_i; D_j is the depth value of pixel P_j; G_s and G_c are zero-mean Gaussian functions with variances 1.5 and 3, respectively; i - j denotes the Euclidean distance between pixels P_i and P_j; C_i - C_j denotes the Euclidean distance between pixels P_i and P_j in color space; and T is a given threshold whose value is 40. An estimate is adopted only when the number of pixels participating in the calculation reaches 3.

The bilateral filtering is repeated until the smooth region contains no invalid pixels, or until the estimates of its remaining invalid pixels are all rejected.

In the above depth image enhancement method for a Microsoft somatosensory device, the bilateral filtering method applied to the remaining invalid pixels in step 6) is:

$$D_i^E = \frac{\sum_{j \in \Omega,\, D_j \neq 0} G_s(i-j)\, G_c(C_i - C_j)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0} G_s(i-j)\, G_c(C_i - C_j)} \qquad (2)$$

where P_i is an invalid pixel outside the smooth regions, i.e., a remaining invalid pixel; Ω is a 5×5 neighborhood of pixel P_i; D_i^E is the estimated depth value of pixel P_i; D_j is the depth value of pixel P_j; G_s and G_c are zero-mean Gaussian functions with variances 1.5 and 3; i - j is the Euclidean distance between pixels P_i and P_j; C_i - C_j is the Euclidean distance between pixels P_i and P_j in color space; T is a given threshold whose value is 40; and an estimate is adopted only when the number of pixels participating in the calculation reaches 3. The bilateral filtering is repeated until no invalid pixels remain, or until the estimates of the remaining invalid pixels are all rejected.

By adopting the above technical solution, the technical effect of the present invention is as follows: the invention removes error pixels to avoid estimating the depth values of invalid points from wrong depth values, so that the depth estimation is more accurate. In addition, because the error pixels are removed, the depth image edges match the corresponding color image edges. To estimate the depth values of invalid points more precisely, the region growing method is used to construct smooth regions around the invalid points, and the valid pixels within a smooth region are used to estimate the depth values of its invalid points, minimizing the estimation error and yielding a complete, highly accurate depth image. The invention effectively fills the holes in the Kinect depth image, solves the edge mismatch problem well, greatly improves the quality of the Kinect depth image, and is of great significance and practical value for subsequent processing of depth images.

The present invention is further described below with reference to the accompanying drawings.

Brief Description of the Drawings

Fig. 1 is a flowchart of the present invention.

Fig. 2 is a schematic diagram of detecting the region where the error pixels are located in an embodiment of the present invention.

Fig. 3 is a schematic diagram of depth image hole filling in an embodiment of the present invention.

Fig. 4 shows an image enhancement example, where (a) is the image obtained by the bilateral filtering method and (b) is the image obtained by the method of the present invention.

Detailed Description of the Embodiments

Referring to Fig. 1, which is a flowchart of the present invention, the specific implementation steps are as follows:

1. Perform edge detection on the Kinect color image and depth image respectively to obtain the color image edges and the depth image edges.

The color image and the depth image captured from the Kinect are each converted to 8-bit grayscale images, and edge detection is performed on the two 8-bit grayscale images to obtain a color edge image and a depth edge image. The edge detection uses the Canny edge detection algorithm (for implementation details, see John Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-714, 1986). The upper and lower thresholds of the Canny edge detection are 200 and 100, respectively.
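A minimal sketch of this step, assuming OpenCV and NumPy are available; the helper name and the depth-to-8-bit scaling are illustrative choices, not specified by the patent:

```python
import cv2
import numpy as np

def detect_edges(color_bgr, depth_u16):
    """Step 1: Canny edges of the color image and the depth image (thresholds 100/200)."""
    # Convert the color image to an 8-bit grayscale image.
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    # Scale the 16-bit Kinect depth map down to an 8-bit grayscale image.
    depth_8u = cv2.convertScaleAbs(depth_u16, alpha=255.0 / max(int(depth_u16.max()), 1))
    color_edges = cv2.Canny(gray, 100, 200)      # lower threshold 100, upper threshold 200
    depth_edges = cv2.Canny(depth_8u, 100, 200)
    return color_edges, depth_edges
```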

2. Taking the two edge images as input, use the region growing method to obtain the region between the two edge images, i.e., the region where the error pixels are located, as shown in Fig. 2. This specifically comprises:

1) Construct regions along the color image edges and the depth image edges respectively with the region growing method, forming mask image mask1 and mask image mask2.

The method of constructing regions along the depth image edges is: for each pixel on the depth image edges, use this pixel as a seed and grow a region until a color image edge is met or the window boundary is reached. The specific steps are:

Step 1: For each pixel on the depth image edges, if it is not on a color image edge, put it into the set A of pixels to be examined;

Step 2: For each pixel P in A, examine its four neighbors; if an examined point is not on a color image edge and lies within the 9×9 examination window centered on P, put that point into the set A of pixels to be examined, and then remove P from A. Repeat until A is empty. A code sketch of this procedure is given below.
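A minimal sketch of this growth from the depth image edges, assuming the two binary edge maps produced in step 1. The 9×9 limit is interpreted here as a window around the original seed pixel, which matches the "specified distance" wording in the summary; all names are illustrative:

```python
from collections import deque
import numpy as np

def grow_from_depth_edges(depth_edges, color_edges, half_window=4):
    """Grow regions from depth-edge seeds until a color edge is met or the 9x9
    window around the seed is left; returns a binary mask (mask2 in the text)."""
    h, w = depth_edges.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    # Seeds: depth-edge pixels that do not lie on a color edge (Step 1 above).
    seeds = [(int(y), int(x)) for y, x in np.argwhere((depth_edges > 0) & (color_edges == 0))]
    frontier = deque((y, x, y, x) for y, x in seeds)  # (current y, current x, seed y, seed x)
    for y, x in seeds:
        mask[y, x] = 1
    # Step 2 above: grow over the 4-neighborhood.
    while frontier:
        y, x, sy, sx = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and mask[ny, nx] == 0
                    and color_edges[ny, nx] == 0
                    and abs(ny - sy) <= half_window and abs(nx - sx) <= half_window):
                mask[ny, nx] = 1
                frontier.append((ny, nx, sy, sx))
    return mask
```

mask1, the region grown from the color image edges, can be obtained by calling the same function with the two edge maps swapped.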

The method of constructing regions along the color image edges is: perform region growing with all pixels on the color image edges as seeds, until a depth image edge is met or the specified distance is reached.

2) Perform a morphological dilation operation on the depth edge image with a 3×3 structuring element to obtain mask image mask3.

3) Combine mask image mask1 and mask image mask2 with a pixel-wise AND operation to obtain mask image mask4, then combine mask image mask4 and mask image mask3 with a pixel-wise OR operation to obtain mask image mask5, which is the result of the error pixel detection; non-zero pixels represent error pixels. A sketch of steps 2) and 3) is given below.
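A minimal sketch of steps 2) and 3), assuming mask1, mask2 and the depth edge map are 8-bit NumPy arrays; the OpenCV calls are an assumed implementation, not the patent's own code:

```python
import cv2
import numpy as np

def detect_error_pixels(mask1, mask2, depth_edges):
    """Combine the grown regions with the dilated depth edges into the error-pixel mask (mask5)."""
    kernel = np.ones((3, 3), dtype=np.uint8)
    mask3 = (cv2.dilate(depth_edges, kernel) > 0).astype(np.uint8)  # 3x3 dilation of the depth edges
    mask4 = cv2.bitwise_and(mask1, mask2)   # pixels covered by both grown regions
    mask5 = cv2.bitwise_or(mask4, mask3)    # non-zero pixels are error pixels
    return mask5
```

In step 3 of the method, the depth values at the non-zero pixels of mask5 are then removed (i.e., set to zero and treated as invalid).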

3. Remove the depth values of the error pixels.

4. Construct smooth regions around the invalid pixels with the region growing method.

As shown in Fig. 3, for each invalid pixel P_i, region growing is performed in a 5×5 window centered on this pixel, and a smooth region is constructed around it.

5. Estimate the depth values of the invalid pixels in the smooth regions with the bilateral filtering method.

As shown in Fig. 3, the following bilateral filter is used to estimate the depth value of such an invalid pixel:

$$D_i^E = \frac{\sum_{j \in \Omega,\, D_j \neq 0,\, C_i - C_j < T} G_s(i-j)\, G_c(C_i - C_j)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0,\, C_i - C_j < T} G_s(i-j)\, G_c(C_i - C_j)} \qquad (1)$$

where Ω is the smooth region around P_i; D_i^E is the estimated depth value of pixel P_i; D_j is the depth value of pixel P_j; G_s and G_c are zero-mean Gaussian functions with variances 1.5 and 3, respectively; i - j denotes the Euclidean distance between pixels P_i and P_j in image space; C_i - C_j denotes the Euclidean distance between pixels P_i and P_j in color space; and T is a given threshold whose value is 40.
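Since the normalization constants of the two Gaussians cancel between the numerator and the denominator of Eq. (1), they can be taken, under the stated mean and variances, simply as

$$G_s(d) = \exp\!\left(-\frac{d^2}{2 \cdot 1.5}\right), \qquad G_c(d) = \exp\!\left(-\frac{d^2}{2 \cdot 3}\right),$$

where d denotes the corresponding Euclidean distance; this is the reading assumed in the code sketch below.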

In order to estimate the depth values of invalid points accurately, an estimate is adopted only when the number of pixels participating in the calculation reaches 3. To fill larger holes, the bilateral filtering is applied iteratively until the smooth region contains no invalid pixels, or until the estimates of its remaining invalid pixels are all rejected.
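A minimal sketch of one pass of Eq. (1) over a single smooth region, assuming depth is a float NumPy array in which 0 marks an invalid pixel, color is the registered color image, and region lists the (row, column) coordinates of the smooth region (both its valid and its still-invalid pixels); the function and parameter names, and the exact Gaussian form, are an illustrative reading, not the patent's reference code:

```python
import numpy as np

def gaussian(sq_dist, var):
    """Zero-mean Gaussian weight evaluated at a squared distance, with the given variance."""
    return np.exp(-sq_dist / (2.0 * var))

def fill_smooth_region(depth, color, region, var_s=1.5, var_c=3.0, T=40.0, min_support=3):
    """One pass of Eq. (1): estimate each invalid pixel of the region from the region's
    valid pixels. Returns True if at least one estimate was adopted."""
    filled = False
    valid = [(y, x) for y, x in region if depth[y, x] != 0]
    for iy, ix in [(y, x) for y, x in region if depth[y, x] == 0]:
        ci = color[iy, ix].astype(np.float64)
        num = den = 0.0
        support = 0
        for jy, jx in valid:
            dc2 = float(np.sum((ci - color[jy, jx].astype(np.float64)) ** 2))
            if np.sqrt(dc2) >= T:          # color gate: only neighbors with |Ci - Cj| < T
                continue
            ds2 = float((iy - jy) ** 2 + (ix - jx) ** 2)
            w = gaussian(ds2, var_s) * gaussian(dc2, var_c)
            num += w * depth[jy, jx]
            den += w
            support += 1
        if support >= min_support and den > 0:   # adopt only with at least 3 contributing pixels
            depth[iy, ix] = num / den
            filled = True
    return filled
```

Each smooth region is processed by repeated passes of this function until it contains no invalid pixels or a pass adopts no new estimate, matching the loop described above.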

6. Estimate the depth values of the remaining invalid pixels with the bilateral filtering method, obtaining a hole-free depth image whose edges are consistent with the color image edges.

As shown in Fig. 3, the following bilateral filter is used to estimate their depth values:

$$D_i^E = \frac{\sum_{j \in \Omega,\, D_j \neq 0} G_s(i-j)\, G_c(C_i - C_j)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0} G_s(i-j)\, G_c(C_i - C_j)} \qquad (2)$$

where P_i is an invalid pixel outside the smooth regions, i.e., a remaining invalid pixel; Ω is a 5×5 neighborhood of pixel P_i; D_i^E is the estimated depth value of pixel P_i; D_j is the depth value of pixel P_j; G_s and G_c are zero-mean Gaussian functions with variances 1.5 and 3; i - j is the Euclidean distance between pixels P_i and P_j; C_i - C_j is the Euclidean distance between pixels P_i and P_j in color space; T is a given threshold whose value is 40; and an estimate is adopted only when the number of pixels participating in the calculation reaches 3. The bilateral filtering is repeated until no invalid pixels remain, or until the estimates of the remaining invalid pixels are all rejected.
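Step 6 drops the color threshold and works on the 5×5 neighborhood of each remaining invalid pixel; a sketch under the same assumptions as above:

```python
import numpy as np

def fill_remaining_pass(depth, color, var_s=1.5, var_c=3.0, min_support=3):
    """One pass of Eq. (2): estimate each remaining invalid pixel from the valid pixels
    in its 5x5 neighborhood (no color threshold). Returns True if anything was filled."""
    h, w = depth.shape
    out = depth.copy()
    filled = False
    for iy, ix in zip(*np.nonzero(depth == 0)):
        y0, y1 = max(iy - 2, 0), min(iy + 3, h)
        x0, x1 = max(ix - 2, 0), min(ix + 3, w)
        ci = color[iy, ix].astype(np.float64)
        num = den = 0.0
        support = 0
        for jy in range(y0, y1):
            for jx in range(x0, x1):
                if depth[jy, jx] == 0:
                    continue
                dc2 = float(np.sum((ci - color[jy, jx].astype(np.float64)) ** 2))
                ds2 = float((iy - jy) ** 2 + (ix - jx) ** 2)
                weight = np.exp(-ds2 / (2.0 * var_s)) * np.exp(-dc2 / (2.0 * var_c))
                num += weight * depth[jy, jx]
                den += weight
                support += 1
        if support >= min_support and den > 0:
            out[iy, ix] = num / den
            filled = True
    depth[:] = out
    return filled
```

Passes are repeated until no invalid pixels remain or a pass adopts no new estimate.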

The method provided by the present invention is compared with the ordinary bilateral filtering method in Fig. 4. As can be seen from Fig. 4, the method both fills the holes effectively and greatly improves the stability of the edges, so that the edges of the depth image and of the color image match well.

Claims (8)

1. A depth image enhancement method for a Microsoft somatosensory device, comprising the following steps:
1) performing edge detection on the Kinect color image and depth image respectively to obtain color image edges and depth image edges;
2) taking the two edge images as input, using a region growing method to obtain the region between the two edge images, i.e., the region where error pixels are located;
3) removing the depth values of the error pixels;
4) constructing smooth regions around invalid pixels with the region growing method;
5) estimating the depth values of the invalid pixels in the smooth regions with a bilateral filtering method;
6) estimating the depth values of the remaining invalid pixels with the bilateral filtering method, to obtain a hole-free depth image whose edges are consistent with the color image edges.
2. The depth image enhancement method for a Microsoft somatosensory device according to claim 1, wherein step 1) is:
converting the color image and the depth image captured from the Kinect to 8-bit grayscale images respectively, and then performing edge detection on the two 8-bit grayscale images with the Canny edge detection algorithm, wherein the upper threshold and the lower threshold of the Canny edge detection are 200 and 100, respectively.
3. The depth image enhancement method for a Microsoft somatosensory device according to claim 1, characterized in that step 2) is:
a) constructing regions along the color image edges and the depth image edges respectively with the region growing method, forming mask image mask1 and mask image mask2;
b) performing a morphological dilation operation on the depth edge image;
c) combining mask image mask1 and mask image mask2 with a pixel-wise AND operation to obtain mask image mask4, then combining mask image mask4 and mask image mask3 with a pixel-wise OR operation to obtain mask image mask5, which is the result of the error pixel detection, wherein non-zero pixels represent error pixels.
4. The depth image enhancement method for a Microsoft somatosensory device according to claim 3, wherein the method of constructing regions along the depth image edges is: performing region growing with all pixels on the depth image edges as seeds, until a color image edge is met or a specified distance is reached.
5. The depth image enhancement method for a Microsoft somatosensory device according to claim 3, wherein the method of constructing regions along the color image edges is: performing region growing with all pixels on the color image edges as seeds, until a depth image edge is met or a specified distance is reached.
6. The depth image enhancement method for a Microsoft somatosensory device according to claim 1, wherein step 4) is: performing region growing in a 5×5 window centered on each invalid pixel P_i, and constructing a smooth region around it.
7. The depth image enhancement method for a Microsoft somatosensory device according to claim 1, wherein the bilateral filtering method in step 5) is:

$$D_i^E = \frac{\sum_{j \in \Omega,\, D_j \neq 0,\, C_i - C_j < T} G_s(i-j)\, G_c(C_i - C_j)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0,\, C_i - C_j < T} G_s(i-j)\, G_c(C_i - C_j)} \qquad (1)$$

wherein Ω is the smooth region around P_i; D_i^E is the estimated depth value of pixel P_i; D_j is the depth value of pixel P_j; G_s and G_c are zero-mean Gaussian functions with variances 1.5 and 3; i - j is the Euclidean distance between pixels P_i and P_j; C_i - C_j is the Euclidean distance between pixels P_i and P_j in color space; T is a given threshold whose value is 40; an estimate is adopted only when the number of pixels participating in the calculation reaches 3; and the bilateral filtering is repeated until the smooth region contains no invalid pixels, or until the estimates of its remaining invalid pixels are all rejected.
8. The depth image enhancement method for a Microsoft somatosensory device according to claim 1, wherein the bilateral filtering method applied to the remaining invalid pixels in step 6) is:

$$D_i^E = \frac{\sum_{j \in \Omega,\, D_j \neq 0} G_s(i-j)\, G_c(C_i - C_j)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0} G_s(i-j)\, G_c(C_i - C_j)} \qquad (2)$$

wherein P_i is an invalid pixel outside the smooth regions, i.e., a remaining invalid pixel; Ω is a 5×5 neighborhood of pixel P_i; D_i^E is the estimated depth value of pixel P_i; D_j is the depth value of pixel P_j; G_s and G_c are zero-mean Gaussian functions with variances 1.5 and 3; i - j is the Euclidean distance between pixels P_i and P_j; C_i - C_j is the Euclidean distance between pixels P_i and P_j in color space; T is a given threshold whose value is 40; an estimate is adopted only when the number of pixels participating in the calculation reaches 3; and the bilateral filtering is repeated until no invalid pixels remain, or until the estimates of the remaining invalid pixels are all rejected.
CN201210265372.8A 2012-07-27 2012-07-27 Method for enhancing depth image of Microsoft somatosensory device Expired - Fee Related CN102831582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210265372.8A CN102831582B (en) 2012-07-27 2012-07-27 Method for enhancing depth image of Microsoft somatosensory device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210265372.8A CN102831582B (en) 2012-07-27 2012-07-27 Method for enhancing depth image of Microsoft somatosensory device

Publications (2)

Publication Number Publication Date
CN102831582A true CN102831582A (en) 2012-12-19
CN102831582B CN102831582B (en) 2015-08-12

Family

ID=47334699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210265372.8A Expired - Fee Related CN102831582B (en) 2012-07-27 2012-07-27 Method for enhancing depth image of Microsoft somatosensory device

Country Status (1)

Country Link
CN (1) CN102831582B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101938670A (en) * 2009-06-26 2011-01-05 Lg电子株式会社 Image display device and method of operation thereof
JP4670994B2 (en) * 2010-04-05 2011-04-13 オムロン株式会社 Color image processing method and image processing apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KANG XU et al.: "A Method of Hole-filling for the Depth Map Generated by Kinect with Moving Objects Detection", 2012 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 29 June 2012, pages 1-5, XP032222275, DOI: 10.1109/BMSB.2012.6264232 *
MASSIMO et al.: "Efficient Spatio-Temporal Hole Filling Strategy for Kinect Depth Maps", Proceedings of SPIE 8290, Three-Dimensional Image Processing (3DIP) and Applications II, vol. 8290, 9 February 2012, pages 1-10 *
SHI Yanxin: "Medical image segmentation algorithm combining edge detection and region-based methods", Journal of Xi'an Polytechnic University, vol. 24, no. 3, 25 June 2010, pages 320-329 *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036472A (en) * 2013-03-06 2014-09-10 北京三星通信技术研究有限公司 Method and device for enhancing quality of 3D image
CN103198486A (en) * 2013-04-10 2013-07-10 浙江大学 Depth image enhancement method based on anisotropic diffusion
CN103198486B (en) * 2013-04-10 2015-09-09 浙江大学 A kind of depth image enhancement method based on anisotropy parameter
CN103413276B (en) * 2013-08-07 2015-11-25 清华大学深圳研究生院 A kind of degree of depth Enhancement Method based on grain distribution feature
CN103413276A (en) * 2013-08-07 2013-11-27 清华大学深圳研究生院 Depth enhancing method based on texture distribution characteristics
CN103578113A (en) * 2013-11-19 2014-02-12 汕头大学 Method for extracting foreground images
CN103942756A (en) * 2014-03-13 2014-07-23 华中科技大学 Post-processing filtering method for depth map
CN103942756B (en) * 2014-03-13 2017-03-29 华中科技大学 A kind of method of depth map post processing and filtering
CN105096259A (en) * 2014-05-09 2015-11-25 株式会社理光 Depth value restoration method and system for depth image
CN105096259B (en) * 2014-05-09 2018-01-09 株式会社理光 The depth value restoration methods and system of depth image
CN106462745A (en) * 2014-06-19 2017-02-22 高通股份有限公司 Structured light three-dimensional (3D) depth map based on content filtering
CN104320649A (en) * 2014-11-04 2015-01-28 北京邮电大学 Multi-view depth map enhancing system based on total probabilistic models
US9948920B2 (en) 2015-02-27 2018-04-17 Qualcomm Incorporated Systems and methods for error correction in structured light
US10068338B2 (en) 2015-03-12 2018-09-04 Qualcomm Incorporated Active sensing spatial resolution improvement through multiple receivers and code reuse
CN104680496A (en) * 2015-03-17 2015-06-03 山东大学 Kinect deep image remediation method based on colorful image segmentation
CN104680496B (en) * 2015-03-17 2018-01-05 山东大学 A kind of Kinect depth map restorative procedures based on color images
US9530215B2 (en) 2015-03-20 2016-12-27 Qualcomm Incorporated Systems and methods for enhanced depth map retrieval for moving objects using active sensing technology
CN104778673A (en) * 2015-04-23 2015-07-15 上海师范大学 Improved depth image enhancing algorithm based on Gaussian mixed model
US9635339B2 (en) 2015-08-14 2017-04-25 Qualcomm Incorporated Memory-efficient coded light error correction
US9846943B2 (en) 2015-08-31 2017-12-19 Qualcomm Incorporated Code domain power control for structured light
US10223801B2 (en) 2015-08-31 2019-03-05 Qualcomm Incorporated Code domain power control for structured light
CN105678765A (en) * 2016-01-07 2016-06-15 深圳市未来媒体技术研究院 Texture-based depth boundary correction method
US11176728B2 (en) 2016-02-29 2021-11-16 Interdigital Ce Patent Holdings, Sas Adaptive depth-guided non-photorealistic rendering method and device
CN107133956A (en) * 2016-02-29 2017-09-05 汤姆逊许可公司 Adaptive depth guiding feeling of unreality rendering method and equipment
CN106162147B (en) * 2016-07-28 2018-10-16 天津大学 Depth recovery method based on binocular Kinect depth camera systems
CN106162147A (en) * 2016-07-28 2016-11-23 天津大学 Depth recovery method based on binocular Kinect depth camera system
CN106504294A (en) * 2016-10-17 2017-03-15 浙江工业大学 RGBD image vector methods based on diffusion profile
CN106504294B (en) * 2016-10-17 2019-04-26 浙江工业大学 RGBD Image Vectorization Method Based on Diffusion Curve
CN108806121A (en) * 2017-05-04 2018-11-13 上海弘视通信技术有限公司 Active ATM in bank guard method and its device
CN107330893A (en) * 2017-08-23 2017-11-07 无锡北斗星通信息科技有限公司 Canned vehicle image recognition system
CN107516081A (en) * 2017-08-23 2017-12-26 无锡北斗星通信息科技有限公司 A kind of canned vehicle image recognition method
CN107516081B (en) * 2017-08-23 2018-05-18 赵志坚 A kind of canned vehicle image recognition method
CN107358680A (en) * 2017-08-29 2017-11-17 无锡北斗星通信息科技有限公司 A kind of personnel characteristics' deep treatment method
CN107358680B (en) * 2017-08-29 2019-07-23 上海旗沃信息技术有限公司 A kind of personnel characteristics' deep treatment method
CN107993201B (en) * 2017-11-24 2021-11-16 北京理工大学 Depth image enhancement method with retained boundary characteristics
CN107993201A (en) * 2017-11-24 2018-05-04 北京理工大学 A kind of depth image enhancement method for retaining boundary characteristic
CN109961406B (en) * 2017-12-25 2021-06-25 深圳市优必选科技有限公司 Image processing method and device and terminal equipment
CN109961406A (en) * 2017-12-25 2019-07-02 深圳市优必选科技有限公司 Image processing method and device and terminal equipment
CN108629756A (en) * 2018-04-28 2018-10-09 东北大学 A kind of Kinect v2 depth images Null Spot restorative procedure
CN108629756B (en) * 2018-04-28 2021-06-25 东北大学 A Kinectv2 Depth Image Invalid Point Repair Method
CN109598736A (en) * 2018-11-30 2019-04-09 深圳奥比中光科技有限公司 The method for registering and device of depth image and color image
CN110097590A (en) * 2019-04-24 2019-08-06 成都理工大学 Color depth image repair method based on depth adaptive filtering
CN110322411A (en) * 2019-06-27 2019-10-11 Oppo广东移动通信有限公司 Optimization method, terminal and the storage medium of depth image

Also Published As

Publication number Publication date
CN102831582B (en) 2015-08-12

Similar Documents

Publication Publication Date Title
CN102831582B (en) Method for enhancing depth image of Microsoft somatosensory device
Mongus et al. Ground and building extraction from LiDAR data based on differential morphological profiles and locally fitted surfaces
Zheng et al. Robust and accurate coronary artery centerline extraction in CTA by combining model-driven and data-driven approaches
CN108205806B (en) Automatic analysis method for three-dimensional craniofacial structure of cone beam CT image
CN103870845B (en) Novel K value optimization method in point cloud clustering denoising process
Ma et al. Cortexode: Learning cortical surface reconstruction by neural odes
CN101639935A (en) Digital human serial section image segmentation method based on geometric active contour target tracking
CN103353987A (en) Superpixel segmentation method based on fuzzy theory
WO2013091186A1 (en) Multi-parametric 3d magnetic resonance image brain tumor segmentation method
CN102314609B (en) Skeleton extraction method and device for polygonal image
CN104933709A (en) Automatic random-walk CT lung parenchyma image segmentation method based on prior information
CN108460780A (en) A kind of adhesion grain of rice image partition method based on background framework characteristic
CN103455991A (en) Multi-focus image fusion method
CN102760236A (en) Priori shape modeling method based on combined sparse model
CN107862735A (en) A kind of RGBD method for reconstructing three-dimensional scene based on structural information
CN106485203A (en) Carotid ultrasound image Internal-media thickness measuring method and system
CN105574528A (en) Synechia cell image segmenting method based on polyphase mutual exclusion level set
Mahmood et al. Ultrasound liver image enhancement using watershed segmentation method
CN111311515B (en) A Fast Iterative Repair Method for Depth Image with Automatic Detection of Error Regions
CN104036491B (en) Divide based on region and the SAR image segmentation method of the hidden model of Adaptive Polynomial
Wang et al. Bact-3D: A level set segmentation approach for dense multi-layered 3D bacterial biofilms
CN102663728B (en) Dictionary learning-based medical image interactive joint segmentation
CN103886289B (en) Direction self-adaptive method and system for identifying on-water bridge targets
CN104361612B (en) Non-supervision color image segmentation method based on watershed transformation
CN110047085A (en) A kind of accurate restorative procedure in lung film coalescence knuckle areas for lung CT carrying out image threshold segmentation result

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150812

Termination date: 20170727