WO2016179979A1 - Method and apparatus for processing a depth image - Google Patents
Method and apparatus for processing a depth image
- Publication number
- WO2016179979A1 (PCT/CN2015/093867)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- depth
- image
- segmentation
- pixel
- value
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/564—Depth or shape recovery from multiple images from contours
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/257—Colour aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20152—Watershed segmentation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the present invention relates to a method and apparatus for processing a depth image.
- mainstream motion-sensing devices on the market, such as the Kinect and PrimeSense, usually acquire depth images with a depth camera based on the TOF (Time of Flight) 3D imaging principle and transmit them to middleware such as NITE (which provides gesture recognition and motion capture). The middleware computes the skeleton and joint-point information of the human body in software, enabling motion-sensing control.
- a method for processing a depth image includes: acquiring a depth image and a captured image corresponding to the depth image; segmenting the depth image to obtain a plurality of segmentation units; calculating a corresponding depth reference value for each segmentation unit; determining a depth value standard range for each segmentation unit according to the corresponding depth reference value; and adjusting the depth value of each pixel of the depth image into the depth value standard range corresponding to the segmentation unit in which the pixel is located.
- the depth image is segmented to obtain a plurality of segmentation units; a corresponding depth reference value and depth value standard range are calculated for each segmentation unit; the pixels of the depth image are then traversed, and the depth value of each pixel is adjusted into the depth value standard range corresponding to its segmentation unit.
- segmenting the depth image to obtain a plurality of segmentation units includes: performing contour segmentation on the captured image to obtain contour segmentation information; and segmenting the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
- performing contour segmentation on the captured image to obtain contour segmentation information includes: performing grayscale processing on the captured image to obtain a grayscale image; performing edge extraction on the grayscale image to obtain a contour image; performing contour expansion processing on the contour image to obtain a contour expansion image; inverting the contour expansion image to obtain an inverted image; and calculating the contour segmentation information of the inverted image using a watershed algorithm.
- the above embodiment can calculate the contour segmentation information of the real-time captured image in a short time, which helps speed up denoising of the depth image.
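The five-step contour-segmentation pipeline above (grayscale, edge extraction, dilation, inversion, watershed) can be sketched as follows. This is an illustrative simplification, not the patent's implementation: a gradient-magnitude threshold stands in for Canny edge extraction, connected-component labelling stands in for the watershed transform, and the function name and threshold value are assumptions.

```python
import numpy as np

def contour_segment(rgb, edge_thresh=10.0):
    """Sketch of the contour-segmentation pipeline: grayscale -> edge
    extraction -> contour dilation -> inversion -> region labelling.

    Simplifications: a gradient-magnitude threshold replaces Canny, and
    connected-component labelling replaces the watershed transform."""
    # 1. grayscale processing
    gray = rgb.mean(axis=2)
    # 2. edge extraction: threshold the gradient magnitude
    gy, gx = np.gradient(gray)
    edges = (np.hypot(gx, gy) > edge_thresh).astype(np.uint8)
    # 3. contour expansion: 3x3 morphological dilation
    h, w = edges.shape
    pad = np.pad(edges, 1)
    dil = np.zeros_like(edges)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dil = np.maximum(dil, pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    # 4. inversion: the non-contour area becomes the regions to segment
    regions = 1 - dil
    # 5. label each connected region (stand-in for watershed segmentation)
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for y in range(h):
        for x in range(w):
            if regions[y, x] and not labels[y, x]:
                current += 1
                stack = [(y, x)]
                while stack:  # iterative flood fill of one region
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and regions[cy, cx] and not labels[cy, cx]):
                        labels[cy, cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels
```

On a synthetic image with two flat halves separated by an intensity step, the sketch yields two labelled regions separated by a zero-labelled dilated contour band.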
- calculating a corresponding depth reference value for each segmentation unit includes: removing black-point pixels and bright-point pixels in the segmentation unit; counting, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and determining the depth value with the largest pixel count as the depth reference value of the segmentation unit.
- the above embodiment removes the black-point and bright-point pixels, i.e., the noise pixels, in the segmentation unit, thereby improving the accuracy of the calculation result.
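The reference-value calculation just described (drop the noise pixels, take the histogram peak) can be sketched as below. The function names and the use of NumPy are assumptions; the 0/255 noise values and the 0.5-1.3x standard-range rule follow the examples given elsewhere in the text.

```python
import numpy as np

def depth_reference(unit_depths, black=0, bright=255):
    """Depth reference value of one segmentation unit: remove black-point
    and bright-point (noise) pixels, then take the depth value with the
    largest pixel count (the histogram peak)."""
    d = np.asarray(unit_depths, dtype=int).ravel()
    valid = d[(d != black) & (d != bright)]   # drop noise pixels
    counts = np.bincount(valid)               # pixel count per depth value
    return int(np.argmax(counts))             # histogram peak

def standard_range(ref):
    """Depth value standard range: 0.5 to 1.3 times the reference value."""
    return 0.5 * ref, 1.3 * ref
```

For example, a unit with depths `[0, 0, 255, 80, 80, 80, 90]` has reference value 80 and standard range (40.0, 104.0) after the two black pixels and one bright pixel are discarded.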
- the depth value standard range is 0.5 to 1.3 times the depth reference value.
- adjusting the depth value of each pixel into the depth value standard range corresponding to its segmentation unit includes: traversing the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel exceeds the depth value standard range corresponding to its segmentation unit, sequentially reading, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loops; if a read pixel is located in the same segmentation unit and its depth value is within the corresponding depth value standard range, jumping out of the loop and adjusting the depth value of the current pixel to the depth value of that pixel; and if the set number of loops ends without the depth value of the current pixel having been adjusted, adjusting the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
- the depth value transition between the adjusted pixel and the surrounding pixels can be smoothed, thereby improving the quality of the processed depth image.
- the peripheral pixels are located in a cross direction of the current pixel.
- the above embodiment increases the calculation speed, thereby speeding up denoising of the depth image.
- the set number of loops is five; the set direction is the row direction from left to right, the row direction from right to left, the column direction from top to bottom, or the column direction from bottom to top.
- the above embodiment can fill pixels effectively in a short time with a small amount of calculation, which helps speed up denoising of the depth image.
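The adjustment step described above can be sketched as follows, assuming the depth image and the per-pixel segmentation labels are NumPy arrays and the per-unit ranges and reference values are precomputed. The function and parameter names are illustrative; the left-to-right row traversal is one of the set directions the text allows, and reading neighbor depths from the original (unadjusted) image is an interpretation of the text.

```python
import numpy as np

def fill_depth(depth, labels, ranges, refs, max_ring=5):
    """Sketch of the pixel-adjustment step.

    depth  : 2D depth image
    ranges : {unit_label: (low, high)} standard depth range per unit
    refs   : {unit_label: reference} fallback depth reference per unit
    Cross-direction read order (right, down, left, up) and the five-ring
    limit follow the text; other choices are illustrative assumptions."""
    out = depth.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):                 # row direction, left to right
            u = labels[y, x]
            lo, hi = ranges[u]
            if lo <= depth[y, x] <= hi:
                continue                   # depth already in standard range
            adjusted = False
            for r in range(1, max_ring + 1):          # expand outward
                # right, down, left, up at distance r
                for dy, dx in ((0, r), (r, 0), (0, -r), (-r, 0)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if labels[ny, nx] == u and lo <= depth[ny, nx] <= hi:
                        out[y, x] = depth[ny, nx]     # jump out of the loop
                        adjusted = True
                        break
                if adjusted:
                    break
            if not adjusted:
                out[y, x] = refs[u]        # fall back to the reference value
    return out
```

An out-of-range pixel with a valid same-unit neighbor takes that neighbor's depth; an isolated out-of-range pixel falls back to the unit's reference value.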
- an apparatus for processing a depth image includes: an acquisition module configured to acquire a depth image and a captured image corresponding to the depth image; a segmentation module configured to segment the depth image to obtain a plurality of segmentation units; a calculation module configured to calculate a corresponding depth reference value for each segmentation unit; a determination module configured to determine a depth value standard range for each segmentation unit based on the corresponding depth reference value; and an adjustment module configured to adjust the depth value of each pixel of the depth image into the depth value standard range corresponding to the segmentation unit in which the pixel is located.
- the segmentation module is further configured to perform contour segmentation on the captured image to obtain contour segmentation information; and segment the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
- the segmentation module is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour expansion processing on the contour image to obtain a contour expansion image; invert the contour expansion image to obtain an inverted image; and calculate the contour segmentation information of the inverted image using a watershed algorithm.
- the calculation module is further configured to remove black-point pixels and bright-point pixels in the segmentation unit; count, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest pixel count as the depth reference value of the segmentation unit.
- the depth value standard range is 0.5 to 1.3 times the depth reference value.
- the adjustment module is further configured to traverse the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel exceeds the depth value standard range corresponding to its segmentation unit, sequentially read, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loops; if a read pixel is located in the same segmentation unit and its depth value is within the corresponding depth value standard range, jump out of the loop and adjust the depth value of the current pixel to the depth value of that pixel; and if the set number of loops ends without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
- the peripheral pixels are located in a cross direction of the current pixel.
- the set number of loops is five; the set direction is the row direction from left to right, the row direction from right to left, the column direction from top to bottom, or the column direction from bottom to top.
- the depth image can be denoised, and the processed depth image has a clear outline and is easy to recognize. Therefore, the quality of the depth image can be improved.
- FIG. 1 shows a flow chart of a method for processing a depth image according to an embodiment of the present invention
- FIG. 2 is a schematic diagram showing contour segmentation of an image taken in accordance with an embodiment of the present invention
- FIG. 3 illustrates a histogram of depth values and pixel numbers of a segmentation unit according to an embodiment of the present invention
- FIG. 4 is a schematic diagram showing a principle of pixel filling according to an embodiment of the present invention.
- FIG. 5 illustrates a comparison diagram before and after depth image denoising processing according to an embodiment of the present invention
- FIG. 6 shows a block diagram of an apparatus for processing a depth image in accordance with an embodiment of the present invention.
- the depth image may also be acquired from any other suitable imaging device, or be a depth image generated by other methods.
- Embodiments of the present invention provide a method and an apparatus for processing a depth image.
- the present invention will be further described in detail below.
- a method for processing a depth image includes the following steps:
- Step S101 acquiring a depth image, and a captured image corresponding to the depth image
- Step S102 dividing the depth image to obtain a plurality of segmentation units
- Step S103 calculating a corresponding depth reference value for each segmentation unit
- Step S104 determining the depth value standard range of each segmentation unit according to the corresponding depth reference value
- Step S105 adjusting the depth value of each pixel of the depth image into the depth value standard range corresponding to the segmentation unit in which the pixel is located.
- the depth image and the taken image may be acquired from a binocular camera, or obtained by other means, such as a depth image and a taken image that have been obtained by any other suitable solution.
- the binocular camera includes a main camera and a secondary camera.
- there is a certain positional offset between the coordinate systems of the secondary camera and the main camera.
- the control chip of the binocular camera calculates the depth information image of the target body in space, that is, the depth image.
- the depth image acquired by the binocular camera uses a coordinate system that is consistent with the image captured by the main camera.
- the images captured by the two cameras of the binocular camera are typically RGB color images, but may be any other suitable color images.
- step S102 may include: performing contour segmentation on the captured image to obtain contour segmentation information; and segmenting the depth image based on the contour segmentation information to obtain a plurality of segmentation units.
- for the image captured by the main camera, each small contour range generally corresponds to the same object, or the same part of an object, in reality, so the depth values within that contour range should be relatively close. Therefore, the captured image of the main camera can be contour-segmented, and the depth image segmented based on the resulting contour segmentation information to obtain a plurality of segmentation units.
- "adjusting" may include re-filling the pixels of the depth image with depth values.
- the algorithm used for contour segmentation of the captured image of the main camera is not limited.
- for example, a pyramid segmentation algorithm, a mean-shift segmentation algorithm, a watershed segmentation algorithm, or the like may be employed.
- in view of the fact that the motion-sensing device needs to process the real-time depth image in a short period of time (contour segmentation and pixel filling need to be completed within 30 ms), a watershed algorithm may be employed.
- step S102 includes the following sub-steps:
- performing contour expansion processing on the contour image to obtain a contour expansion image (i.e., the fourth image of FIG. 2)
- the watershed algorithm is used to calculate the contour segmentation information of the inverted image (ie, the sixth image of FIG. 2).
- the above method can calculate the contour segmentation information of the real-time image captured by the main camera in a short time, which helps speed up denoising of the depth image.
- the contour segmentation information may include contour position information and coding information of the segmentation units.
- the Canny edge extraction algorithm can be used for edge extraction.
- the Canny edge extraction algorithm is a fast algorithm for image edge detection.
- the result of the Canny edge extraction algorithm is a binary image with white contour lines on a black background.
- step S103 may include the following sub-steps:
- for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value is counted, and the depth value with the largest pixel count is taken as the depth reference value of the segmentation unit.
- black-point pixels are, for example, pixels with a depth value of 0.
- bright-point pixels are, for example, pixels with a depth value of 255.
- these noise pixels have low reliability, and removing them improves the accuracy of the calculation result.
- the depth reference value may be determined by plotting, for the segmentation unit with black-point and bright-point pixels removed, a histogram with depth value on the abscissa and pixel count on the ordinate, and taking the depth value with the largest pixel count (i.e., the peak of the histogram) as the depth reference value depth(i) of the segmentation unit.
- the depth value standard range of each segmentation unit may be determined according to the corresponding depth reference value depth(i). Specifically, for each segmentation unit, the depth value standard range is 0.5 to 1.3 times depth(i).
- through extensive statistical analysis, the inventors of the present application found that the effective depth values of a segmentation unit are usually concentrated in the above range. Filling the pixels of the segmentation unit with this range as the depth value standard range therefore makes the processing result of the depth image more accurate.
- step S105 may include:
- the depth values of peripheral pixels centered on the current pixel in the depth image are sequentially read in the set direction, expanding outward for the set number of loops;
- the loop is jumped out and the depth value of the current pixel is adjusted to the depth value of the currently read pixel;
- the depth value of the current pixel is adjusted to the depth reference value corresponding to the segmentation unit.
- obtaining the processed depth image can be implemented in several ways; for example, the processed depth information may be re-determined point by point according to step S105 and stored in a memory component, such as a memory; after the traversal of all pixels of the depth image ends, the stored information overwrites the original depth image, finally yielding the processed depth image.
- the peripheral pixels may be located in an oblique direction of the current pixel, for example, 30°, 45°, or the like. In one embodiment, the peripheral pixels are located in the cross direction of the current pixel, which increases the computational processing speed, thereby increasing the speed at which the depth image is denoised.
- in consideration of the calculation processing speed of the device, the set number of loops is five.
- the set direction followed by the traversal can be the row direction from left to right, the row direction from right to left, the column direction from top to bottom, the column direction from bottom to top, etc.
- the order in which pixels are read in the cross direction is not limited, and can be read clockwise or counterclockwise.
- the setting direction is the row direction from left to right, and the depth values of the pixels in the cross direction centered on the current pixel are read in the order of right, bottom, left, and top.
- for example, the pixels of the depth image acquired by the binocular camera are accessed sequentially from left to right in the row direction; when the depth value of the currently accessed pixel is within the depth value standard range corresponding to its segmentation unit, the next pixel is accessed.
- when the depth value of the current pixel (e.g., A1 in FIG. 4) is out of range, the depth values of the pixels in the cross direction centered on it are read in the order of right, bottom, left, and top, expanding outward up to five times: B1, B2, B3, and B4 are read in the first loop, C1, C2, C3, and C4 in the second, and so on.
- if a read pixel is located in the same segmentation unit and its depth value is within the corresponding depth value standard range, the loop is exited and the depth value of the current pixel is adjusted to the depth value of that pixel.
- for example, C2 is located outside the segmentation unit, so the condition is not satisfied; C3 is located inside the segmentation unit and its depth value is within the corresponding standard range, so when C3 is read the loop is exited and the depth value of A1 is adjusted to the depth value of C3.
- if no pixel satisfying the above conditions is read by the end of the fifth loop, the depth value of the current pixel is adjusted to the depth reference value depth(i) corresponding to the segmentation unit.
- in this way, the depth value transitions within the entire segmentation unit can be smoothed and holes reduced, thereby helping to improve the quality of the depth image.
- the depth image is segmented to obtain a plurality of segmentation units; a corresponding depth reference value and depth value standard range are calculated for each segmentation unit; the pixels of the depth image are then traversed, and the depth value of each pixel is adjusted into the depth value standard range corresponding to its segmentation unit.
- the depth image can be denoised, and the processed depth image has a clear outline (as shown in FIG. 5), which is easy to recognize. Therefore, the quality of the depth image acquired, for example, by the binocular camera can be improved.
- an embodiment of the present invention also provides an apparatus for processing a depth image.
- the apparatus will be described below with reference to Fig. 6, and the same portions or functions that have been described in the above embodiments are omitted for brevity.
- the apparatus includes:
- the obtaining module 21 is configured to acquire a depth image and a captured image corresponding to the depth image;
- the segmentation module 22 is configured to segment the depth image to obtain a plurality of segmentation units
- the calculating module 23 is configured to calculate a corresponding depth reference value for each segmentation unit
- the determining module 24 is configured to determine a depth value standard range of each of the dividing units according to the corresponding depth reference value
- the adjustment module 25 is configured to adjust the depth value of each pixel of the depth image to a depth value standard range corresponding to the division unit where each pixel is located.
- the segmentation module 22 is further configured to perform contour segmentation on the captured image to obtain contour segmentation information, and to segment the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
- the segmentation module 22 is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour expansion processing on the contour image to obtain a contour expansion image; invert the contour expansion image to obtain an inverted image; and calculate the contour segmentation information of the inverted image using a watershed algorithm.
- the calculation module 23 is further configured to remove black-point pixels and bright-point pixels in the segmentation unit; count, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest pixel count as the depth reference value of the segmentation unit.
- for each segmentation unit, the depth value standard range is 0.5 to 1.3 times the depth reference value.
- the adjustment module 25 is further configured to traverse the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel exceeds the depth value standard range corresponding to its segmentation unit, sequentially read, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loops; if a read pixel is located in the same segmentation unit and its depth value is within the corresponding depth value standard range, jump out of the loop and adjust the depth value of the current pixel to the depth value of that pixel; and if the set number of loops ends without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
- the peripheral pixels are located in the cross direction of the current pixel.
- the set number of loops is five; the set direction is the row direction from left to right, the row direction from right to left, the column direction from top to bottom, or the column direction from bottom to top.
- the depth image can be denoised, and the processed depth image contour is clear and easy to recognize. Therefore, the quality of the depth image acquired, for example, by the binocular camera can be improved.
Abstract
Description
Claims (16)
- A method for processing a depth image, comprising: acquiring a depth image and a captured image corresponding to the depth image; segmenting the depth image to obtain a plurality of segmentation units; calculating a corresponding depth reference value for each segmentation unit; determining a depth value standard range for each segmentation unit according to the corresponding depth reference value; and adjusting the depth value of each pixel of the depth image into the depth value standard range corresponding to the segmentation unit in which the pixel is located.
- The method of claim 1, wherein segmenting the depth image to obtain a plurality of segmentation units comprises: performing contour segmentation on the captured image to obtain contour segmentation information; and segmenting the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
- The method of claim 2, wherein performing contour segmentation on the captured image to obtain contour segmentation information comprises: performing grayscale processing on the captured image to obtain a grayscale image; performing edge extraction on the grayscale image to obtain a contour image; performing contour expansion processing on the contour image to obtain a contour expansion image; inverting the contour expansion image to obtain an inverted image; and calculating the contour segmentation information of the inverted image using a watershed algorithm.
- The method of any one of claims 1-3, wherein calculating a corresponding depth reference value for each segmentation unit comprises: removing black-point pixels and bright-point pixels in the segmentation unit; counting, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and determining the depth value with the largest pixel count as the depth reference value of the segmentation unit.
- The method of any one of claims 1-4, wherein, for each segmentation unit, the depth value standard range is 0.5 to 1.3 times the depth reference value.
- The method of any one of claims 1-5, wherein adjusting the depth value of each pixel into the depth value standard range corresponding to the segmentation unit in which the pixel is located comprises: traversing the pixels of the depth image in a set direction, and during the traversal: if the depth value of the current pixel exceeds the depth value standard range corresponding to its segmentation unit, sequentially reading, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loops; if a read pixel is located in the segmentation unit and its depth value is within the depth value standard range corresponding to that segmentation unit, jumping out of the loop and adjusting the depth value of the current pixel to the depth value of the read pixel; and if the set number of loops ends without the depth value of the current pixel having been adjusted, adjusting the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
- The method of claim 6, wherein the peripheral pixels are located in the cross direction of the current pixel.
- The method of claim 6 or 7, wherein the set number of loops is five; and the set direction is the row direction from left to right, the row direction from right to left, the column direction from top to bottom, or the column direction from bottom to top.
- An apparatus for processing a depth image, comprising: an acquisition module configured to acquire a depth image and a captured image corresponding to the depth image; a segmentation module configured to segment the depth image to obtain a plurality of segmentation units; a calculation module configured to calculate a corresponding depth reference value for each segmentation unit; a determination module configured to determine a depth value standard range for each segmentation unit according to the corresponding depth reference value; and an adjustment module configured to adjust the depth value of each pixel of the depth image into the depth value standard range corresponding to the segmentation unit in which the pixel is located.
- The apparatus of claim 9, wherein the segmentation module is further configured to perform contour segmentation on the captured image to obtain contour segmentation information, and to segment the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
- The apparatus of claim 10, wherein the segmentation module is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour expansion processing on the contour image to obtain a contour expansion image; invert the contour expansion image to obtain an inverted image; and calculate the contour segmentation information of the inverted image using a watershed algorithm.
- The apparatus of any one of claims 9-11, wherein the calculation module is further configured to remove black-point pixels and bright-point pixels in the segmentation unit; count, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest pixel count as the depth reference value of the segmentation unit.
- The apparatus of any one of claims 9-12, wherein, for each segmentation unit, the depth value standard range is 0.5 to 1.3 times the depth reference value.
- The apparatus of any one of claims 9-13, wherein the adjustment module is further configured to traverse the pixels of the depth image in a set direction, and during the traversal: if the depth value of the current pixel exceeds the depth value standard range corresponding to its segmentation unit, sequentially read, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loops; if a read pixel is located in the segmentation unit and its depth value is within the depth value standard range corresponding to that segmentation unit, jump out of the loop and adjust the depth value of the current pixel to the depth value of the read pixel; and if the set number of loops ends without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
- The apparatus of claim 14, wherein the peripheral pixels are located in the cross direction of the current pixel.
- The apparatus of claim 14 or 15, wherein the set number of loops is five; and the set direction is the row direction from left to right, the row direction from right to left, the column direction from top to bottom, or the column direction from bottom to top.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/026,870 US9811921B2 (en) | 2015-05-11 | 2015-11-05 | Apparatus and method for processing a depth image |
JP2017500972A JP6577565B2 (ja) | 2015-05-11 | 2015-11-05 | 深度画像処理用の方法及び装置 |
KR1020167031512A KR101881243B1 (ko) | 2015-05-11 | 2015-11-05 | 깊이 이미지를 프로세싱하기 위한 방법 및 장치 |
EP15839095.5A EP3296953B1 (en) | 2015-05-11 | 2015-11-05 | Method and device for processing depth images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510236717.0 | 2015-05-11 | ||
CN201510236717.0A CN104835164B (zh) | 2015-05-11 | 2015-05-11 | 一种双目摄像头深度图像的处理方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016179979A1 true WO2016179979A1 (zh) | 2016-11-17 |
Family
ID=53813029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/093867 WO2016179979A1 (zh) | 2015-05-11 | 2015-11-05 | 一种用于处理深度图像的方法及装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US9811921B2 (zh) |
EP (1) | EP3296953B1 (zh) |
JP (1) | JP6577565B2 (zh) |
KR (1) | KR101881243B1 (zh) |
CN (1) | CN104835164B (zh) |
WO (1) | WO2016179979A1 (zh) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104835164B (zh) * | 2015-05-11 | 2017-07-28 | BOE Technology Group Co., Ltd. | Method and device for processing depth images of a binocular camera |
CN105825494B (zh) * | 2015-08-31 | 2019-01-29 | Vivo Mobile Communication Co., Ltd. | Image processing method and mobile terminal |
US10558855B2 (en) * | 2016-08-17 | 2020-02-11 | Technologies Holdings Corp. | Vision system with teat detection |
CN106506969B (zh) | 2016-11-29 | 2019-07-19 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Camera module, portrait tracking method using the same, and electronic device |
CN106713890A (zh) * | 2016-12-09 | 2017-05-24 | Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. | Image processing method and device |
CN106919928A (zh) * | 2017-03-08 | 2017-07-04 | BOE Technology Group Co., Ltd. | Gesture recognition system and method, and display device |
CN107891813B (zh) * | 2017-10-31 | 2020-04-24 | Beijing New Energy Automobile Co., Ltd. | Vehicle control method and device, automobile, and computer-readable storage medium |
CN108195305B (zh) * | 2018-02-09 | 2020-03-31 | BOE Technology Group Co., Ltd. | Binocular detection system and depth detection method thereof |
KR102158034B1 (ko) * | 2018-05-24 | 2020-09-22 | Top Engineering Co., Ltd. | Substrate cutting apparatus and substrate cutting method |
CN108765482B (zh) * | 2018-05-31 | 2021-07-13 | Changchun Boli Electronic Technology Co., Ltd. | Low-power real-time binocular camera based on hardware acceleration, and method of use |
TWI670467B (zh) * | 2018-10-15 | 2019-09-01 | LIPS Corporation | Machining method using depth image detection |
CN110390681B (zh) * | 2019-07-17 | 2023-04-11 | Hypersen Technologies (Shenzhen) Co., Ltd. | Method and device for fast extraction of object contours from depth maps based on a depth camera |
CN111242137B (zh) * | 2020-01-13 | 2023-05-26 | Jiangxi University of Science and Technology | Salt-and-pepper noise filtering method and device based on morphological component analysis |
CN112085675B (zh) * | 2020-08-31 | 2023-07-04 | Sichuan University | Depth image denoising method, foreground segmentation method, and human motion monitoring method |
KR102599855B1 (ko) * | 2020-12-22 | 2023-11-07 | POSCO DX Co., Ltd. | Tag attachment system and method for steel products |
CN112819878B (zh) * | 2021-01-28 | 2023-01-31 | Beijing SenseTime Technology Development Co., Ltd. | Depth detection method and apparatus, computer device, and storage medium |
CN114956287B (zh) * | 2022-06-14 | 2023-08-29 | Xi'an Qingyuan Yingke Environmental Protection Technology Co., Ltd. | Sewage phosphorus removal method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101465004A (zh) * | 2007-12-21 | 2009-06-24 | Samsung Electronics Co., Ltd. | Method, medium, and apparatus representing adaptive information of a 3D depth image |
WO2010138434A2 (en) * | 2009-05-29 | 2010-12-02 | Microsoft Corporation | Environment and/or target segmentation |
CN104835164A (zh) * | 2015-05-11 | 2015-08-12 | BOE Technology Group Co., Ltd. | Method and device for processing depth images of a binocular camera |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3850541B2 (ja) * | 1998-02-18 | 2006-11-29 | Fuji Heavy Industries Ltd. | Altitude measuring device |
JP2007048006A (ja) * | 2005-08-09 | 2007-02-22 | Olympus Corp | Image processing apparatus and image processing program |
KR101175196B1 (ko) * | 2010-12-28 | 2012-08-20 | Kangwon National University Industry-Academic Cooperation Foundation | Method and apparatus for generating stereoscopic images in a mobile environment |
KR101978172B1 (ko) | 2011-11-29 | 2019-05-15 | Samsung Electronics Co., Ltd. | Method and apparatus for converting a depth image to high resolution |
US8873835B2 (en) * | 2011-11-30 | 2014-10-28 | Adobe Systems Incorporated | Methods and apparatus for correcting disparity maps using statistical analysis on local neighborhoods |
JP5951367B2 (ja) * | 2012-01-17 | 2016-07-13 | Sharp Corporation | Imaging device, captured-image processing system, program, and recording medium |
CN102750694B (zh) * | 2012-06-04 | 2014-09-10 | Tsinghua University | Binocular video depth map computation method based on a locally optimal belief propagation algorithm |
CN103546736B (zh) * | 2012-07-12 | 2016-12-28 | Samsung Electronics Co., Ltd. | Image processing apparatus and method |
CN103136775A (zh) * | 2013-03-19 | 2013-06-05 | Wuhan University | Kinect depth map hole-filling method based on locally constrained reconstruction |
CN103455984B (zh) * | 2013-09-02 | 2016-08-31 | Graduate School at Shenzhen, Tsinghua University | Kinect depth image acquisition method and device |
CN104574342B (zh) * | 2013-10-14 | 2017-06-23 | Ricoh Co., Ltd. | Noise identification method and noise identification device for parallax depth images |
US9630318B2 (en) * | 2014-10-02 | 2017-04-25 | Brain Corporation | Feature detection apparatus and methods for training of robotic navigation |
TWI558167B (zh) * | 2014-12-30 | 2016-11-11 | AU Optronics Corp. | Stereoscopic image display system and display method |
2015
- 2015-05-11 CN CN201510236717.0A patent/CN104835164B/zh active Active
- 2015-11-05 KR KR1020167031512A patent/KR101881243B1/ko active IP Right Grant
- 2015-11-05 WO PCT/CN2015/093867 patent/WO2016179979A1/zh active Application Filing
- 2015-11-05 JP JP2017500972A patent/JP6577565B2/ja active Active
- 2015-11-05 EP EP15839095.5A patent/EP3296953B1/en active Active
- 2015-11-05 US US15/026,870 patent/US9811921B2/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP3296953A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3296953B1 (en) | 2021-09-15 |
EP3296953A4 (en) | 2018-11-21 |
CN104835164A (zh) | 2015-08-12 |
EP3296953A1 (en) | 2018-03-21 |
JP6577565B2 (ja) | 2019-09-18 |
US9811921B2 (en) | 2017-11-07 |
US20170132803A1 (en) | 2017-05-11 |
KR101881243B1 (ko) | 2018-07-23 |
CN104835164B (zh) | 2017-07-28 |
KR20160148577A (ko) | 2016-12-26 |
JP2018515743A (ja) | 2018-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016179979A1 (zh) | Method and device for processing depth images | |
US10573018B2 (en) | Three dimensional scene reconstruction based on contextual analysis | |
TWI489418B (zh) | Parallax Estimation Depth Generation | |
AU2022203854A1 (en) | Methods and systems for large-scale determination of RGBD camera poses | |
CN106023303B (zh) | Method for increasing the density of a 3D-reconstruction point cloud based on contour validity | |
WO2016176840A1 (zh) | Post-processing method and device for depth maps/disparity maps | |
WO2016034059A1 (zh) | Target object tracking method based on color-structure features | |
US20130004079A1 (en) | Image processing apparatus, image processing method, and program thereof | |
KR20170008638A (ko) | Three-dimensional content generation apparatus and three-dimensional content generation method thereof | |
US20210185274A1 (en) | Video decoding method and camera | |
WO2020037881A1 (zh) | Motion trajectory drawing method, apparatus and device, and storage medium | |
WO2020119467A1 (zh) | Method and device for generating high-precision dense depth images | |
CN111080776B (zh) | Processing method and system for acquiring and reproducing three-dimensional human-motion data | |
US20150131853A1 (en) | Stereo matching system and method for generating disparity map using same | |
TW201432620A (zh) | Image processor with edge-selection functionality | |
US10593044B2 (en) | Information processing apparatus, information processing method, and storage medium | |
KR20140134090A (ko) | Apparatus and method for processing depth images using the relative angle between an image sensor and a target object | |
US9947106B2 (en) | Method and electronic device for object tracking in a light-field capture | |
JP2013185905A (ja) | Information processing apparatus and method, and program | |
US20200202495A1 (en) | Apparatus and method for dynamically adjusting depth resolution | |
US10397540B2 (en) | Method for obtaining and merging multi-resolution data | |
CN110009683B (zh) | Real-time on-plane object detection method based on MaskRCNN | |
JP2016156702A (ja) | Imaging apparatus and imaging method | |
US9721151B2 (en) | Method and apparatus for detecting interfacing region in depth image | |
US20210042947A1 (en) | Method and apparatus for processing data, electronic device and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| WWE | WIPO information: entry into national phase | Ref document number: 15026870; Country of ref document: US
| ENP | Entry into the national phase | Ref document number: 20167031512; Country of ref document: KR; Kind code of ref document: A
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 15839095; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2017500972; Country of ref document: JP; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE
| WWE | WIPO information: entry into national phase | Ref document number: 2015839095; Country of ref document: EP