WO2016179979A1 - Method and apparatus for processing a depth image - Google Patents

Method and apparatus for processing a depth image

Info

Publication number
WO2016179979A1
WO2016179979A1 (PCT/CN2015/093867)
Authority
WO
WIPO (PCT)
Prior art keywords
depth
image
segmentation
pixel
value
Prior art date
Application number
PCT/CN2015/093867
Other languages
English (en)
French (fr)
Inventor
赵骥伯
赵星星
靳小利
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to US15/026,870 priority Critical patent/US9811921B2/en
Priority to JP2017500972A priority patent/JP6577565B2/ja
Priority to KR1020167031512A priority patent/KR101881243B1/ko
Priority to EP15839095.5A priority patent/EP3296953B1/en
Publication of WO2016179979A1 publication Critical patent/WO2016179979A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/564Depth or shape recovery from multiple images from contours
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20152Watershed segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • the present invention relates to a method and apparatus for processing a depth image.
  • at present, the mainstream somatosensory devices on the market, such as Kinect and PrimeSense, usually capture images with a depth camera based on the TOF (Time of Flight) 3D imaging principle, and then transmit the acquired depth image to middleware such as NITE (which implements functions such as gesture recognition and motion capture). The skeleton and joint-point information of the human body is obtained through software computation, so that somatosensory control can be performed.
  • a method for processing a depth image includes: acquiring a depth image and a captured image corresponding to the depth image; segmenting the depth image to obtain a plurality of segmentation units; calculating a corresponding depth reference value for each segmentation unit; determining a standard depth value range for each segmentation unit according to the corresponding depth reference value; and adjusting the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located.
  • the depth image is segmented to obtain a plurality of segmentation units; a corresponding depth reference value and a standard depth value range are calculated for each segmentation unit; then the pixels of the depth image are traversed, and the depth value of each pixel is adjusted into the standard depth value range corresponding to its segmentation unit.
  • segmenting the depth image to obtain a plurality of segmentation units includes: performing contour segmentation on the captured image to obtain contour segmentation information; and segmenting the depth image according to the contour segmentation information to obtain the plurality of segmentation units.
  • performing contour segmentation on the captured image to obtain contour segmentation information includes: performing grayscale processing on the captured image to obtain a grayscale image; performing edge extraction on the grayscale image to obtain a contour image; performing contour expansion processing on the contour image to obtain a contour expansion image; inverting the contour expansion image to obtain an inverted image; and calculating the contour segmentation information of the inverted image by using a watershed algorithm.
  • the above embodiment can calculate the contour segmentation information of a real-time captured image in a short time, which helps speed up denoising of the depth image.
  • calculating a corresponding depth reference value for each segmentation unit includes: removing black point pixels and bright point pixels from the segmentation unit; counting, for the segmentation unit with black point pixels and bright point pixels removed, the number of pixels at each depth value; and determining the depth value with the largest number of pixels as the depth reference value of the segmentation unit.
  • the above embodiment removes the black point pixels and bright point pixels, i.e., the noise pixels, from the segmentation unit, thereby improving the accuracy of the calculation result.
  • the depth value standard ranges from 0.5 to 1.3 times the depth reference value.
  • adjusting the depth value of each pixel into the standard depth value range corresponding to the segmentation unit in which the pixel is located includes: traversing the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel is outside the standard depth value range corresponding to its segmentation unit, reading, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loop iterations; if a currently read pixel is located within the segmentation unit and its depth value is within the standard depth value range corresponding to the segmentation unit, exiting the loop and adjusting the depth value of the current pixel to the depth value of the currently read pixel; and if the set number of loop iterations ends without the depth value of the current pixel having been adjusted, adjusting the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
  • the depth value transition between the adjusted pixel and the surrounding pixels can be smoothed, thereby improving the quality of the processed depth image.
  • the peripheral pixels are located in a cross direction of the current pixel.
  • the above embodiment can increase the calculation processing speed, thereby improving the speed of denoising processing of the depth image.
  • the set number of loop iterations is five; the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.
  • the above embodiment can effectively fill pixels in a short time with a small amount of computation, which helps speed up denoising of the depth image.
  • an apparatus for processing a depth image includes: an acquisition module configured to acquire a depth image and a captured image corresponding to the depth image; a segmentation module configured to segment the depth image to obtain a plurality of segmentation units; a calculation module configured to calculate a corresponding depth reference value for each segmentation unit; a determination module configured to determine a standard depth value range for each segmentation unit based on the corresponding depth reference value; and an adjustment module configured to adjust the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located.
  • the segmentation module is further configured to perform contour segmentation on the captured image to obtain contour segmentation information; and segment the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
  • the segmentation module is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour expansion processing on the contour image to obtain a contour expansion image; invert the contour expansion image to obtain an inverted image; and calculate the contour segmentation information of the inverted image by using a watershed algorithm.
  • the calculation module is further configured to remove black point pixels and bright point pixels from the segmentation unit; count, for the segmentation unit with black point pixels and bright point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest number of pixels as the depth reference value of the segmentation unit.
  • the depth value standard ranges from 0.5 to 1.3 times the depth reference value.
  • the adjustment module is further configured to traverse the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel is outside the standard depth value range corresponding to its segmentation unit, read, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loop iterations; if a currently read pixel is located within the segmentation unit and its depth value is within the standard depth value range corresponding to the segmentation unit, exit the loop and adjust the depth value of the current pixel to the depth value of the currently read pixel; and if the set number of loop iterations ends without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
  • the peripheral pixels are located in a cross direction of the current pixel.
  • the set number of loop iterations is five; the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.
  • the depth image can be denoised, and the processed depth image has a clear outline and is easy to recognize. Therefore, the quality of the depth image can be improved.
  • FIG. 1 shows a flow chart of a method for processing a depth image according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of contour segmentation of a captured image according to an embodiment of the present invention
  • FIG. 3 illustrates a histogram of depth values and pixel numbers of a segmentation unit according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram showing a principle of pixel filling according to an embodiment of the present invention.
  • FIG. 5 illustrates a comparison diagram before and after depth image denoising processing according to an embodiment of the present invention
  • FIG. 6 shows a block diagram of an apparatus for processing a depth image in accordance with an embodiment of the present invention.
  • the depth image may be acquired from any other suitable imaging device, or a depth image generated by other methods, and the like.
  • Embodiments of the present invention provide a method and an apparatus for processing a depth image.
  • the present invention will be further described in detail below.
  • a method for processing a depth image includes the following steps:
  • Step S101 acquiring a depth image, and a captured image corresponding to the depth image
  • Step S102 dividing the depth image to obtain a plurality of segmentation units
  • Step S103 calculating a corresponding depth reference value for each segmentation unit
  • Step S104 determining a standard depth value range for each segmentation unit according to the corresponding depth reference value.
  • Step S105 the depth value of each pixel of the depth image is adjusted to a depth value standard range corresponding to the dividing unit where each pixel is located.
  • the depth image and the taken image may be acquired from a binocular camera, or obtained by other means, such as a depth image and a taken image that have been obtained by any other suitable solution.
  • the binocular camera includes a main camera and a secondary camera.
  • the coordinate system of the secondary camera and the main camera has a certain positional deviation.
  • the control chip of the binocular camera calculates the depth information image of the target body in space, that is, the depth image.
  • the depth image acquired by the binocular camera uses a coordinate system that is consistent with the image captured by the main camera.
  • the images taken by the two cameras of the binocular camera are typically RGB color images, and can be any other suitable color image.
  • step S102 may include:
  • the depth image is segmented based on the contour segmentation information to obtain a plurality of segmentation units.
  • For the captured image of the main camera, if each small contour region corresponds in reality to the same object or the same part of an object, the depth values within that contour region should be relatively close. Therefore, the captured image of the main camera can be subjected to contour segmentation, and the depth image can be segmented based on the obtained contour segmentation information to obtain a plurality of segmentation units.
  • "adjusting" may include re-filling the pixels of the depth image with depth values.
  • the specific algorithm used for contour segmentation of the captured image of the main camera is not limited.
  • for example, a pyramid segmentation algorithm, a mean shift segmentation algorithm, a watershed segmentation algorithm, and the like may be employed.
  • a watershed algorithm may be employed in view of the fact that the somatosensory device needs to process the real-time depth image in a short period of time (contour segmentation and pixel padding need to be completed within 30 ms).
  • step S102 includes the following sub-steps:
  • grayscale processing is performed on the captured image to obtain a grayscale image (i.e., the second image of FIG. 2)
  • edge extraction is performed on the grayscale image to obtain a contour image (i.e., the third image of FIG. 2)
  • contour expansion processing is performed on the contour image to obtain a contour expansion image (i.e., the fourth image of FIG. 2)
  • the contour expansion image is inverted to obtain an inverted image (i.e., the fifth image of FIG. 2)
  • the watershed algorithm is used to calculate the contour segmentation information of the inverted image (i.e., the sixth image of FIG. 2).
  • the above method can calculate the contour segmentation information of the image captured in real time by the main camera in a short time, which helps speed up denoising of the depth image.
  • the contour splitting information may include contour position information and encoding information of the splitting unit.
  • the canny edge extraction algorithm can be used.
  • the canny edge extraction algorithm is a fast algorithm for image edge detection.
  • the result obtained by the canny edge extraction algorithm is a binary image of a white outline line and a black background.
  • step S103 may include the following sub-steps:
  • for the segmentation unit with black point pixels and bright point pixels removed, the number of pixels at each depth value is counted, and the depth value with the largest number of pixels is taken as the depth reference value of the segmentation unit.
  • removing the black point pixels (for example, depth value 0) and bright point pixels (for example, depth value 255) removes the noise pixels; the reliability of these pixels is not high, and the accuracy of the calculation result can be improved after their removal.
  • the depth reference value may be determined by plotting, for the segmentation unit with black point pixels and bright point pixels removed, a histogram with the depth value as the abscissa and the number of pixels at each depth value as the ordinate, and taking the depth value with the largest number of pixels (i.e., the peak of the histogram) as the depth reference value depth(i) corresponding to the segmentation unit.
  • the depth value standard range of each segmentation unit may be determined according to the corresponding depth reference value depth(i). Specifically, for each segmentation unit, the depth value standard range is 0.5 to 1.3 times depth(i).
  • the inventors of the present application found through extensive statistical analysis that the valid depth values of the same segmentation unit are usually concentrated in the above range; therefore, filling the pixels of the segmentation unit with the above range as the standard depth value range makes the processing result of the depth image more accurate.
  • step S105 may include:
  • the depth values of peripheral pixels centered on the current pixel in the depth image are read sequentially in the set direction, expanding outward for a set number of loop iterations;
  • the loop is jumped out and the depth value of the current pixel is adjusted to the depth value of the currently read pixel;
  • the depth value of the current pixel is adjusted to the depth reference value corresponding to the segmentation unit.
  • processing the depth image to obtain the processed depth image can be implemented in the following three ways: creating a blank image and filling its pixels point by point according to the depth image and step S105; creating a copy of the depth image and refreshing the pixels of the copy point by point according to step S105; or re-determining the processed depth image information point by point according to step S105, storing it in a memory element such as RAM, and, after the traversal of all pixels of the depth image is finished, overwriting the depth image with the re-determined information stored in memory to finally obtain the processed depth image.
  • the peripheral pixels may be located in an oblique direction of the current pixel, for example, 30°, 45°, or the like. In one embodiment, the peripheral pixels are located in the cross direction of the current pixel, which increases the computational processing speed, thereby increasing the speed at which the depth image is denoised.
  • the number of loop settings is five times in consideration of the calculation processing speed of the device.
  • the set direction followed by the traversal can be from left to right in the row direction, from right to left in the row direction, from top to bottom in the column direction, from bottom to top in the column direction, etc.
  • the order in which pixels are read in the cross direction is not limited, and can be read clockwise or counterclockwise.
  • the setting direction is the row direction from left to right, and the depth values of the pixels in the cross direction centered on the current pixel are read in the order of right, bottom, left, and top.
  • each pixel of the depth image acquired by the binocular camera is visited sequentially from left to right in the row direction; when the depth value of the currently visited pixel is within the standard depth value range corresponding to its segmentation unit, the next pixel is visited.
  • when the depth value of the current pixel is outside the standard depth value range corresponding to its segmentation unit (such as pixel A1 in FIG. 4), the depth values of the pixels in the cross directions centered on the current pixel are read in the order right, down, left, up.
  • the reading is expanded outward five times, that is, B1, B2, B3, and B4 are read the first time, C1, C2, C3, and C4 the second time, and so on.
  • during the loop, when a currently read pixel is located within the segmentation unit and its depth value is within the standard depth value range, the loop is exited and the depth value of the current pixel is adjusted to the depth value of the currently read pixel.
  • C2 is located outside the segmentation unit, so the condition is not satisfied
  • C3 is located within the segmentation unit and its depth value is within the standard depth value range corresponding to the segmentation unit; therefore, after C3 is read, the loop is exited and the depth value of A1 is adjusted to the depth value of C3. If no pixel satisfying the above conditions is read after the end of the fifth loop iteration, the depth value of the current pixel is adjusted to the depth reference value depth(i) corresponding to the segmentation unit.
  • the depth value transition within the entire segmentation unit can be smoothed and holes can be reduced, which helps improve the quality of the depth image.
  • the depth image is segmented to obtain a plurality of segmentation units; the corresponding depth reference value and the depth value standard range are calculated for each segmentation unit; and then the pixels of the depth image are traversed, The depth value of the pixel is adjusted to the depth value standard range corresponding to the segmentation unit.
  • the depth image can be denoised, and the processed depth image has a clear outline (as shown in FIG. 5), which is easy to recognize. Therefore, the quality of the depth image acquired, for example, by the binocular camera can be improved.
  • an embodiment of the present invention also provides an apparatus for processing a depth image.
  • the apparatus will be described below with reference to Fig. 6, and the same portions or functions that have been described in the above embodiments are omitted for brevity.
  • the apparatus includes:
  • the obtaining module 21 is configured to acquire a depth image and a captured image corresponding to the depth image;
  • the segmentation module 22 is configured to segment the depth image to obtain a plurality of segmentation units
  • the calculating module 23 is configured to calculate a corresponding depth reference value for each segmentation unit
  • the determining module 24 is configured to determine a depth value standard range of each of the dividing units according to the corresponding depth reference value
  • the adjustment module 25 is configured to adjust the depth value of each pixel of the depth image to a depth value standard range corresponding to the division unit where each pixel is located.
  • the segmentation module 22 is further configured to perform contour segmentation on the taken image to obtain contour segmentation information; and segment the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
  • the segmentation module 22 is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour expansion processing on the contour image to obtain a contour expansion image; invert the contour expansion image to obtain an inverted image; and use a watershed algorithm to calculate the contour segmentation information of the inverted image.
  • the calculation module 23 is further configured to remove the black point pixels and bright point pixels from the segmentation unit; count, for the segmentation unit with black point pixels and bright point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest number of pixels as the depth reference value of the segmentation unit.
  • the depth value standard ranges from 0.5 to 1.3 times the depth reference value for each segmentation unit.
  • the adjustment module 25 is further configured to traverse the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel is outside the standard depth value range corresponding to its segmentation unit, read, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loop iterations; if a currently read pixel is located within the segmentation unit and its depth value is within the standard depth value range corresponding to the segmentation unit, exit the loop and adjust the depth value of the current pixel to the depth value of the currently read pixel; and if the set number of loop iterations ends without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.
  • the peripheral pixels are located in the cross direction of the current pixel.
  • the set number of loop iterations is five; the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.
  • the depth image can be denoised, and the processed depth image contour is clear and easy to recognize. Therefore, the quality of the depth image acquired, for example, by the binocular camera can be improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Provided are a method and an apparatus for processing a depth image. The method includes: acquiring a depth image and a captured image corresponding to the depth image; segmenting the depth image to obtain a plurality of segmentation units; calculating a corresponding depth reference value for each segmentation unit; determining a standard depth value range for each segmentation unit according to the corresponding depth reference value; and adjusting the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located. The method and apparatus of embodiments of the present invention can improve the quality of a depth image and make it easier to recognize.

Description

Method and apparatus for processing a depth image

This application claims priority to Chinese Patent Application No. 201510236717.0, filed on May 11, 2015, the entire disclosure of which is incorporated herein by reference as part of this application.

Technical Field

The present invention relates to a method and an apparatus for processing a depth image.

Background

At present, mainstream somatosensory devices on the market, such as Kinect and PrimeSense, usually capture images with a depth camera based on the TOF (Time of Flight) 3D imaging principle, and then transmit the acquired depth image to middleware such as NITE (which implements functions such as gesture recognition and motion capture). Skeleton and joint-point information of the human body is obtained through software computation, enabling somatosensory control. This prior art is relatively costly because it relies on infrared devices.

Besides the TOF principle, other techniques exist for acquiring depth images. For example, using a binocular camera to acquire depth images is a comparatively inexpensive solution. However, owing to the limitations of its imaging principle, a binocular camera loses some information when acquiring a depth image, and the image is rather noisy, resulting in poor depth image quality. There is therefore a need for improved depth image processing solutions.
Summary

According to a first aspect of the present disclosure, a method for processing a depth image is provided. The method includes: acquiring a depth image and a captured image corresponding to the depth image; segmenting the depth image to obtain a plurality of segmentation units; calculating a corresponding depth reference value for each segmentation unit; determining a standard depth value range for each segmentation unit according to the corresponding depth reference value; and adjusting the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located.

In the method of the embodiments of the present invention, the depth image is segmented to obtain a plurality of segmentation units; a corresponding depth reference value and a standard depth value range are calculated for each segmentation unit; then the pixels of the depth image are traversed, and the depth value of each pixel is adjusted into the standard depth value range corresponding to its segmentation unit. With this method, the depth image can be denoised; the processed depth image has relatively clear contours and is easy to recognize. The quality of the depth image can therefore be improved.

According to one embodiment, segmenting the depth image to obtain a plurality of segmentation units includes: performing contour segmentation on the captured image to obtain contour segmentation information; and segmenting the depth image according to the contour segmentation information to obtain the plurality of segmentation units.

According to one embodiment, performing contour segmentation on the captured image to obtain contour segmentation information includes: performing grayscale processing on the captured image to obtain a grayscale image; performing edge extraction on the grayscale image to obtain a contour image; performing contour expansion processing on the contour image to obtain a contour expansion image; inverting the contour expansion image to obtain an inverted image; and calculating the contour segmentation information of the inverted image by using a watershed algorithm.

The above embodiment can calculate the contour segmentation information of a real-time captured image in a short time, which helps speed up denoising of the depth image.

According to one embodiment, calculating a corresponding depth reference value for each segmentation unit includes: removing black point pixels and bright point pixels from the segmentation unit; counting, for the segmentation unit with black point pixels and bright point pixels removed, the number of pixels at each depth value; and determining the depth value with the largest number of pixels as the depth reference value of the segmentation unit.

The above embodiment removes the black point pixels and bright point pixels, i.e., the noise pixels, in the segmentation unit, thereby improving the accuracy of the calculation result.

According to one embodiment, for each segmentation unit, the standard depth value range is 0.5 to 1.3 times the depth reference value.

According to one embodiment, for each pixel of the depth image, adjusting the depth value of the pixel into the standard depth value range corresponding to the segmentation unit in which the pixel is located includes: traversing the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel is outside the standard depth value range corresponding to its segmentation unit, reading, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loop iterations; if a currently read pixel is located within the segmentation unit and its depth value is within the standard depth value range corresponding to the segmentation unit, exiting the loop and adjusting the depth value of the current pixel to the depth value of the currently read pixel; and if the set number of loop iterations ends without the depth value of the current pixel having been adjusted, adjusting the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.

The above embodiment makes the depth value transition between the adjusted pixel and its surrounding pixels relatively smooth, which helps improve the quality of the processed depth image.

According to one embodiment, the peripheral pixels are located in the cross (four cardinal) directions of the current pixel.

The above embodiment can increase the computation speed, thereby speeding up denoising of the depth image.

According to one embodiment, the set number of loop iterations is five; the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.

The above embodiment can effectively fill pixels in a short time with a small amount of computation, which helps speed up denoising of the depth image.
According to a second aspect of the present disclosure, an apparatus for processing a depth image is provided. The apparatus includes: an acquisition module configured to acquire a depth image and a captured image corresponding to the depth image; a segmentation module configured to segment the depth image to obtain a plurality of segmentation units; a calculation module configured to calculate a corresponding depth reference value for each segmentation unit; a determination module configured to determine a standard depth value range for each segmentation unit according to the corresponding depth reference value; and an adjustment module configured to adjust the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located.

According to one embodiment, the segmentation module is further configured to perform contour segmentation on the captured image to obtain contour segmentation information, and to segment the depth image according to the contour segmentation information to obtain the plurality of segmentation units.

According to one embodiment, the segmentation module is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour expansion processing on the contour image to obtain a contour expansion image; invert the contour expansion image to obtain an inverted image; and calculate the contour segmentation information of the inverted image by using a watershed algorithm.

According to one embodiment, the calculation module is further configured to remove black point pixels and bright point pixels from the segmentation unit; count, for the segmentation unit with black point pixels and bright point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest number of pixels as the depth reference value of the segmentation unit.

According to one embodiment, for each segmentation unit, the standard depth value range is 0.5 to 1.3 times the depth reference value.

According to one embodiment, the adjustment module is further configured to traverse the pixels of the depth image in a set direction; during the traversal, if the depth value of the current pixel is outside the standard depth value range corresponding to its segmentation unit, read, in the set direction, the depth values of peripheral pixels centered on the current pixel in the depth image, expanding outward for a set number of loop iterations; if a currently read pixel is located within the segmentation unit and its depth value is within the standard depth value range corresponding to the segmentation unit, exit the loop and adjust the depth value of the current pixel to the depth value of the currently read pixel; and if the set number of loop iterations ends without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit.

According to one embodiment, the peripheral pixels are located in the cross (four cardinal) directions of the current pixel.

According to one embodiment, the set number of loop iterations is five; the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.

With the processing apparatus provided by the embodiments of the present invention, the depth image can be denoised; the processed depth image has relatively clear contours and is easy to recognize. The quality of the depth image can therefore be improved.
Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present disclosure and do not limit the present disclosure.

FIG. 1 shows a flow chart of a method for processing a depth image according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of contour segmentation of a captured image according to an embodiment of the present invention;

FIG. 3 shows a histogram of depth values versus pixel counts for a segmentation unit according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of the pixel filling principle according to an embodiment of the present invention;

FIG. 5 shows a comparison of a depth image before and after denoising according to an embodiment of the present invention; and

FIG. 6 shows a block diagram of an apparatus for processing a depth image according to an embodiment of the present invention.
Detailed Description

To help those skilled in the art better understand the objectives, technical solutions, and advantages of the embodiments of the present disclosure, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the described embodiments without creative effort fall within the protection scope of the present disclosure.

Although the following embodiments are discussed mainly in the context of processing a depth image from a binocular camera, those skilled in the art will appreciate that the present disclosure is not limited thereto. Indeed, the various embodiments of the present disclosure may be applied to processing any suitable depth image, for example a depth image acquired from any other suitable imaging device or generated by other methods.

Embodiments of the present invention provide a method and an apparatus for processing a depth image. To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below.
As shown in FIG. 1, the method for processing a depth image provided by an embodiment of the present invention includes the following steps:

Step S101: acquiring a depth image and a captured image corresponding to the depth image;

Step S102: segmenting the depth image to obtain a plurality of segmentation units;

Step S103: calculating a corresponding depth reference value for each segmentation unit;

Step S104: determining a standard depth value range for each segmentation unit according to the corresponding depth reference value;

Step S105: adjusting the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located.
In step S101, the depth image and the captured image may be acquired from a binocular camera, or obtained in other ways, for example a depth image and a captured image already obtained by any other suitable solution. A binocular camera includes a main camera and a secondary camera, whose coordinate systems have a certain positional offset from each other. The control chip of the binocular camera computes the depth information image of the target object in space, i.e., the depth image. Generally, the depth image acquired by the binocular camera uses a coordinate system consistent with that of the image captured by the main camera. The images captured by the two cameras of the binocular camera are typically RGB color images, but may be any other suitable color images.

Segmenting the depth image can be implemented in various ways. In one embodiment, step S102 may include:

performing contour segmentation on the captured image to obtain contour segmentation information;

segmenting the depth image according to the contour segmentation information to obtain a plurality of segmentation units.

For the image captured by the main camera, if each small contour region corresponds in reality to the same object or the same part of an object, the depth values within that contour region should be relatively close. Therefore, contour segmentation may be performed on the image captured by the main camera, and the depth image may be segmented according to the obtained contour segmentation information to obtain a plurality of segmentation units. In step S105, "adjusting" may include re-filling the pixels of the depth image with depth values.

The specific algorithm used for contour segmentation of the main camera's captured image is not limited; for example, a pyramid segmentation algorithm, a mean shift segmentation algorithm, or a watershed segmentation algorithm may be used. In one embodiment, considering that a somatosensory device needs to process real-time depth images in a short time (contour segmentation and pixel filling need to be completed within 30 ms), the watershed algorithm may be used.
As shown in FIG. 2, in one embodiment of the present invention, step S102 includes the following sub-steps:
performing grayscale processing on the captured image to obtain a grayscale image (i.e., the second picture in FIG. 2);
performing edge extraction on the grayscale image to obtain a contour image (i.e., the third picture in FIG. 2);
performing contour dilation on the contour image to obtain a contour-dilated image (i.e., the fourth picture in FIG. 2);
inverting the contour-dilated image to obtain an inverted image (i.e., the fifth picture in FIG. 2); and
computing the contour segmentation information of the inverted image using the watershed algorithm (i.e., the sixth picture in FIG. 2).
With the above method, the contour segmentation information of the image captured in real time by the main camera can be computed within a short time, which helps speed up the denoising of the depth image. The contour segmentation information may include contour position information and coding information of the segmentation units. Edge extraction on the grayscale image may use the Canny edge detection algorithm, a fast image edge detection algorithm whose result is a binary image of white contour lines on a black background.
In one embodiment of the present invention, step S103 may include the following sub-steps:
removing black-point pixels and bright-point pixels from the segmentation unit; and
counting, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value, and taking the depth value with the largest pixel count as the depth reference value of that segmentation unit.
Removing black-point pixels (e.g., depth value 0) and bright-point pixels (e.g., depth value 255) from the segmentation unit, i.e. removing noise pixels of low reliability, improves the accuracy of the computed result.
As shown in FIG. 3, in one embodiment of the present invention, the depth reference value may be determined as follows: for the segmentation unit with black-point and bright-point pixels removed, a histogram is plotted with depth value on the horizontal axis and pixel count on the vertical axis, and the depth value with the largest pixel count (i.e., the peak of the histogram) is taken as the depth reference value depth(i) of that segmentation unit.
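As an illustration, the histogram-peak computation for one segmentation unit can be written in a few lines of NumPy. The function names are hypothetical; 8-bit depth values with noise at 0 and 255, as in the example above, are assumed:

```python
import numpy as np

def depth_reference(depths):
    """depth(i) of one segmentation unit: the most frequent valid depth value.

    `depths` is a 1-D array of 8-bit depth values belonging to the unit.
    Black points (0) and bright points (255) are discarded as noise first.
    """
    valid = depths[(depths != 0) & (depths != 255)]
    counts = np.bincount(valid, minlength=256)  # histogram over depth values
    return int(counts.argmax())                 # peak of the histogram

def standard_range(ref):
    """Standard depth value range of step S104: 0.5 to 1.3 times depth(i)."""
    return 0.5 * ref, 1.3 * ref
```

For example, `depth_reference(np.array([0, 255, 80, 80, 90], dtype=np.uint8))` returns 80, and `standard_range(80)` gives (40.0, 104.0).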
In step S104, after the depth reference value depth(i) of a segmentation unit has been computed, the standard depth value range of each segmentation unit may be determined from the corresponding depth(i). Specifically, for each segmentation unit, the standard depth value range is 0.5 to 1.3 times depth(i). The inventors of the present application found through extensive statistical analysis that the valid depth values of a given segmentation unit are usually concentrated in this range; therefore, using this range as the standard depth value range when filling the pixels of the segmentation unit makes the depth image processing result more accurate.
In one embodiment of the present invention, step S105 may include:
traversing the pixels of the depth image in a set direction, and during the traversal:
if the depth value of the current pixel is outside the standard depth value range corresponding to the segmentation unit in which it is located, reading, in the set direction, the depth values of pixels surrounding the current pixel in the depth image and expanding outward for a set number of loops;
if a currently read pixel is located within the segmentation unit and its depth value falls within the standard depth value range corresponding to that segmentation unit, exiting the loop and adjusting the depth value of the current pixel to the depth value of the currently read pixel; and
if the set number of loops completes without the depth value of the current pixel having been adjusted, adjusting the depth value of the current pixel to the depth reference value corresponding to the segmentation unit in which it is located.
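A minimal pure-Python sketch of this traversal, assuming a label map and per-unit reference values are available beforehand; the helper names, the list-of-lists representation, and the reuse of the 0.5 to 1.3 range are illustrative:

```python
def fill_depth(depth, labels, refs, loops=5):
    """Adjust out-of-range pixels, traversing row-wise from left to right.

    depth  : list of lists of depth values (read-only; a filled copy is returned)
    labels : list of lists of segmentation-unit ids, same shape as depth
    refs   : dict mapping unit id -> depth reference value depth(i)
    """
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    in_range = lambda v, ref: 0.5 * ref <= v <= 1.3 * ref  # standard range
    for y in range(h):
        for x in range(w):
            ref = refs[labels[y][x]]
            if in_range(depth[y][x], ref):
                continue  # valid pixel, move on
            # Read cross-direction neighbours in the order right, down, left, up,
            # expanding outward ring by ring for `loops` rings.
            for r in range(1, loops + 1):
                for dx, dy in ((r, 0), (0, r), (-r, 0), (0, -r)):
                    nx, ny = x + dx, y + dy
                    if not (0 <= nx < w and 0 <= ny < h):
                        continue
                    if labels[ny][nx] == labels[y][x] and in_range(depth[ny][nx], ref):
                        out[y][x] = depth[ny][nx]  # take the first valid neighbour
                        break
                else:
                    continue
                break
            else:
                out[y][x] = ref  # no valid neighbour found: use depth(i)
    return out
```

Reading from the original image while writing into a copy corresponds to the second of the three implementation ways described below.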
Processing the depth image to obtain a processed depth image may be implemented in the following three ways:
1. creating a blank image, and filling its pixels point by point from the depth image according to step S105, finally obtaining the processed depth image;
2. creating a copy of the depth image, and refreshing the pixels of the copy point by point from the depth image according to step S105, finally obtaining the processed depth image;
3. re-determining the processed depth image information point by point from the pixel depth values of the depth image according to step S105 and storing it in a memory element, such as RAM; after the traversal of the whole depth image is finished, overwriting the depth image with the re-determined information stored in memory, finally obtaining the processed depth image.
In the depth image, the surrounding pixels may be located in diagonal directions of the current pixel, for example at 30° or 45°. In one embodiment, the surrounding pixels are located in the cross directions (up, down, left and right) of the current pixel, which increases the computation speed and thus the speed of denoising the depth image.
In one embodiment, considering the computation speed of the device, the set number of loops is five. The set direction of the traversal may be row-wise from left to right, row-wise from right to left, column-wise from top to bottom, column-wise from bottom to top, and so on. In addition, the order of reading pixels in the cross directions is not limited; they may be read clockwise or counterclockwise. In one embodiment of the present invention, the set direction is row-wise from left to right, and the depth values of the pixels in the cross directions centered on the current pixel are read in the order right, down, left, up.
Specifically, as shown in FIG. 4, each pixel of the depth image captured by the binocular camera is visited row by row from left to right. When the depth value of the currently visited pixel is within the standard depth value range of its segmentation unit, the next pixel is visited. When the depth value of the current pixel is outside that range (such as pixel A1 in the figure), the depth values of the pixels in the cross directions centered on the current pixel are read in the order right, down, left, up, expanding outward for five loops: B1, B2, B3, B4 are read in the first loop, C1, C2, C3, C4 in the second, and so on. During the loop, when a currently read pixel is located within the segmentation unit and its depth value is within the standard depth value range of that unit, the loop is exited and the depth value of the current pixel is adjusted to that of the currently read pixel. As shown in FIG. 4, C2 lies outside the segmentation unit and thus does not satisfy the condition, while C3 lies inside the unit and its depth value is within the standard range; therefore, after C3 is read the loop is exited and the depth value of A1 is adjusted to that of C3. If no pixel satisfying the above conditions is read after the five loops, the depth value of the current pixel is adjusted to the depth reference value depth(i) of its segmentation unit.
With the above scheme, the depth values within a segmentation unit transition smoothly and holes are reduced, which helps improve the quality of the depth image.
In the technical solution of the embodiments of the present invention, the depth image is segmented to obtain a plurality of segmentation units; a corresponding depth reference value and standard depth value range are computed for each segmentation unit; then the pixels of the depth image are traversed, and the depth value of each pixel is adjusted into the standard depth value range of its segmentation unit. With this solution, the depth image can be denoised; the processed depth image has clearer contours (as shown in FIG. 5) and is easier to recognize. Therefore, the quality of a depth image acquired, for example, by a binocular camera can be improved.
Based on the same inventive concept, an embodiment of the present invention further provides an apparatus for processing a depth image, described below with reference to FIG. 6; descriptions of parts or functions identical to those of the embodiments described above are omitted for brevity. Referring to FIG. 6, the apparatus includes:
an acquisition module 21, configured to acquire a depth image and a captured image corresponding to the depth image;
a segmentation module 22, configured to segment the depth image to obtain a plurality of segmentation units;
a computation module 23, configured to compute a corresponding depth reference value for each segmentation unit;
a determination module 24, configured to determine a standard depth value range for each segmentation unit according to the corresponding depth reference value; and
an adjustment module 25, configured to adjust the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which that pixel is located.
In one embodiment, the segmentation module 22 is further configured to perform contour segmentation on the captured image to obtain contour segmentation information, and to segment the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
In one embodiment of the present invention, the segmentation module 22 is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour dilation on the contour image to obtain a contour-dilated image; invert the contour-dilated image to obtain an inverted image; and compute the contour segmentation information of the inverted image using a watershed algorithm.
In one embodiment of the present invention, the computation module 23 is further configured to remove black-point pixels and bright-point pixels from the segmentation unit; count, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest pixel count as the depth reference value of that segmentation unit.
In one embodiment of the present invention, for each segmentation unit, the standard depth value range is 0.5 to 1.3 times the depth reference value.
In one embodiment of the present invention, the adjustment module 25 is further configured to traverse the pixels of the depth image in a set direction, and during the traversal: if the depth value of the current pixel is outside the standard depth value range corresponding to the segmentation unit in which it is located, read, in the set direction, the depth values of pixels surrounding the current pixel in the depth image, expanding outward for a set number of loops; if a currently read pixel is located within the segmentation unit and its depth value falls within the standard depth value range corresponding to that segmentation unit, exit the loop and adjust the depth value of the current pixel to the depth value of the currently read pixel; and if the set number of loops completes without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit in which it is located.
In one embodiment of the present invention, the surrounding pixels are located in the cross directions of the current pixel.
In one embodiment of the present invention, the set number of loops is five; the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.
The apparatus for processing a depth image according to the above embodiments can denoise a depth image; the processed depth image has clearer contours and is easier to recognize. Therefore, the quality of a depth image acquired, for example, by a binocular camera can be improved.
Evidently, those skilled in the art may make various modifications and variations to the present invention without departing from its spirit and scope. The foregoing is merely exemplary embodiments of the present invention and is not intended to limit its protection scope, which is defined by the appended claims.

Claims (16)

  1. A method for processing a depth image, comprising:
    acquiring a depth image and a captured image corresponding to the depth image;
    segmenting the depth image to obtain a plurality of segmentation units;
    computing a corresponding depth reference value for each segmentation unit;
    determining a standard depth value range for each segmentation unit according to the corresponding depth reference value; and
    adjusting the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located.
  2. The method according to claim 1, wherein segmenting the depth image to obtain a plurality of segmentation units comprises:
    performing contour segmentation on the captured image to obtain contour segmentation information; and
    segmenting the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
  3. The method according to claim 2, wherein performing contour segmentation on the captured image to obtain contour segmentation information comprises:
    performing grayscale processing on the captured image to obtain a grayscale image;
    performing edge extraction on the grayscale image to obtain a contour image;
    performing contour dilation on the contour image to obtain a contour-dilated image;
    inverting the contour-dilated image to obtain an inverted image; and
    computing the contour segmentation information of the inverted image using a watershed algorithm.
  4. The method according to any one of claims 1-3, wherein computing a corresponding depth reference value for each segmentation unit comprises:
    removing black-point pixels and bright-point pixels from the segmentation unit;
    counting, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and
    determining the depth value with the largest pixel count as the depth reference value of the segmentation unit.
  5. The method according to any one of claims 1-4, wherein, for each segmentation unit, the standard depth value range is 0.5 to 1.3 times the depth reference value.
  6. The method according to any one of claims 1-5, wherein, for each pixel of the depth image, adjusting the depth value of the pixel into the standard depth value range corresponding to the segmentation unit in which the pixel is located comprises:
    traversing the pixels of the depth image in a set direction, and during the traversal:
    if the depth value of the current pixel is outside the standard depth value range corresponding to the segmentation unit in which it is located, reading, in the set direction, the depth values of pixels surrounding the current pixel in the depth image and expanding outward for a set number of loops;
    if a currently read pixel is located within the segmentation unit and its depth value falls within the standard depth value range corresponding to that segmentation unit, exiting the loop and adjusting the depth value of the current pixel to the depth value of the currently read pixel; and
    if the set number of loops completes without the depth value of the current pixel having been adjusted, adjusting the depth value of the current pixel to the depth reference value corresponding to the segmentation unit in which it is located.
  7. The method according to claim 6, wherein the surrounding pixels are located in the cross directions of the current pixel.
  8. The method according to claim 6 or 7, wherein the set number of loops is five, and the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.
  9. An apparatus for processing a depth image, comprising:
    an acquisition module configured to acquire a depth image and a captured image corresponding to the depth image;
    a segmentation module configured to segment the depth image to obtain a plurality of segmentation units;
    a computation module configured to compute a corresponding depth reference value for each segmentation unit;
    a determination module configured to determine a standard depth value range for each segmentation unit according to the corresponding depth reference value; and
    an adjustment module configured to adjust the depth value of each pixel of the depth image into the standard depth value range corresponding to the segmentation unit in which the pixel is located.
  10. The apparatus according to claim 9, wherein the segmentation module is further configured to perform contour segmentation on the captured image to obtain contour segmentation information, and to segment the depth image according to the contour segmentation information to obtain a plurality of segmentation units.
  11. The apparatus according to claim 10, wherein the segmentation module is further configured to perform grayscale processing on the captured image to obtain a grayscale image; perform edge extraction on the grayscale image to obtain a contour image; perform contour dilation on the contour image to obtain a contour-dilated image; invert the contour-dilated image to obtain an inverted image; and compute the contour segmentation information of the inverted image using a watershed algorithm.
  12. The apparatus according to any one of claims 9-11, wherein the computation module is further configured to remove black-point pixels and bright-point pixels from the segmentation unit; count, for the segmentation unit with black-point and bright-point pixels removed, the number of pixels at each depth value; and determine the depth value with the largest pixel count as the depth reference value of the segmentation unit.
  13. The apparatus according to any one of claims 9-12, wherein, for each segmentation unit, the standard depth value range is 0.5 to 1.3 times the depth reference value.
  14. The apparatus according to any one of claims 9-13, wherein the adjustment module is further configured to traverse the pixels of the depth image in a set direction, and during the traversal:
    if the depth value of the current pixel is outside the standard depth value range corresponding to the segmentation unit in which it is located, read, in the set direction, the depth values of pixels surrounding the current pixel in the depth image, expanding outward for a set number of loops;
    if a currently read pixel is located within the segmentation unit and its depth value falls within the standard depth value range corresponding to that segmentation unit, exit the loop and adjust the depth value of the current pixel to the depth value of the currently read pixel; and
    if the set number of loops completes without the depth value of the current pixel having been adjusted, adjust the depth value of the current pixel to the depth reference value corresponding to the segmentation unit in which it is located.
  15. The apparatus according to claim 14, wherein the surrounding pixels are located in the cross directions of the current pixel.
  16. The apparatus according to claim 14 or 15, wherein the set number of loops is five; the set direction is row-wise from left to right, row-wise from right to left, column-wise from top to bottom, or column-wise from bottom to top.
PCT/CN2015/093867 2015-05-11 2015-11-05 Method and apparatus for processing a depth image WO2016179979A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US15/026,870 US9811921B2 (en) 2015-05-11 2015-11-05 Apparatus and method for processing a depth image
JP2017500972A JP6577565B2 (ja) 2015-05-11 2015-11-05 Method and apparatus for depth image processing
KR1020167031512A KR101881243B1 (ko) 2015-05-11 2015-11-05 Method and apparatus for processing a depth image
EP15839095.5A EP3296953B1 (en) 2015-05-11 2015-11-05 Method and device for processing depth images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510236717.0 2015-05-11
CN201510236717.0A CN104835164B (zh) 2015-05-11 2015-05-11 Method and apparatus for processing a depth image from a binocular camera

Publications (1)

Publication Number Publication Date
WO2016179979A1 true WO2016179979A1 (zh) 2016-11-17

Family

ID=53813029

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/093867 WO2016179979A1 (zh) 2015-05-11 2015-11-05 一种用于处理深度图像的方法及装置

Country Status (6)

Country Link
US (1) US9811921B2 (zh)
EP (1) EP3296953B1 (zh)
JP (1) JP6577565B2 (zh)
KR (1) KR101881243B1 (zh)
CN (1) CN104835164B (zh)
WO (1) WO2016179979A1 (zh)



Also Published As

Publication number Publication date
EP3296953B1 (en) 2021-09-15
EP3296953A4 (en) 2018-11-21
CN104835164A (zh) 2015-08-12
EP3296953A1 (en) 2018-03-21
JP6577565B2 (ja) 2019-09-18
US9811921B2 (en) 2017-11-07
US20170132803A1 (en) 2017-05-11
KR101881243B1 (ko) 2018-07-23
CN104835164B (zh) 2017-07-28
KR20160148577A (ko) 2016-12-26
JP2018515743A (ja) 2018-06-14


Legal Events

- WWE (WIPO information: entry into national phase): Ref document number 15026870; Country of ref document: US
- ENP (Entry into the national phase): Ref document number 20167031512; Country of ref document: KR; Kind code of ref document: A
- 121 (Ep: the epo has been informed by wipo that ep was designated in this application): Ref document number 15839095; Country of ref document: EP; Kind code of ref document: A1
- ENP (Entry into the national phase): Ref document number 2017500972; Country of ref document: JP; Kind code of ref document: A
- NENP (Non-entry into the national phase): Ref country code: DE
- WWE (WIPO information: entry into national phase): Ref document number 2015839095; Country of ref document: EP