CN104299220A - Method for filling cavities in a Kinect depth image in real time

Info

Publication number
CN104299220A
CN104299220A (application number CN201410327220.5A)
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410327220.5A
Other languages
Chinese (zh)
Other versions
CN104299220B (en)
Inventor
安平
王健鑫
尤志翔
张兆扬
尚峰
范金慧
施剑平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI MEDIA GROUP Inc
University of Shanghai for Science and Technology
Original Assignee
SHANGHAI MEDIA GROUP Inc
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI MEDIA GROUP Inc, University of Shanghai for Science and Technology filed Critical SHANGHAI MEDIA GROUP Inc
Priority to CN201410327220.5A priority Critical patent/CN104299220B/en
Publication of CN104299220A publication Critical patent/CN104299220A/en
Application granted granted Critical
Publication of CN104299220B publication Critical patent/CN104299220B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/50 Depth or shape recovery
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds


Abstract

Disclosed is a method for filling cavities in a Kinect depth image in real time. The method comprises the following steps: step 1, obtaining a color image of the shooting scene and the corresponding depth image with a Kinect camera, and determining the foreground and background image positions using the running Gaussian average method and background subtraction; step 2, constructing a deepest depth image from the Kinect depth images and filling the cavities in the deepest depth image; step 3, replacing the background pixels of the depth image with pixels of the deepest depth image, thereby filling the cavities of the depth image background; and step 4, using the foreground image position obtained in step 1, marking the cavities in the foreground of the Kinect depth image and filling the marked cavities in real time. With the invention, object edges in the cavity-filled depth video become stable and no flicker occurs in the depth video; at the same time, the cavity-filling speed and the stability of the depth image are effectively improved.

Description

Method for filling the cavities in a Kinect depth image in real time
Technical field
The invention belongs to the field of depth image processing and, more specifically, relates to a method for filling the cavities in a Kinect depth image in real time.
Background technology
Kinect is a 3D stereo camera exhibited by Microsoft in June 2010 (see K. Khoshelham, S. Oude Elberink. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications. Sensors, 2012(12), pp. 1437-1454). It can simultaneously acquire a color image of the shooting scene and the corresponding depth image. Because the Kinect camera can obtain the depth image of a scene in real time, conveniently and cheaply, it has been widely used in fields such as 3D scene reconstruction and object segmentation.
The Kinect captures depth images at 30 fps, and in practice there is strong temporal correlation between consecutive frames of the depth video, which is significant for image compression and image processing. However, the images are affected by time-varying speckle conditions: between depth frames the Kinect may fail to capture changes in the reflected light, or may not identify the speckle changes correctly, so that individual pixels of the acquired depth image fluctuate and the captured depth video flickers. The Kinect camera obtains depth images by mapping different speckle patterns; the resulting depth image is affected by lighting conditions and the imaging mode, which causes depth mismatches at object edges, so cavities appear in the depth image.
At present, methods for filling the cavities in a Kinect depth image fall into two classes:
The first class of methods uses temporal information to fill Kinect depth image cavities in real time. For example, Matyunin et al. (see S. Matyunin, D. Vatolin, Y. Berdnikov, M. Smirnov. Temporal filtering for depth maps generated by Kinect depth camera. 3DTV Conference, 2011, pp. 1-4) use several consecutive frames of the Kinect color video and the corresponding Kinect depth frames to obtain the motion information of objects and the pixel-value changes at each position of the depth image; the value of a cavity pixel to be filled is set to the median of the pixel values at the same position over the consecutive frames. This method can fill the cavities of a Kinect depth image in real time with fairly good quality, which allows real-time application development on depth images. However, although such methods solve the real-time filling problem, they cannot keep the object edges in the depth image complete and accurate, and the flicker problem in the resulting depth video remains unresolved.
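As a point of comparison, the temporal (first-class) filling described above can be sketched as a per-pixel median over a short window of consecutive depth frames. This is a minimal illustration, not the cited authors' implementation; the window size k and the cavity threshold are assumed example values.

```python
import numpy as np

HOLE_THRESH = 10  # depth values below this are treated as cavity pixels (assumed example)

def temporal_median_fill(frames, k=5):
    """Fill the cavities of the latest depth frame with the per-pixel median
    of the last k frames (sketch of the temporal approach)."""
    stack = np.stack(frames[-k:]).astype(float)  # last k depth frames
    latest = stack[-1].copy()
    holes = latest < HOLE_THRESH                 # cavity mask of the latest frame
    latest[holes] = np.median(stack, axis=0)[holes]
    return latest
```

Because the median is taken over time, a cavity that appears in only one or two frames is replaced by the stable value seen in the other frames, which is what makes this class of methods fast enough for real-time use.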
The second class of methods uses spatial information to fill Kinect depth image cavities. These methods fill a cavity from the neighboring pixels of the depth image (see N.-E. Yang, Y.-G. Kim et al. Depth hole filling using the depth distribution of neighboring regions of depth holes in the Kinect sensor. Signal Processing, Communication and Computing (ICSPCC), 2012 IEEE International Conference on, pp. 658-661), or fill the cavities of the depth image with polygon filtering (see M. Camplani and L. Salgado. Efficient spatio-temporal hole filling strategy for kinect depth maps. Three-Dimensional Image Processing (3DIP) and Applications, 2012). They exploit the depth correlation between a cavity pixel and its neighborhood, together with the color correlation in the Kinect color image corresponding to the depth image, to fill the cavities of the Kinect depth image. The filling accuracy of these spatial methods is lower than that of the first class using temporal information, and their time complexity is too high: filling is costly, real-time filling of the Kinect depth image cannot be achieved, and the applications of the Kinect are constrained.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art. The present invention proposes a method for filling the cavities in a Kinect depth image in real time with higher filling accuracy and better image stability.
A cavity in a depth image, as used in the present invention, refers to the pixels whose value is below a set threshold in the depth image captured by the Kinect camera; for example, the cavity pixels of a depth image may be defined as the pixels whose value is less than 10.
To achieve the above object, the method of the present invention for filling the cavities in a Kinect depth image in real time comprises the following steps:
Step 1: obtain a color image of the shooting scene and the corresponding depth image with a Kinect camera; extract the background of the color image with the running Gaussian average method to obtain a background image; then segment the Kinect color image with background subtraction to obtain the foreground image, and determine the positions of the foreground image and of the background image;
Step 2: build the deepest depth image (DDI) from the Kinect depth images; then fill the cavities in the deepest depth image, generating a cavity-filled deepest depth image;
Step 3: using the background image position obtained in step 1 as the background position of the depth image, replace the background pixels of the depth image with the pixels of the cavity-filled deepest depth image, thereby filling the cavities of the depth image background;
Step 4: using the foreground image position obtained in step 1, mark the cavities in the foreground of the Kinect depth image, and then fill the marked cavities in real time.
The concrete steps of step 1 above (obtaining the color image of the shooting scene and the corresponding depth image with the Kinect camera, extracting the color image background with the running Gaussian average method to obtain the background image, then segmenting the Kinect color image with background subtraction to obtain the foreground image, and determining the positions of the foreground and background images) are as follows:
1.1 Obtain the color image of the shooting scene and the corresponding depth image with the Kinect camera;
1.2 Extract the background of the Kinect color image with the running Gaussian average method to obtain the background image:

B_i(x, y) = α · I_i(x, y) + (1 - α) · B_{i-1}(x, y)   (1)

where (x, y) denotes the coordinate at row x, column y of the image, I_i(x, y) is the pixel value of the i-th color frame at (x, y), B_i(x, y) is the pixel value of the background image of the i-th frame at (x, y), and α is a parameter set according to the shooting environment;
1.3 Segment the Kinect color image with background subtraction to obtain the foreground image and determine the positions of the foreground image and the background image:

D_i(x, y) = |I_i(x, y) - B_i(x, y)|   (2)

where D_i(x, y) is the difference between the pixel of the i-th color frame and the pixel of the background image at (x, y);
1.4 When this difference exceeds a set threshold T, the position is judged to belong to the foreground image, otherwise to the background image:

F_i(x, y) = 1 if D_i(x, y) > T, and F_i(x, y) = 0 otherwise   (3)

where F_i(x, y) is the foreground/background position obtained by segmenting the i-th color frame: F_i(x, y) = 1 means that (x, y) is a foreground position, and F_i(x, y) = 0 means that (x, y) is a background position.
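The running Gaussian average (1) and the threshold test (2)-(3) of step 1 can be sketched as follows; the learning rate alpha and the threshold T are assumed example values, since the patent leaves them scene-dependent.

```python
import numpy as np

ALPHA = 0.05   # learning rate of the running Gaussian average (assumed example value)
T = 30.0       # foreground threshold T (assumed example value)

def update_background(frame, background, alpha=ALPHA):
    """Running Gaussian average, formula (1):
    B_i = alpha * I_i + (1 - alpha) * B_{i-1}."""
    return alpha * frame + (1.0 - alpha) * background

def segment_foreground(frame, background, t=T):
    """Background subtraction, formulas (2) and (3):
    F_i(x, y) = 1 where |I_i - B_i| > T (foreground), else 0 (background)."""
    diff = np.abs(frame - background)      # (2)
    return (diff > t).astype(np.uint8)     # (3)
```

Per frame, the background model is updated first and the foreground mask is then taken against the updated model; both outputs are reused in steps 3 and 4.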
The concrete steps of step 2 above (building the deepest depth image (DDI) from the Kinect depth images, then filling its cavities to generate the cavity-filled deepest depth image) are as follows:
2.1 Initialization of the deepest depth image (DDI): the pixel values of the first depth image of the Kinect depth image sequence are taken as the initial depth values of the DDI;
2.2 From the second depth image on, each newly acquired depth image is used to update the initial depth values of the DDI according to the following deepest-value update rule, which determines the non-cavity pixel values and the cavity pixel values of the DDI:
2.2.1 When the pixel value DDI(x, y) of the deepest depth image at (x, y) is a non-cavity value:
judge whether DDI(x, y) satisfies the following calculating formula (4):
(4)
if it does, the pixel value of the deepest depth image at (x, y) is determined as a non-cavity pixel value;
where D_i(x, y) is the pixel value of the i-th depth frame at (x, y), DDI(x, y) is the pixel value of the deepest depth image at (x, y), and M(x, y) is the pixel value occurring most frequently at (x, y) in the acquired depth images;
2.2.2 When the pixel value DDI(x, y) of the deepest depth image at (x, y) is a cavity value:
judge whether DDI(x, y) satisfies the following calculating formula (5):
(5)
if it does, the pixel value of the deepest depth image at (x, y) is determined as a cavity pixel value;
where DDI(x, y) is the pixel value of the deepest depth image at (x, y), and M(x, y) is the pixel value occurring most frequently at (x, y) in the acquired depth images;
2.3 Set up a search window of w × w pixels centered on a cavity pixel p of the deepest depth image. Denote a non-cavity pixel in the search window by q and its color value by c(q), and denote the color value of the cavity pixel p by c(p). When the error between the color value of the cavity pixel and that of a non-cavity pixel in the search window is minimal, i.e. when |c(p) - c(q)| takes its minimum, the color value of the non-cavity pixel q is regarded as close to the color value of the cavity pixel p;
Count the non-cavity pixels in the search window whose color values are close to the color value of the cavity pixel; let their number after counting be n. Compute the weighted average pixel value of these close pixels and fill the corresponding cavity of the deepest depth image with it:

DDI(p) = Σ_q w_q · DDI(q)   (6)

with the weights w_q in formula (6) given by

w_q = n_q / n   (7)

where p is a cavity pixel of the deepest depth image of the i-th frame, q is a non-cavity pixel of the search window, c(·) is the pixel at the corresponding position of the color image, the sum ranges over the search window, and w_q is the ratio of the number n_q of pixels in the window whose color values are close to that of the non-cavity pixel q to the number n of all pixels whose color values are close to that of the cavity pixel.
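Step 2 can be sketched as below. Because formulas (4) and (5) are not reproduced in this text, the DDI update shown here simply keeps the deepest valid value per pixel as a stand-in; the window size w and the color tolerance in the fill routine are likewise assumed example values.

```python
import numpy as np

HOLE_THRESH = 10  # depth values below this are treated as cavity pixels (per the description)

def update_ddi(ddi, depth):
    """One DDI update step (stand-in for formulas (4)-(5)): per pixel, keep the
    deepest valid value seen so far, and let any valid observation replace a
    cavity pixel of the DDI."""
    valid = depth >= HOLE_THRESH
    ddi_valid = ddi >= HOLE_THRESH
    out = ddi.copy()
    out[~ddi_valid & valid] = depth[~ddi_valid & valid]    # cavity in DDI, valid in frame
    both = ddi_valid & valid
    out[both] = np.maximum(ddi[both], depth[both])         # both valid: keep the deeper
    return out

def fill_hole_pixel(ddi, color, y, x, w=5, color_tol=20.0):
    """Colour-guided fill of one DDI cavity pixel (in the spirit of (6)-(7)):
    average the depths of the non-cavity pixels in a w x w search window whose
    colour is close to the cavity pixel's colour."""
    h, wid = ddi.shape
    r = w // 2
    y0, y1 = max(0, y - r), min(h, y + r + 1)
    x0, x1 = max(0, x - r), min(wid, x + r + 1)
    win_d = ddi[y0:y1, x0:x1]
    win_c = color[y0:y1, x0:x1]
    close = (win_d >= HOLE_THRESH) & (np.abs(win_c - color[y, x]) <= color_tol)
    if not close.any():
        return ddi[y, x]          # nothing usable in the window; leave the cavity
    return float(win_d[close].mean())
```

The colour test restricts the average to window pixels that likely belong to the same surface as the cavity pixel, which is what keeps the filled depth edges aligned with the colour edges.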
The concrete steps of step 4 above (using the foreground image position obtained in step 1, marking the cavities in the foreground of the Kinect depth image, and then filling the marked cavities in real time) are as follows:
Step 4.1: using the foreground image position obtained in step 1, detect whether cavities exist in the foreground of the current Kinect depth image before depth-image cavity filling; if cavities exist, mark the cavities of the foreground of the depth image;
Step 4.2: for the pixels that belong to the foreground position and are non-cavity pixels of the depth image, count their number m and their pixel values, and obtain their average pixel value; fill the marked cavities with this average pixel value:

D_mark = (1/m) · Σ D_i(x, y), the sum taken over the m non-cavity pixels (x, y) of the foreground image   (8)

where D_mark is the depth value assigned to the marks, the foreground image position is that obtained in step 1, and m is the number of non-cavity pixels of the foreground image.
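The mark-and-fill rule of step 4 (formula (8)) amounts to replacing every foreground cavity pixel with the mean of the valid foreground depths; a minimal sketch, with the cavity threshold of 10 taken from the description above:

```python
import numpy as np

HOLE_THRESH = 10  # cavity = depth value below this, per the description

def fill_foreground_holes(depth, fg_mask):
    """Formula (8): replace every marked foreground cavity pixel with the mean
    of the m non-cavity foreground pixels."""
    fg = fg_mask.astype(bool)
    valid = fg & (depth >= HOLE_THRESH)    # the m non-cavity foreground pixels
    holes = fg & (depth < HOLE_THRESH)     # the marked cavities
    out = depth.astype(float).copy()
    if valid.any():
        out[holes] = out[valid].mean()     # D_mark = (1/m) * sum of valid depths
    return out
```

Using one average for the whole foreground region is what makes this step cheap enough to run per frame; it trades local detail for speed inside the already-segmented foreground.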
The technical effects of the present invention are as follows:
The method of the present invention segments the Kinect color image into foreground and background with the running Gaussian average method and background subtraction, and fills the cavities of the foreground image and of the background image separately. Compared with traditional Kinect depth image cavity-filling methods, the present invention preserves the integrity of the foreground in the depth image and improves the completeness and accuracy of the depth pixels at object edges after cavity filling.
The present invention builds a deepest depth image from every depth frame and uses it to fill the background cavities of the depth image. Compared with traditional Kinect depth image cavity-filling methods, object edges in the cavity-filled depth video become stable and the depth video is free of flicker; at the same time, the cavity-filling speed for the depth image is effectively improved.
The present invention adopts a cavity-filling technique that combines the Kinect depth image with the corresponding Kinect color image: the cavities of the deepest depth image are filled using the color similarity, in the color image, between the position of a depth pixel and its surrounding pixels. This improves the match between the edges of the depth image and the corresponding edges of the color image and yields a more accurate cavity-filling result.
Description of the drawings
Fig. 1 is a block diagram of the implementation of the method of the present invention for filling the cavities in a Kinect depth image in real time.
Fig. 2 shows the effect of filling the cavities of a Kinect depth image with the deepest-depth-image technique of the present invention: (a) the original depth map obtained by the Kinect; (b) the result of cavity filling with the deepest-depth-image technique.
Fig. 3 illustrates the marking of the cavities located at the foreground position of the current Kinect depth image after background cavity filling with the deepest depth image: (a) the original depth map obtained by the Kinect; (b) the result of cavity filling with the deepest-depth-image technique; (c) the image marking the cavities located at the foreground position of the current Kinect depth image.
Fig. 4 shows the result of cavity filling of an original Kinect depth map by the present invention: (a) the original depth map obtained by the Kinect; (b) the depth map after cavity filling with the present invention; (c) the transparent blend of the cavity-filled depth map and the color image.
Fig. 5 shows the result of cavity filling of an original Kinect depth map by the present invention: (a) the original depth map obtained by the Kinect; (b) the depth map after cavity filling with the present invention.
Fig. 6 compares cavity filling of an original Kinect depth map by the present invention and by the polygon filtering method: (a) the original depth map obtained by the Kinect; (b) the depth map after cavity filling with polygon filtering; (c) the depth map after cavity filling with the present invention.
Detailed description of the embodiments
The concrete implementation steps of the method of the present invention for filling the cavities in a Kinect depth image in real time, as shown in Fig. 1, are as follows:
Step 1: obtain the color image of the shooting scene and the corresponding depth image with the Kinect camera; extract the color image background with the running Gaussian average method to obtain the background image; then segment the Kinect color image with background subtraction to obtain the foreground image, and determine the positions of the foreground image and of the background image. The concrete steps are as follows:
1.1 Obtain the color image of the shooting scene and the corresponding depth image with the Kinect camera;
1.2 Extract the background of the Kinect color image with the running Gaussian average method to obtain the background image:

B_i(x, y) = α · I_i(x, y) + (1 - α) · B_{i-1}(x, y)   (1)

where (x, y) denotes the coordinate at row x, column y of the image, I_i(x, y) is the pixel value of the i-th color frame at (x, y), B_i(x, y) is the pixel value of the background image of the i-th frame at (x, y), and α is a parameter set according to the shooting environment;
1.3 Segment the Kinect color image with background subtraction to obtain the foreground image and determine the positions of the foreground image and the background image:

D_i(x, y) = |I_i(x, y) - B_i(x, y)|   (2)

where D_i(x, y) is the difference between the pixel of the i-th color frame and the pixel of the background image at (x, y);
1.4 When this difference exceeds a set threshold T, the position is judged to belong to the foreground image, otherwise to the background image:

F_i(x, y) = 1 if D_i(x, y) > T, and F_i(x, y) = 0 otherwise   (3)

where F_i(x, y) is the foreground/background position obtained by segmenting the i-th color frame: F_i(x, y) = 1 means that (x, y) is a foreground position, and F_i(x, y) = 0 means that (x, y) is a background position;
Step 2: build the deepest depth image (DDI) from the Kinect depth images; then fill the cavities in the deepest depth image, generating the cavity-filled deepest depth image. The concrete steps are as follows:
2.1 Initialization of the deepest depth image (DDI): the pixel values of the first depth image of the Kinect depth image sequence are taken as the initial depth values of the DDI;
2.2 From the second depth image on, each newly acquired depth image is used to update the initial depth values of the DDI according to the following deepest-value update rule, which determines the non-cavity pixel values and the cavity pixel values of the DDI:
2.2.1 When the pixel value DDI(x, y) of the deepest depth image at (x, y) is a non-cavity value:
judge whether DDI(x, y) satisfies the following calculating formula (4):
(4)
if it does, the pixel value of the deepest depth image at (x, y) is determined as a non-cavity pixel value;
where D_i(x, y) is the pixel value of the i-th depth frame at (x, y), DDI(x, y) is the pixel value of the deepest depth image at (x, y), and M(x, y) is the pixel value occurring most frequently at (x, y) in the acquired depth images;
2.2.2 When the pixel value DDI(x, y) of the deepest depth image at (x, y) is a cavity value:
judge whether DDI(x, y) satisfies the following calculating formula (5):
(5)
if it does, the pixel value of the deepest depth image at (x, y) is determined as a cavity pixel value;
where DDI(x, y) is the pixel value of the deepest depth image at (x, y), and M(x, y) is the pixel value occurring most frequently at (x, y) in the acquired depth images;
2.3 Set up a search window of w × w pixels centered on a cavity pixel p of the deepest depth image. Denote a non-cavity pixel in the search window by q and its color value by c(q), and denote the color value of the cavity pixel p by c(p). When the error between the color value of the cavity pixel and that of a non-cavity pixel in the search window is minimal, i.e. when |c(p) - c(q)| takes its minimum, the color value of the non-cavity pixel q is regarded as close to the color value of the cavity pixel p;
Count the non-cavity pixels in the search window whose color values are close to the color value of the cavity pixel; let their number after counting be n. Compute the weighted average pixel value of these close pixels and fill the corresponding cavity of the deepest depth image with it:

DDI(p) = Σ_q w_q · DDI(q)   (6)

with the weights w_q in formula (6) given by

w_q = n_q / n   (7)

where p is a cavity pixel of the deepest depth image of the i-th frame, q is a non-cavity pixel of the search window, c(·) is the pixel at the corresponding position of the color image, the sum ranges over the search window, and w_q is the ratio of the number n_q of pixels in the window whose color values are close to that of the non-cavity pixel q to the number n of all pixels whose color values are close to that of the cavity pixel;
Step 3: using the background image position obtained in step 1 as the background position of the depth image, replace the background pixels of the depth image with the pixels of the cavity-filled deepest depth image, thereby filling the cavities of the depth image background, as shown in Fig. 2(b) and Fig. 3(b); Fig. 2(b) shows the result of filling the cavities in the background of Fig. 2(a), and Fig. 3(b) shows the result of filling the cavities in the background of Fig. 3(a);
Step 4: using the foreground image position obtained in step 1, mark the cavities located in the foreground of the Kinect depth image, and then fill the marked cavities in real time, as follows:
Step 4.1: using the foreground image position obtained in step 1, detect whether cavities exist in the foreground of the current Kinect depth image before depth-image cavity filling; if cavities exist, mark the cavities of the foreground of the depth image;
Step 4.2: for the pixels that belong to the foreground position and are non-cavity pixels of the depth image, count their number m and their pixel values, and obtain their average pixel value; fill the marked cavities with this average pixel value, as shown in Fig. 3(c), where the marks are drawn with red pixel values:

D_mark = (1/m) · Σ D_i(x, y), the sum taken over the m non-cavity pixels (x, y) of the foreground image   (8)

where D_mark is the depth value assigned to the marks, the foreground image position is that obtained in step 1, and m is the number of non-cavity pixels of the foreground image.
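Putting the four steps together, one per-frame pass might look like the following simplified sketch; all parameter values are assumed, and the one-line DDI update stands in for formulas (4)-(5):

```python
import numpy as np

ALPHA, T, HOLE = 0.05, 30.0, 10  # learning rate, foreground threshold, cavity threshold (assumed)

def process_frame(color, depth, background, ddi):
    """One pass over steps 1-4 for a single (grayscale) frame pair."""
    # Step 1: background model and foreground mask from the colour image
    background = ALPHA * color + (1 - ALPHA) * background          # (1)
    fg = np.abs(color - background) > T                            # (2), (3)
    # Step 2: update the deepest depth image with the new valid depths
    ddi = np.where(depth >= HOLE, np.maximum(ddi, depth), ddi)
    # Step 3: background cavities take the DDI value
    out = np.where(~fg & (depth < HOLE), ddi, depth).astype(float)
    # Step 4: foreground cavities take the mean valid foreground depth
    valid_fg = fg & (depth >= HOLE)
    if valid_fg.any():
        out[fg & (depth < HOLE)] = depth[valid_fg].mean()          # (8)
    return out, background, ddi
```

Because the DDI accumulates across frames, the background fill becomes stable over time, which is the mechanism behind the flicker-free depth video claimed above.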
The method of the invention was used to fill the cavities of the depth images in Fig. 4(a), Fig. 5(a) and Fig. 6(a); the filling results are shown in Fig. 4(b), Fig. 5(b) and Fig. 6(c). As can be seen from the blend of the color image and the depth image in Fig. 4(c), the present invention preserves the integrity of the edge regions of the depth map well and improves the match between the edge regions of the depth image and those of the color image, while filling the cavities of smooth regions of the depth image with high quality and high accuracy.
Fig. 6(b) shows the image after the cavities of the Kinect depth image have been filled with the prior-art polygon filtering method.
As shown in Fig. 6(b), in the image obtained by filling the Kinect depth image cavities with polygon filtering, the edge regions of the person and of the objects match the corresponding color image very poorly, some cavities of the depth image cannot be filled correctly, the whole image is blurred, and the cavity-filling quality is low. In addition, the polygon filtering method has high complexity and low filling speed: for the depth image of Fig. 6(a), polygon filtering needs about 195 ms to fill the cavities, whereas the present invention needs only about 100 ms.
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art can make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (4)

1. A method for filling the cavities in a Kinect depth image in real time, characterized in that the method comprises the following steps:
Step 1: obtain a color image of the shooting scene and the corresponding depth image with a Kinect camera; extract the background of the color image with the running Gaussian average method to obtain a background image; then segment the Kinect color image with background subtraction to obtain the foreground image, and determine the positions of the foreground image and of the background image;
Step 2: build the deepest depth image (DDI) from the Kinect depth images; then fill the cavities in the deepest depth image, generating a cavity-filled deepest depth image;
Step 3: using the background image position obtained in step 1 as the background position of the depth image, replace the background pixels of the depth image with the pixels of the cavity-filled deepest depth image, thereby filling the cavities of the depth image background;
Step 4: using the foreground image position obtained in step 1, mark the cavities in the foreground of the Kinect depth image, and then fill the marked cavities in real time.
2. The method for filling the cavities in a Kinect depth image in real time according to claim 1, characterized in that the operations of step 1 — obtaining the colour image of the shot scene and its corresponding depth image with the Kinect camera, extracting the colour-image background with the running Gaussian average method to obtain the background image, then segmenting the Kinect colour image by background subtraction to obtain the foreground image, and determining the positions of the foreground image and of the background image — comprise the following sub-steps:
1.1 Obtain a colour image of the shot scene and its corresponding depth image with a Kinect camera.
1.2 Extract the Kinect colour-image background with the running Gaussian average method to obtain the background image:

$B_i(x, y) = \alpha I_i(x, y) + (1 - \alpha) B_{i-1}(x, y)$ (1)

where (x, y) denotes the coordinate at the x-th row and y-th column of the colour image, $I_i(x, y)$ is the pixel value of the i-th colour frame at (x, y), $B_i(x, y)$ is the pixel value of the i-th background image at (x, y), and $\alpha$ is a parameter set according to the shooting environment.
1.3 Segment the Kinect colour image by background subtraction to obtain the foreground image and determine the positions of the foreground image and of the background image:

$D_i(x, y) = |I_i(x, y) - B_i(x, y)|$ (2)

where $D_i(x, y)$ is the difference between the pixel of the i-th colour frame at (x, y) and the pixel of the background image at (x, y), $I_i(x, y)$ is the pixel value of the i-th colour frame at (x, y), and $B_i(x, y)$ is the pixel value of the i-th background image at (x, y).
1.4 When this difference is greater than a set threshold T, the position is judged to belong to the foreground image, otherwise to the background image:

$F_i(x, y) = \begin{cases} 1, & D_i(x, y) > T \\ 0, & D_i(x, y) \le T \end{cases}$ (3)

where $F_i(x, y)$ is the foreground/background label obtained by segmenting the i-th colour frame: $F_i(x, y) = 1$ indicates that (x, y) is a foreground position, and $F_i(x, y) = 0$ indicates that it is a background position.
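Sub-steps 1.2–1.4 can be sketched in Python as follows. This is a minimal dependency-free illustration, not the patented implementation; `alpha` and `threshold` stand for the environment-dependent parameters $\alpha$ and T of formulas (1) and (3), with illustrative values.

```python
# Minimal sketch of formulas (1)-(3): running Gaussian average background
# modelling followed by background-subtraction segmentation. Images are
# nested lists of grey values; function and parameter names are
# illustrative, not taken from the patent.

def step1_segment(frames, alpha=0.05, threshold=25.0):
    """Return (background, foreground_mask) for a frame sequence."""
    # Formula (1): B_i = alpha * I_i + (1 - alpha) * B_{i-1}
    background = [row[:] for row in frames[0]]       # B_0 := first frame
    for frame in frames[1:]:
        background = [
            [alpha * f + (1.0 - alpha) * b for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)
        ]
    # Formulas (2)-(3): D_i = |I_i - B_i|; foreground where D_i > T
    last = frames[-1]
    mask = [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(last, background)
    ]
    return background, mask
```

With `alpha = 0.5` and the two frames `[[10, 10]]` and `[[10, 200]]`, the background becomes `[[10.0, 105.0]]` and only the second pixel, whose difference 95 exceeds T = 25, is marked as foreground.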
3. The method for filling the cavities in a Kinect depth image in real time according to claim 1, characterized in that the operations of step 2 — building the deepest depth image (DDI) from the Kinect depth images, then filling the cavities in this deepest depth image to generate the cavity-filled deepest depth image — comprise the following sub-steps:
2.1 Initialization of the deepest depth image (DDI): take the pixel values of the first depth image of the Kinect depth-image sequence as the initial depth values of the DDI.
2.2 From the second depth image onward, each newly acquired depth image updates the DDI according to the following deepest-depth-value update rule, which determines the non-cavity pixel values and cavity pixel values of the DDI, specifically:
The deepest-depth-value update rule is as follows:
2.2.1 When the pixel value $D(x, y)$ of the DDI at (x, y) is a non-cavity value, judge whether it satisfies formula (4):

$d_i(x, y) = d_{mode}(x, y) \neq 0$ (4)

If formula (4) is satisfied, the pixel value of the DDI at (x, y) is determined to be a non-cavity value, where $d_i(x, y)$ is the pixel value of the i-th depth frame at (x, y), $D(x, y)$ is the pixel value of the DDI at (x, y), and $d_{mode}(x, y)$ is the most frequently occurring pixel value at (x, y) among the acquired depth images.
2.2.2 When the pixel value $D(x, y)$ of the DDI at (x, y) is a cavity value, judge whether it satisfies formula (5):

$D(x, y) = d_{mode}(x, y) = 0$ (5)

If formula (5) is satisfied, the pixel value of the DDI at (x, y) is determined to be a cavity value, where $D(x, y)$ is the pixel value of the DDI at (x, y) and $d_{mode}(x, y)$ is the most frequently occurring pixel value at (x, y) among the acquired depth images.
2.3 Centre a search window of size k×k pixels on a cavity pixel p of the deepest depth image. Denote a non-cavity pixel inside the window by q, its colour value by C(q), and the colour value of the cavity pixel p by C(p). When the error |C(p) − C(q)| between the colour value of the cavity pixel and that of a non-cavity pixel in the search window is minimal, the colour of the non-cavity pixel q is regarded as close to the colour of the cavity pixel p.
Count the non-cavity pixels in the search window whose colour is close to that of the cavity pixel; let their number after counting be n. Compute the weighted mean pixel value of these close pixels and fill the corresponding cavity of the deepest depth image with it:

$d(p) = \sum_k w_k d_k$ (6)

where the weights of formula (6) are given by:

$w_k = n_k / n$ (7)

where p is a cavity pixel of the i-th deepest depth image, q a non-cavity pixel, C(·) the value in the corresponding colour image, and the sum runs over the distinct depth values $d_k$ taken by the colour-close pixels in the search window; $n_k$ is the number of colour-close pixels whose value is $d_k$, so $w_k$ is the ratio of the pixels with value $d_k$ to all colour-close pixels in the window.
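The window search and count-weighted fill of sub-step 2.3 (formulas (6)-(7)) can be sketched as follows. This is a hedged illustration under assumptions: the fixed `color_tol` closeness test stands in for the claim's minimum-colour-error criterion, colour is a single grey value, and all names are illustrative rather than the patent's code.

```python
# Sketch of formulas (6)-(7): fill a cavity pixel of the deepest depth
# image (DDI) from colour-similar non-cavity pixels in a search window.
from collections import Counter

def fill_ddi_cavity(ddi, color, y, x, win=2, color_tol=10):
    """Replace cavity ddi[y][x] (value 0) with the count-weighted mean."""
    target = color[y][x]
    close = []                      # depths of colour-close non-cavity pixels
    h, w = len(ddi), len(ddi[0])
    for yy in range(max(0, y - win), min(h, y + win + 1)):
        for xx in range(max(0, x - win), min(w, x + win + 1)):
            if ddi[yy][xx] != 0 and abs(color[yy][xx] - target) <= color_tol:
                close.append(ddi[yy][xx])
    if not close:
        return ddi[y][x]            # nothing suitable; leave the cavity
    n = len(close)
    counts = Counter(close)         # n_k for each distinct depth value d_k
    # Formulas (6)-(7): sum_k (n_k / n) * d_k, a count-weighted mean
    return sum((n_k / n) * d_k for d_k, n_k in counts.items())
```

For a 2x2 DDI [[0, 8], [8, 4]] with colours [[100, 101], [102, 200]], a 3x3 window (`win=1`) around the cavity at (0, 0) finds two colour-close pixels of depth 8 and excludes the colour-distant depth 4, so the cavity is filled with 8.0.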
4. The method for filling the cavities in a Kinect depth image in real time according to claim 1, characterized in that the operations of step 4 — using the foreground-image positions obtained in step 1 to mark the cavities present at the foreground positions of the Kinect depth image, then filling the marked cavities in real time — are specifically:
Step 4.1: using the foreground-image positions obtained in step 1, detect whether cavities exist in the foreground of the current Kinect depth image before filling; if cavities exist, mark the cavities of the depth-image foreground.
Step 4.2: for the pixels that lie at foreground positions and are not cavities in the depth image, count their number m and their pixel values, and obtain the average pixel value of these pixels; fill the marked cavities with this average pixel value:

$d_{fill} = \frac{1}{m} \sum_{(x, y) \in F,\; d_i(x, y) \neq 0} d_i(x, y)$ (8)

where $d_{fill}$ is the depth value assigned to the marked cavities, F is the set of foreground-image positions, and m is the number of non-cavity pixels in the foreground image.
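Formula (8) amounts to replacing each marked foreground cavity with the mean of the m non-cavity foreground depth pixels. A minimal sketch, with illustrative names, where 0 marks a cavity and mask value 1 a foreground position:

```python
# Sketch of formula (8): marked foreground cavities receive the average
# depth of the m non-cavity pixels at foreground positions.

def fill_foreground_cavities(depth, fg_mask):
    fg_vals = [depth[y][x]
               for y, row in enumerate(fg_mask)
               for x, m in enumerate(row)
               if m == 1 and depth[y][x] != 0]
    if not fg_vals:
        return [row[:] for row in depth]           # no data to average
    mean = sum(fg_vals) / len(fg_vals)             # formula (8)
    return [[mean if (m == 1 and d == 0) else d
             for d, m in zip(drow, mrow)]
            for drow, mrow in zip(depth, fg_mask)]
```

For depth [[0, 6], [4, 0]] with foreground mask [[1, 1], [1, 0]], the non-cavity foreground depths are 6 and 4 (m = 2), so the foreground cavity at (0, 0) becomes 5.0 while the background cavity at (1, 1) is left untouched.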
CN201410327220.5A 2014-07-10 2014-07-10 A kind of method that cavity in Kinect depth image carries out real-time filling Active CN104299220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410327220.5A CN104299220B (en) 2014-07-10 2014-07-10 A kind of method that cavity in Kinect depth image carries out real-time filling


Publications (2)

Publication Number Publication Date
CN104299220A true CN104299220A (en) 2015-01-21
CN104299220B CN104299220B (en) 2017-05-31

Family

ID=52318942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410327220.5A Active CN104299220B (en) 2014-07-10 2014-07-10 A kind of method that cavity in Kinect depth image carries out real-time filling

Country Status (1)

Country Link
CN (1) CN104299220B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778673A (en) * 2015-04-23 2015-07-15 上海师范大学 Improved depth image enhancing algorithm based on Gaussian mixed model
CN105701820A (en) * 2016-01-14 2016-06-22 上海大学 Point cloud registration method based on matching area
CN106651871A (en) * 2016-11-18 2017-05-10 华东师范大学 Automatic filling method for cavities in depth image
CN106846324A (en) * 2017-01-16 2017-06-13 河海大学常州校区 A kind of irregular object height measurement method based on Kinect
CN107450896A (en) * 2016-06-01 2017-12-08 上海东方传媒技术有限公司 A kind of method using OpenCV display images
CN107665493A (en) * 2017-08-29 2018-02-06 成都西纬科技有限公司 A kind of image processing method and system based on super-pixel segmentation
CN108399610A (en) * 2018-03-20 2018-08-14 上海应用技术大学 A kind of depth image enhancement method of fusion RGB image information
CN111080640A (en) * 2019-12-30 2020-04-28 广东博智林机器人有限公司 Hole detection method, device, equipment and medium
CN111107337A (en) * 2018-10-29 2020-05-05 曜科智能科技(上海)有限公司 Depth information complementing method and device, monitoring system and storage medium
CN112950484A (en) * 2019-12-11 2021-06-11 鸣医(上海)生物科技有限公司 Method for removing color pollution of photographic image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120001902A1 (en) * 2010-07-02 2012-01-05 Samsung Electronics Co., Ltd. Apparatus and method for bidirectionally inpainting occlusion area based on predicted volume
US20120306876A1 (en) * 2011-06-06 2012-12-06 Microsoft Corporation Generating computer models of 3d objects
CN103455984A (en) * 2013-09-02 2013-12-18 清华大学深圳研究生院 Method and device for acquiring Kinect depth image
CN103561258A (en) * 2013-09-25 2014-02-05 同济大学 Kinect depth video spatio-temporal union restoration method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KANG XU et al.: "A Method of Hole-filling for the Depth Map Generated by Kinect with Moving Objects Detection", IEEE International Symposium on Broadband Multimedia Systems and Broadcasting *
MASSIMO CAMPLANI et al.: "Efficient Spatio-Temporal Hole Filling Strategy for Kinect Depth Maps", Three-Dimensional Image Processing (3DIP) and Applications II *
NA-EUN YANG et al.: "Depth Hole Filling Using the Depth Distribution of Neighboring Regions of Depth Holes in the Kinect Sensor", 2012 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC) *
WANG KUI et al.: "Real-time depth extraction and multi-view rendering algorithm based on Kinect", Journal of Optoelectronics·Laser *




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant