CN102831582A - Method for enhancing depth image of Microsoft somatosensory device - Google Patents

Method for enhancing depth image of Microsoft somatosensory device

Info

Publication number
CN102831582A
Authority
CN
China
Prior art keywords
depth image
pixel
depth
edge
pixels
Prior art date
Legal status
Granted
Application number
CN2012102653728A
Other languages
Chinese (zh)
Other versions
CN102831582B (en)
Inventor
李树涛
陈理
卢婷
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN201210265372.8A
Publication of CN102831582A
Application granted
Publication of CN102831582B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for enhancing the depth image of a Microsoft somatosensory device. The method comprises the following steps: performing edge detection on the color image and the depth image respectively; taking the two edge maps as input and using region growing to obtain the region where erroneous pixels are located; removing the depth values of the erroneous pixels; building smooth regions around the invalid pixels by region growing; estimating the depth values of the invalid pixels inside the smooth regions by bilateral filtering; and estimating the depth values of the remaining invalid pixels by bilateral filtering to obtain the enhanced depth image. The invention is the first to point out that the mismatch between the edges of the depth image and those of the corresponding color image is caused by erroneous pixels, and it provides a method for detecting them. The method effectively fills the holes of the Kinect depth image, solves the edge mismatch problem well, and greatly improves the quality of the Kinect depth image.

Description

Method for enhancing the depth image of a Microsoft somatosensory device
Technical field
The present invention relates to a depth image enhancement method, and more specifically to a method for enhancing the depth image of a Microsoft somatosensory device (Kinect).
Background art
Kinect is a low-cost depth image acquisition device released by Microsoft. It simultaneously produces a 640 × 480 color image and depth image at a rate of 30 fps. Because of its low cost and real-time performance, Kinect was widely adopted in interactive settings such as hospitals, libraries and conference halls soon after its release.
Owing to limitations of its measurement principle, the Kinect depth image contains holes near object edges and on surfaces with poor reflectivity, and the edges of the depth image often do not match the edges of the corresponding color image.
To solve the hole-filling problem, researchers have tried a number of completion methods. Traditional approaches fall into two categories: pixel-based methods and point-cloud-based methods. Pixel-based methods treat the depth image as an ordinary grayscale image and the holes as regions to be repaired, which turns hole filling into a classical image inpainting problem. These methods are mainly guided by the color information and estimate the depth values of invalid pixels through interpolation, fast inpainting, confidence propagation and similar image repair techniques. However, because the edges of the depth image do not match the edges of the color image, the depth information around object edges is unreliable and the estimated depth values are often inaccurate.
Point-cloud-based methods treat the depth image as data describing object surfaces, so hole filling becomes a surface completion problem. These methods first convert the depth data into a point cloud, reconstruct a 3D surface from it, and then find the image patches that best match the hole according to characteristics of the surface structure (such as shape similarity or the relations between surface normals). They relieve the inaccuracy of the first category of methods but do not solve it completely, and the required 3D surface reconstruction adds unnecessary computation for applications that do not need 3D reconstruction.
For the mismatch between depth edges and color edges, existing methods mainly mine the information of the depth image sequence and obtain stable depth edges by filtering over a long time window. Such methods must perform motion estimation on adjacent frames; because of image noise and other factors, the motion estimation over the sequence is quite inaccurate, and the computational cost is also high.
Summary of the invention
To solve the above problems of the Kinect depth image, the invention provides a method for enhancing the depth image of a Microsoft somatosensory device. The method can serve as a pre-processing step for Kinect depth data and can be widely applied in various real-time Kinect systems.
The technical solution of the present invention comprises the following steps:
1) Perform edge detection on the Kinect color image and depth image respectively to obtain the color edge map and the depth edge map;
2) With the two edge maps as input, use region growing to obtain the region between the two edge maps, i.e. the region where the erroneous pixels are located;
3) Remove the depth values of the erroneous pixels;
4) Build smooth regions around the invalid pixels by region growing;
5) Estimate the depth values of the invalid pixels inside the smooth regions by bilateral filtering;
6) Estimate the depth values of the remaining invalid pixels by bilateral filtering, obtaining a hole-free depth image whose edges are consistent with the color edges.
In the above depth image enhancement method, step 1) is:
Convert the color image and the depth image acquired from Kinect into 8-bit grayscale images, then apply the Canny edge detection algorithm to each of the two 8-bit grayscale images. The upper and lower thresholds of the Canny detector are 200 and 100, respectively.
In the above depth image enhancement method, step 2) comprises the following steps:
a) Build regions from the color edge map and the depth edge map respectively by region growing, forming mask image mask1 and mask image mask2.
The region is built from the depth edge map as follows: all pixels on the depth edges are used as seeds for region growing, which stops when a color edge is reached or a specified distance is exceeded.
The region is built from the color edge map as follows: all pixels on the color edges are used as seeds for region growing, which stops when a depth edge is reached or a specified distance is exceeded.
b) Apply a morphological dilation to the depth edge map to obtain mask image mask3.
c) Compute the pixel-wise AND of mask image mask1 and mask image mask2 to obtain mask image mask4, then compute the pixel-wise OR of mask image mask4 and mask image mask3 to obtain mask image mask5, which is the error-pixel detection result; its non-zero pixels denote erroneous pixels.
In the above depth image enhancement method, step 4) is: for each invalid pixel P_i, perform region growing within a 5 × 5 window centered on P_i and build a smooth region around it.
In the above depth image enhancement method, the bilateral filtering in step 5) is:

$$D_i^{E} = \frac{\sum_{j \in \Omega,\, D_j \neq 0,\, \|C_i - C_j\| < T} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0,\, \|C_i - C_j\| < T} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)} \qquad (1)$$

where Ω is the smooth region around P_i, D_i^E is the estimated depth value of pixel P_i, D_j is the depth value of pixel P_j, and G_s and G_c are Gaussian functions with mean 0 and variances 1.5 and 3, respectively. ‖i − j‖ is the Euclidean distance between pixels P_i and P_j in the image plane, ‖C_i − C_j‖ is the Euclidean distance between P_i and P_j in color space, and T is a given threshold whose value is 40. An estimate is adopted only when the number of pixels participating in the computation reaches 3.
The bilateral filtering is repeated until the smooth regions contain no invalid pixels, or none of the estimates of the remaining invalid pixels are adopted.
In the above depth image enhancement method, the bilateral filtering applied to the remaining invalid pixels in step 6) is:

$$D_i^{E} = \frac{\sum_{j \in \Omega,\, D_j \neq 0} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)} \qquad (2)$$

where P_i is an invalid pixel outside the smooth regions, i.e. a remaining invalid pixel; Ω is a 5 × 5 neighborhood of P_i; D_i^E is the estimated depth value of P_i; D_j is the depth value of pixel P_j; G_s and G_c are Gaussian functions with mean 0 and variances 1.5 and 3, respectively; ‖i − j‖ is the Euclidean distance between P_i and P_j; ‖C_i − C_j‖ is the Euclidean distance between P_i and P_j in color space; T is a given threshold whose value is 40. An estimate is adopted only when the number of pixels participating in the computation reaches 3. The bilateral filtering is repeated until there are no remaining invalid pixels, or none of the estimates of the remaining invalid pixels are adopted.
Owing to the above technical solution, the technical effects of the present invention are as follows. By removing the erroneous pixels, the invention avoids estimating the depth values of invalid pixels from wrong depth values, which makes the depth estimates more accurate. In addition, because the erroneous pixels are removed, the edges of the depth image match the edges of the corresponding color image. To estimate the depth values of invalid pixels more accurately, smooth regions are built around them by region growing, and only the valid pixels inside a smooth region are used for estimation, which minimizes the error of the estimated depth values and yields a complete, highly accurate depth image. The invention effectively fills the holes of the Kinect depth image, solves the edge mismatch problem well, and greatly improves the quality of the Kinect depth image, which is of great significance and practical value for subsequent processing of the depth image.
The present invention is further described below with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a process flow diagram of the present invention.
Fig. 2 is a schematic diagram of the erroneous pixel region in the embodiment of the invention.
Fig. 3 is a schematic diagram of depth image hole filling in the embodiment of the invention.
Fig. 4 shows an image enhancement example, where (a) is the image obtained by the standard bilateral filtering method and (b) is the image obtained by the method of the invention.
Embodiment
Referring to Fig. 1, which is the flowchart of the present invention, the concrete implementation steps are as follows.
1. Perform edge detection on the Kinect color image and depth image respectively to obtain the color edge map and the depth edge map.
Convert the color image and the depth image acquired from Kinect into 8-bit grayscale images, then perform edge detection on each of the two 8-bit grayscale images to obtain the color edge map and the depth edge map. The Canny edge detection algorithm is used (for implementation details see John Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-714, 1986). The upper and lower thresholds of the Canny detector are 200 and 100, respectively.
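A hedged illustration of this step in Python with OpenCV and NumPy (libraries the patent does not prescribe); the function name and the min-max scaling of the raw depth map to 8 bits are assumptions:

```python
import cv2
import numpy as np

def detect_edges(color_bgr: np.ndarray, depth_raw: np.ndarray):
    """Compute color and depth edge maps with Canny (thresholds 100/200).

    color_bgr: 640x480 BGR image from the Kinect color stream.
    depth_raw: 640x480 depth map (e.g. 16-bit); 0 marks holes.
    """
    # 8-bit grayscale version of the color image.
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)

    # Scale the depth map into 8 bits before edge detection (assumed scaling).
    depth_8u = cv2.normalize(depth_raw, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)

    # Canny with lower threshold 100 and upper threshold 200, as in step 1.
    color_edges = cv2.Canny(gray, 100, 200)
    depth_edges = cv2.Canny(depth_8u, 100, 200)
    return color_edges, depth_edges
```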
2. With the two edge maps as input, use region growing to obtain the region between the two edge maps, i.e. the erroneous pixel region shown in Fig. 2. This step specifically comprises:
1) Build regions from the color edge map and the depth edge map respectively by region growing, forming mask image mask1 and mask image mask2.
The region is built from the depth edge map as follows: for each pixel on the depth edges, use that pixel as a seed for region growing, which stops when a color edge is reached or the window border is reached. The concrete steps are:
Step 1: for each pixel on the depth edge map, if it does not lie on a color edge, put it into the set A of pixels to be examined;
Step 2: for each pixel P in A, examine the pixels in its neighborhood; if an examined pixel does not lie on a color edge and lies within the 9 × 9 examination window centered on P, put it into the set A; then remove P from A. Repeat until A is empty.
The region is built from the color edge map in the same way: all pixels on the color edges are used as seeds for region growing, which stops when a depth edge is reached or the specified distance is exceeded.
2) Apply a morphological dilation with a 3 × 3 structuring element to the depth edge map to obtain mask image mask3.
3) Compute the pixel-wise AND of mask image mask1 and mask image mask2 to obtain mask image mask4, then compute the pixel-wise OR of mask image mask4 and mask image mask3 to obtain mask image mask5, which is the error-pixel detection result; its non-zero pixels denote erroneous pixels. A hedged code sketch of these sub-steps is given below.
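The following sketch, continuing under the same assumptions, illustrates one plausible reading of sub-steps 1)-3); the helper names, the 4-connectivity of the growing, and the interpretation that the 9 × 9 window is anchored at the originating seed are assumptions the patent does not fix:

```python
from collections import deque
import cv2
import numpy as np

def grow_edge_region(seed_edges, stop_edges, radius=4):
    """Grow a mask from the seed edge pixels, stopping at the other edge map
    or at the border of the 9x9 window (radius 4) around each seed."""
    h, w = seed_edges.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    queue = deque()

    # Step 1: seeds are edge pixels that do not lie on a stopping edge.
    for y, x in zip(*np.nonzero(seed_edges)):
        if stop_edges[y, x] == 0:
            mask[y, x] = 1
            queue.append((y, x, y, x))          # pixel plus its seed

    # Step 2: expand over the neighborhood (4-connectivity assumed),
    # staying off the stopping edges and inside the seed's window.
    while queue:
        y, x, sy, sx = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                continue
            if mask[ny, nx] or stop_edges[ny, nx]:
                continue
            if abs(ny - sy) > radius or abs(nx - sx) > radius:
                continue
            mask[ny, nx] = 1
            queue.append((ny, nx, sy, sx))
    return mask

def detect_error_pixels(color_edges, depth_edges):
    """Return mask5; its non-zero pixels mark erroneous depth pixels."""
    mask1 = grow_edge_region(color_edges, depth_edges)   # grown from color edges
    mask2 = grow_edge_region(depth_edges, color_edges)   # grown from depth edges
    # mask3: depth edges dilated with a 3x3 structuring element.
    mask3 = cv2.dilate((depth_edges > 0).astype(np.uint8),
                       np.ones((3, 3), np.uint8))
    mask4 = np.logical_and(mask1, mask2)                  # pixel-wise AND
    mask5 = np.logical_or(mask4, mask3).astype(np.uint8)  # pixel-wise OR
    return mask5
```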
3. Remove the depth values of the erroneous pixels, turning them into invalid pixels.
4. Build smooth regions around the invalid pixels by region growing.
As shown in Fig. 3, for each invalid pixel P_i, perform region growing within the 5 × 5 window centered on P_i and build a smooth region around it; a hedged sketch of this step is given below.
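The patent does not spell out the growing criterion inside the 5 × 5 window, so the sketch below is only a simplified stand-in: instead of true region growing it keeps the pixels of the window whose depth lies close to the window's median depth. The helper name and the tolerance `depth_tol` are hypothetical:

```python
import numpy as np

def build_smooth_region(depth, y0, x0, depth_tol=30.0):
    """Collect a smooth region Omega around the invalid pixel (y0, x0)
    inside the 5x5 window centered on it (simplified reading of step 4).

    Valid pixels are kept only when their depth is close to the median depth
    of the window, so pixels on the far side of a depth discontinuity are
    excluded; invalid pixels (depth 0) are carried along for later filling."""
    h, w = depth.shape
    window = [(y, x)
              for y in range(max(0, y0 - 2), min(h, y0 + 3))
              for x in range(max(0, x0 - 2), min(w, x0 + 3))]

    valid = [depth[y, x] for (y, x) in window if depth[y, x] > 0]
    if not valid:
        return window                      # no valid depth to anchor on
    median = float(np.median(valid))

    return [(y, x) for (y, x) in window
            if depth[y, x] == 0 or abs(float(depth[y, x]) - median) < depth_tol]
```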
5. Estimate the depth values of the invalid pixels inside the smooth regions by bilateral filtering.
As shown in Fig. 3, the depth value of such an invalid pixel is estimated with the following bilateral filter:

$$D_i^{E} = \frac{\sum_{j \in \Omega,\, D_j \neq 0,\, \|C_i - C_j\| < T} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0,\, \|C_i - C_j\| < T} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)} \qquad (1)$$

where Ω is the smooth region around P_i, D_i^E is the estimated depth value of pixel P_i, D_j is the depth value of pixel P_j, and G_s and G_c are Gaussian functions with mean 0 and variances 1.5 and 3, respectively. ‖i − j‖ is the Euclidean distance between pixels P_i and P_j in the image plane, ‖C_i − C_j‖ is the Euclidean distance between P_i and P_j in color space, and T is a given threshold whose value is 40.
To estimate the depth of an invalid pixel accurately, an estimate is adopted only when the number of pixels participating in the computation reaches 3. To fill larger holes, the bilateral filtering is applied iteratively until the smooth regions contain no invalid pixels, or none of the estimates of the remaining invalid pixels are adopted. A hedged implementation sketch is given below.
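The sketch below implements the weighted estimate of formula (1) (and, with the color gate disabled, formula (2)). The values 1.5 and 3 are read as variances, as stated, though the patent does not disambiguate variances from standard deviations; the function and constant names are hypothetical:

```python
import numpy as np

SIGMA_S2 = 1.5   # spatial "variance" stated in the patent
SIGMA_C2 = 3.0   # color "variance" stated in the patent
T = 40.0         # color-distance threshold of formula (1)
MIN_SUPPORT = 3  # minimum number of contributing pixels

def estimate_depth(depth, color, y0, x0, region, use_color_gate=True):
    """Weighted estimate of formula (1) (or (2) when use_color_gate=False)
    for the invalid pixel at (y0, x0); returns None when fewer than
    MIN_SUPPORT pixels contribute, i.e. the estimate is not adopted."""
    c0 = color[y0, x0].astype(np.float64)
    num = den = 0.0
    support = 0
    for (y, x) in region:
        d = float(depth[y, x])
        if d == 0.0:                       # skip invalid pixels (D_j != 0)
            continue
        dc = float(np.linalg.norm(color[y, x].astype(np.float64) - c0))
        if use_color_gate and dc >= T:     # the |C_i - C_j| < T gate of (1)
            continue
        ds = float(np.hypot(y - y0, x - x0))
        w = np.exp(-ds * ds / (2 * SIGMA_S2)) * np.exp(-dc * dc / (2 * SIGMA_C2))
        num += w * d
        den += w
        support += 1
    if support < MIN_SUPPORT or den == 0.0:
        return None                        # estimate not adopted
    return num / den
```

In an outer loop one would sweep over the invalid pixels of each smooth region, write back every adopted estimate, and stop when a full sweep adopts none, matching the iteration described above.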
6. Estimate the depth values of the remaining invalid pixels by bilateral filtering, obtaining a hole-free depth image whose edges are consistent with the color edges.
As shown in Fig. 3, the following bilateral filter is used to estimate their depth values:

$$D_i^{E} = \frac{\sum_{j \in \Omega,\, D_j \neq 0} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)} \qquad (2)$$

where P_i is an invalid pixel outside the smooth regions, i.e. a remaining invalid pixel; Ω is a 5 × 5 neighborhood of P_i; D_i^E is the estimated depth value of P_i; D_j is the depth value of pixel P_j; G_s and G_c are Gaussian functions with mean 0 and variances 1.5 and 3, respectively; ‖i − j‖ is the Euclidean distance between P_i and P_j; ‖C_i − C_j‖ is the Euclidean distance between P_i and P_j in color space; T is a given threshold whose value is 40. An estimate is adopted only when the number of pixels participating in the computation reaches 3. The bilateral filtering is repeated until there are no remaining invalid pixels, or none of the estimates of the remaining invalid pixels are adopted; a sketch of this sweep follows.
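Under the same assumptions, this second stage can reuse the sketch above, dropping the color gate and taking Ω as the plain 5 × 5 neighborhood; `fill_remaining` is a hypothetical driver for the repeated sweep:

```python
import numpy as np

def fill_remaining(depth, color):
    """One pass of step 6: re-estimate remaining invalid pixels (depth == 0)
    with formula (2); the sweep repeats until no estimate is adopted."""
    changed = True
    while changed:
        changed = False
        for y, x in zip(*np.nonzero(depth == 0)):
            # Omega: the 5x5 neighborhood of the remaining invalid pixel.
            region = [(yy, xx)
                      for yy in range(max(0, y - 2), min(depth.shape[0], y + 3))
                      for xx in range(max(0, x - 2), min(depth.shape[1], x + 3))]
            est = estimate_depth(depth, color, y, x, region, use_color_gate=False)
            if est is not None:
                depth[y, x] = est
                changed = True
```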
Fig. 4 compares the method provided by the present invention with the standard bilateral filtering method. As can be seen from Fig. 4, the proposed method both fills the holes effectively and greatly strengthens the stability of the edges, so that the edges of the depth image match the edges of the color image well.

Claims (8)

1. A method for enhancing the depth image of a Microsoft somatosensory device, comprising the following steps:
1) performing edge detection on the Kinect color image and depth image respectively to obtain the color edge map and the depth edge map;
2) with the two edge maps as input, using region growing to obtain the region between the two edge maps, i.e. the region where the erroneous pixels are located;
3) removing the depth values of the erroneous pixels;
4) building smooth regions around the invalid pixels by region growing;
5) estimating the depth values of the invalid pixels inside the smooth regions by bilateral filtering;
6) estimating the depth values of the remaining invalid pixels by bilateral filtering, obtaining a hole-free depth image whose edges are consistent with the color edges.
2. The depth image enhancement method of claim 1, wherein step 1) is:
converting the color image and the depth image acquired from Kinect into 8-bit grayscale images, then applying the Canny edge detection algorithm to each of the two 8-bit grayscale images, wherein the upper and lower thresholds of the Canny detector are 200 and 100, respectively.
3. The depth image enhancement method of claim 1, wherein step 2) is:
a) building regions from the color edge map and the depth edge map respectively by region growing, forming mask image mask1 and mask image mask2;
b) applying a morphological dilation to the depth edge map to obtain mask image mask3;
c) computing the pixel-wise AND of mask image mask1 and mask image mask2 to obtain mask image mask4, then computing the pixel-wise OR of mask image mask4 and mask image mask3 to obtain mask image mask5, which is the error-pixel detection result; its non-zero pixels denote erroneous pixels.
4. The depth image enhancement method of claim 3, wherein the region is built from the depth edge map by using all pixels on the depth edges as seeds for region growing, which stops when a color edge is reached or a specified distance is exceeded.
5. The depth image enhancement method of claim 3, wherein the region is built from the color edge map by using all pixels on the color edges as seeds for region growing, which stops when a depth edge is reached or a specified distance is exceeded.
6. The depth image enhancement method of claim 1, wherein step 4) is: for each invalid pixel P_i, performing region growing within a 5 × 5 window centered on P_i and building a smooth region around it.
7. The depth image enhancement method of claim 1, wherein the bilateral filtering in step 5) is:

$$D_i^{E} = \frac{\sum_{j \in \Omega,\, D_j \neq 0,\, \|C_i - C_j\| < T} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0,\, \|C_i - C_j\| < T} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)} \qquad (1)$$

wherein Ω is the smooth region around P_i; D_i^E is the estimated depth value of pixel P_i; D_j is the depth value of pixel P_j; G_s and G_c are Gaussian functions with mean 0 and variances 1.5 and 3, respectively; ‖i − j‖ is the Euclidean distance between pixels P_i and P_j; ‖C_i − C_j‖ is the Euclidean distance between P_i and P_j in color space; T is a given threshold whose value is 40; an estimate is adopted only when the number of pixels participating in the computation reaches 3; and the bilateral filtering is repeated until the smooth regions contain no invalid pixels, or none of the estimates of the remaining invalid pixels are adopted.
8. The depth image enhancement method of claim 1, wherein the bilateral filtering applied to the remaining invalid pixels in step 6) is:

$$D_i^{E} = \frac{\sum_{j \in \Omega,\, D_j \neq 0} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)\, D_j}{\sum_{j \in \Omega,\, D_j \neq 0} G_s(\|i - j\|)\, G_c(\|C_i - C_j\|)} \qquad (2)$$

wherein P_i is an invalid pixel outside the smooth regions, i.e. a remaining invalid pixel; Ω is a 5 × 5 neighborhood of P_i; D_i^E is the estimated depth value of P_i; D_j is the depth value of pixel P_j; G_s and G_c are Gaussian functions with mean 0 and variances 1.5 and 3, respectively; ‖i − j‖ is the Euclidean distance between P_i and P_j; ‖C_i − C_j‖ is the Euclidean distance between P_i and P_j in color space; T is a given threshold whose value is 40; an estimate is adopted only when the number of pixels participating in the computation reaches 3; and the bilateral filtering is repeated until there are no remaining invalid pixels, or none of the estimates of the remaining invalid pixels are adopted.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210265372.8A CN102831582B (en) 2012-07-27 2012-07-27 Method for enhancing the depth image of a Microsoft somatosensory device


Publications (2)

Publication Number Publication Date
CN102831582A (en) 2012-12-19
CN102831582B CN102831582B (en) 2015-08-12

Family

ID=47334699




Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101938670A (en) * 2009-06-26 2011-01-05 Lg电子株式会社 Image display device and method of operation thereof
JP4670994B2 (en) * 2010-04-05 2011-04-13 オムロン株式会社 Color image processing method and image processing apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KANG XU et al.: "A Method of Hole-filling for the Depth Map Generated by Kinect with Moving Objects Detection", 2012 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), 29 June 2012, pages 1-5, XP032222275, DOI: 10.1109/BMSB.2012.6264232 *
MASSIMO et al.: "Efficient Spatio-Temporal Hole Filling Strategy for Kinect Depth Maps", Proceedings SPIE 8290, Three-Dimensional Image Processing (3DIP) and Applications II, vol. 8290, 9 February 2012, pages 1-10 *
SHI Yanxin (史延新): "A medical image segmentation algorithm combining edge detection and region-based methods", Journal of Xi'an Polytechnic University (西安工程大学学报), vol. 24, no. 3, 25 June 2010, pages 320-329 *


Also Published As

Publication number Publication date
CN102831582B (en) 2015-08-12


Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into force of request for substantive examination
C14 / GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee
Granted publication date: 20150812
Termination date: 20170727