CN104537627B - Post-processing method for depth images - Google Patents

Post-processing method for depth images

Info

Publication number
CN104537627B
CN104537627B (application CN201510009690.1A)
Authority
CN
China
Prior art keywords
depth image
point
points
depth
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510009690.1A
Other languages
Chinese (zh)
Other versions
CN104537627A (en)
Inventor
林春雨 (Lin Chunyu)
袁艺天 (Yuan Yitian)
赵耀 (Zhao Yao)
刘美琴 (Liu Meiqin)
白慧慧 (Bai Huihui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN201510009690.1A
Publication of CN104537627A
Application granted
Publication of CN104537627B
Expired - Fee Related
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract

The present invention relates to the field of digital image processing and discloses a post-processing method for depth images. The depth image is first up-sampled with a classical interpolation algorithm of relatively low time complexity, and the up-sampled image is then post-processed in two stages: edge-line correction followed by repair of the boundary region. With the proposed technical scheme, the up-sampled depth image retains a good subjective visual quality while, through edge correction, its boundary is made consistent with the geometric boundary of the corresponding color image; the quality of synthesized viewpoints is effectively improved, and the boundary blurring introduced by up-sampling is eliminated.

Description

Post-processing method for depth images
Technical field
The present invention relates to the field of digital image processing, and more particularly to a post-processing method for depth images.
Background technology
Obtaining the distance of a three-dimensional scene from the camera is one of the most fundamental tasks in computer vision research. The distance of each point in the scene from the camera can be represented by a depth image, i.e., the pixel value of a point in the depth image represents the distance of the corresponding scene point from the camera. Because the depth image characterizes the third-dimensional information of objects, it has important applications in research areas such as three-dimensional reconstruction, pattern recognition, and human-computer interaction.
Current techniques by which machine vision systems obtain scene depth images can be divided into two major classes: passive light methods and active light methods. The basic principle of a passive light method is to observe the same scene from two (or more) viewpoints, obtain the images perceived at the different viewing angles, and compute the positional deviation between image pixels (i.e., the disparity) by triangulation to recover the three-dimensional information of the scene and generate a depth image. However, passive light methods require strict constraints and accurate calibration, their time complexity is high, and they are seldom used in practice. An active light method means that the vision system first emits energy toward the scene, then receives the energy reflected back by the scene, and obtains the depth information of the scene by computation. The Kinect camera, designed according to this approach, projects infrared light into the measurement space so that infrared speckles are formed on object surfaces, and obtains the depth of an object from the calibrated relation between the speckles and the camera distance. A depth image can be generated quickly and at relatively low cost in this way, but the obtained depth image has a lower resolution than the color image captured at the same time, which hinders subsequent work and limits its practical application. The depth image therefore needs further processing to raise its resolution so that it has the same size as the corresponding color image, i.e., depth image up-sampling.
In the prior art, several classical interpolation algorithms are available for depth image up-sampling, such as nearest-neighbor interpolation (nearest), bilinear interpolation (bilinear), and bicubic interpolation (bicubic). Nearest-neighbor interpolation selects the gray value of the input pixel closest to the position being mapped to as the interpolation result; although its computational cost is small, it produces obvious mosaic and jagged artifacts. Bilinear interpolation improves on nearest-neighbor interpolation: it first performs first-order linear interpolation in the horizontal direction, then again in the vertical direction, and combines the two in a third interpolation to obtain the final result. The jagged artifacts of the up-sampled image obtained by this method are hard to perceive, but the edges of the image become blurred. Bicubic interpolation is a further improvement over bilinear interpolation; it considers not only the influence of the gray values of the direct neighbors on the point to be sampled but also the rate of change of the pixel values between neighbors, so the computed pixel value is more accurate. Although this algorithm overcomes the shortcomings of the former two methods and also improves the edge blurring, it causes a loss of detail of the original image.
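By way of illustration only (this background code is not part of the claimed method), the three classical interpolation algorithms can be applied to a low-resolution depth image with OpenCV; the image sizes below are placeholders:

```python
import cv2
import numpy as np

def upsample_depth(d, target_size, method="bicubic"):
    """Up-sample a low-resolution depth image d to target_size = (width, height)
    with one of the classical interpolation algorithms discussed above."""
    flags = {
        "nearest": cv2.INTER_NEAREST,   # fast, but mosaic and jagged edges
        "bilinear": cv2.INTER_LINEAR,   # smoother, but blurs object edges
        "bicubic": cv2.INTER_CUBIC,     # sharper, may still lose fine detail
    }
    return cv2.resize(d, target_size, interpolation=flags[method])

# Example: bring a 320x240 depth map up to a 1280x960 color resolution.
d = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # placeholder depth map
D_up = upsample_depth(d, (1280, 960), method="bicubic")
```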
It can be seen that a post-processing method for depth images is urgently needed which, while saving time and cost, can both eliminate edge blurring and avoid losing the detail of the original image.
Summary of the invention
It is an object of the invention to provide a post-processing method for depth images which not only saves processing time but also, through edge correction, maintains a good subjective visual quality of the up-sampled depth image while making the depth image boundary consistent with the boundary of the corresponding color image, thereby effectively improving the quality of synthesized viewpoints and eliminating the boundary blurring caused by up-sampling.
The present invention provides a post-processing method for a depth image, comprising:
up-sampling an original depth image d acquired by an image capturing system, and obtaining the depth value information of the up-sampled depth image D' and the gradient value information of the color image C;
in response to the depth value information of the up-sampled depth image D', obtaining the initial boundary E' of the up-sampled depth image;
in response to the gradient value information of the color image C, correcting the initial boundary E' of the up-sampled depth image so that the boundary of the depth image is consistent with the boundary of the corresponding color image, and obtaining a depth image boundary map;
repairing the boundary of the up-sampled depth image using the obtained depth image boundary map as a reference.
Further, correcting the initial boundary of the up-sampled depth image comprises:
comparing the depth value of the corresponding point, on the up-sampled depth image D', of every point P on the initial boundary with the gradient value of the color image C; if the gradient value is large and the depth value is small, the boundary point P is a geometric boundary point; otherwise, performing crosscheck correction and searching for the correct boundary point in a region near the boundary point to be corrected.
Further, the crosscheck correction comprises:
determining the gradient difference values between the pixels on the left and right sides of the boundary point P and between the pixels above and below it, referred to as the horizontal difference and the vertical difference respectively; if the horizontal difference is greater than or equal to the vertical difference, performing position correction on P in the horizontal direction; otherwise, performing position correction in the vertical direction;
wherein the position correction in the horizontal direction is:
finding the corresponding point P_C of point P in the color image C; taking, with P_C as the center, r consecutive pixels on each of the left and right sides as points to be examined, all of which form the region R to be examined; if M is a point to be examined and its horizontal gradient value is greater than the horizontal gradient value of P_C, finding the corresponding point M_d of M in D'; if the horizontal gradient value of M_d is greater than a threshold thresh, M is considered a correct geometric boundary point, and the position of P in E' is corrected to the corresponding position of M;
and the position correction in the vertical direction is:
finding the corresponding point P_C of point P in the color image C; taking, with P_C as the center, r consecutive pixels above and below as points to be examined, all of which form the region R to be examined; if M is a point to be examined and its vertical gradient value is greater than the vertical gradient value of P_C, finding the corresponding point M_d of M in D'; if the vertical gradient value of M_d is greater than a threshold thresh, M is considered a correct geometric boundary point, and the position of P in E' is corrected to the corresponding position of M.
Further, in the position correction in the horizontal direction or in the vertical direction, if several points M satisfy the conditions, the point with the largest gradient in the region R to be examined on the color image is taken as the corrected point.
Further, responding to the depth value information of the depth image means determining the initial boundary of the up-sampled depth image from the change of the depth values.
Further, up-sampling the depth image comprises: up-sampling the depth image with a classical interpolation algorithm of relatively low time complexity.
Further, the classical interpolation algorithm includes a nearest-neighbor interpolation algorithm, a bilinear interpolation algorithm, or a bicubic interpolation algorithm.
Therefore, with the technical scheme of the present invention, when determining a boundary point, the depth value of its corresponding point on the up-sampled depth image D' is compared against the gradient value of the color image C, which excludes interference from texture-region boundaries, and crosscheck correction is performed by searching for the correct boundary point in the region near the boundary point to be corrected, so that the position of the boundary point is more accurate; the edge blurring of the up-sampled depth image is thereby effectively eliminated while its edge details are preserved.
Brief description of the drawings
The accompanying drawings described herein are provided to give a further understanding of the present invention and constitute a part of the application; they do not constitute an undue limitation of the present invention. In the drawings:
Fig. 1 is a schematic diagram of the main flow of the depth image post-processing method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the boundary-line correction process of the depth image post-processing method provided by Embodiment 2 of the present invention.
Detailed description of the embodiments
The present invention is described in detail below in conjunction with the accompanying drawings and specific embodiments; the illustrative examples and explanations herein serve to explain the present invention and are not a limitation of the invention.
Embodiment 1:
Fig. 1 is a schematic diagram of the main flow of the depth image post-processing method provided by this embodiment.
As shown in Fig. 1, the depth image post-processing method provided by this embodiment mainly includes the following steps.
Step 1: the original depth image d is obtained by a machine vision system.
Step 2: the original depth image d is up-sampled with a simple interpolation algorithm, and the depth value information of the up-sampled depth image D' and the gradient value information of the color image C are obtained. The simple interpolation algorithm may be a classical interpolation algorithm of relatively low time complexity, for example a nearest-neighbor, bilinear, or bicubic interpolation algorithm.
Step 3: according to the depth value information of the up-sampled depth image D', edge detection is performed on the points in D'; if the depth value of a point changes abruptly, the point is a boundary point, and the initial boundary E' of the up-sampled depth image is thus obtained. The basis for this decision is that the depth information of the same object is generally consistent within a depth image, while abrupt changes of the depth value occur only at the junctions of different objects, so a depth image has sharp boundaries and is locally smooth.
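For illustration only, Step 3 can be sketched as a simple thresholding of local depth differences; the jump threshold below is an assumed value, since the patent does not prescribe a specific detector:

```python
import numpy as np

def detect_initial_boundary(D_up, jump_thresh=10):
    """Mark pixels of the up-sampled depth image D' whose depth value changes
    abruptly relative to a neighbour, giving the initial boundary map E'.
    jump_thresh is an assumption; the patent leaves the criterion open."""
    D = D_up.astype(np.int32)
    E = np.zeros(D.shape, dtype=bool)
    # A depth image is piecewise smooth, so large jumps only occur where
    # different objects (or foreground and background) meet.
    E[:, 1:] |= np.abs(np.diff(D, axis=1)) > jump_thresh   # horizontal jumps
    E[1:, :] |= np.abs(np.diff(D, axis=0)) > jump_thresh   # vertical jumps
    return E
```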
Step 4: with reference to the gradient value information of the color image C, the initial boundary E' of the up-sampled depth image is corrected so that the boundary of the depth image is consistent with the boundary of the corresponding color image, and a depth image boundary map is obtained.
Step 5: using the obtained depth image boundary map as a reference, the boundary of the up-sampled depth image is repaired and the blurred boundary region is filled. A structural sketch of these five steps is given below.
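For illustration only, the five steps above can be arranged as one routine; the following is a structural sketch in which the operations of Steps 3–5 are passed in as functions rather than fixed implementations, and the use of Sobel operators for the color gradients is an assumption of this sketch:

```python
import cv2

def postprocess_depth(d, color_gray, detect_boundary, correct_boundary, repair_region,
                      interpolation=cv2.INTER_CUBIC):
    """Structural sketch of Steps 1-5 above. The three callables stand for the
    boundary detection, boundary correction, and boundary repair operations."""
    h, w = color_gray.shape[:2]
    # Step 2: classical interpolation up-sampling to the color resolution,
    # plus the color gradients that guide the later correction.
    D_up = cv2.resize(d, (w, h), interpolation=interpolation)
    grad_x = cv2.Sobel(color_gray, cv2.CV_32F, 1, 0)
    grad_y = cv2.Sobel(color_gray, cv2.CV_32F, 0, 1)
    # Step 3: initial boundary E' from depth discontinuities in D'.
    E_init = detect_boundary(D_up)
    # Step 4: correct E' against the color gradients to obtain the boundary map.
    E = correct_boundary(E_init, D_up, grad_x, grad_y)
    # Step 5: repair/fill the blurred boundary region of D' using E as a reference.
    return repair_region(D_up, E)
```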
Embodiment 2:
Fig. 2 is a schematic diagram of the boundary-line correction process of the depth image post-processing method provided by an embodiment of the present invention.
This embodiment is a preferred scheme of Embodiment 1. Owing to the influence of the spatial domain and the pixel value range, D' exhibits blurring in its boundary regions, so when edges are detected the positions of the boundary points admit several possibilities, which introduces many uncertain factors into the detection; moreover, the extracted edges share very few points with the geometric edges of the matching color image, which shows that the extracted boundary-point positions are quite inaccurate. The obtained boundary map E' therefore requires further correction so that its edges coincide with the geometric edges of the matching color image, and the gradient value information of the color image is used to guide this correction process.
Image edge detection mainly uses the gradient information at a point to decide whether that point is a boundary point; in general, the larger the gradient value, the more likely the point is a boundary point. Based on this principle, the gradient information of the color image is used to guide the correction process. However, a point with a large gradient magnitude is not necessarily an edge point. For example, when edge detection is applied to a color image, the extracted boundary may be of two kinds: the geometric boundary of an object, or the boundary of a textured region of the image. In the depth image, even if the color image of a certain region is a texture image, the region is smooth in the depth image as long as it lies at the same depth. The depth values at the points of the same object usually differ little; abrupt changes of the depth value occur only at the junction of objects, or between a foreground object and the background, so detecting the boundary of the depth image usually yields the geometric boundaries of objects. Only the geometric boundaries extracted from the color image are needed to guide the correction of the boundary points of the up-sampled depth image, so during correction the interference of texture-region boundaries on this process must be excluded. Although the positions of the boundary points in E' are not very accurate, the distance between each boundary point to be corrected and the correct boundary point is not large, so during correction the correct boundary point only needs to be searched for, and recorded, in the region near the boundary point to be corrected.
The correction process is described in detail below. Each boundary point in the boundary image E' is processed as follows. Let P be a boundary point in E'. P is corrected in only one direction, horizontal or vertical, and the correction direction should be consistent with the direction of the abrupt depth change in the boundary region of d corresponding to P. The direction is decided as follows: the corresponding point p of P on the original depth map d is found by inverse mapping, and the grey-level differences between the pixels on the left and right sides of p and between the pixels above and below p are computed, referred to as the horizontal difference and the vertical difference respectively; if the horizontal difference is greater than or equal to the vertical difference, P needs position correction in the horizontal direction; otherwise, position correction in the vertical direction is performed.
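A minimal sketch of this direction test follows; it assumes the coordinates of p in the original depth map d are already known (for example, P's coordinates divided by the up-sampling factor):

```python
import numpy as np

def correction_direction(d, p_row, p_col):
    """Decide whether boundary point P is corrected horizontally or vertically,
    from the grey-level differences around its corresponding point p in d."""
    d = np.asarray(d, dtype=np.int32)
    h, w = d.shape
    # Horizontal difference: pixels to the left and right of p.
    horizontal = abs(d[p_row, max(p_col - 1, 0)] - d[p_row, min(p_col + 1, w - 1)])
    # Vertical difference: pixels above and below p.
    vertical = abs(d[max(p_row - 1, 0), p_col] - d[min(p_row + 1, h - 1), p_col])
    return "horizontal" if horizontal >= vertical else "vertical"
```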
If position correction in the horizontal direction is required, the corresponding point P_C of P in C is found; with P_C as the center, r consecutive pixels on each of the left and right sides are taken as points to be examined, and all the points to be examined form the region R to be examined. If M is a point to be examined and its horizontal gradient value is greater than that of P_C, M may be a correct boundary point. However, even within a region of the same depth, the horizontal gradient value in the color image can be large because of the interference of color-texture information, whereas the depth map is free of texture interference, so the depth information of D' must be used to cross-validate whether M is a correct boundary point: the corresponding point M_d of M in D' is found, and if the horizontal gradient value of M_d is greater than a threshold thresh, M is considered a correct geometric boundary point and the position of P in E' is corrected to the corresponding position of M. If several points M satisfy the conditions, the point with the largest horizontal gradient in the examined region of the color image is taken as the corrected point. Position correction in the vertical direction is similar to that in the horizontal direction.
Let the position of P in E' be (x_0, y_0) and the corrected position be (x_g, y_g).
When grad_x(p) ≥ grad_y(p):
(x_g, y_g) = (x_M, y_M), where M ∈ R, grad_x(M) > grad_x(P_C), and grad_x(M_d) > thresh.
When grad_x(p) < grad_y(p):
(x_g, y_g) = (x_M, y_M), where M ∈ R, grad_y(M) > grad_y(P_C), and grad_y(M_d) > thresh.
The correction is repeated in this manner until the boundary image no longer changes between iterations; the boundary image finally obtained is E.
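Purely as an illustrative sketch of the horizontal branch of this correction (the vertical branch is symmetric): r and thresh are tunable parameters whose values the patent leaves open, and the use of Sobel operators for the gradients is an assumption of this sketch, not a requirement of the method:

```python
import cv2
import numpy as np

def correct_point_horizontal(P, C_gray, D_up, r=5, thresh=20):
    """Horizontal crosscheck correction of one boundary point P = (row, col).
    Returns the corrected position, or P unchanged if no candidate qualifies.
    C_gray: grey-scale color image C; D_up: up-sampled depth image D'."""
    gx_C = np.abs(cv2.Sobel(C_gray, cv2.CV_32F, 1, 0))  # horizontal gradient of C
    gx_D = np.abs(cv2.Sobel(D_up, cv2.CV_32F, 1, 0))    # horizontal gradient of D'
    row, col = P
    width = C_gray.shape[1]
    best, best_grad = P, -1.0
    # Region R: r pixels to the left and right of P_C (P_C shares P's coordinates,
    # since D' and C have the same resolution after up-sampling).
    for m_col in range(max(col - r, 0), min(col + r, width - 1) + 1):
        # Candidate M must have a larger horizontal color gradient than P_C, and
        # its corresponding point M_d in D' must also be a depth edge (cross-check
        # against the threshold thresh).
        if gx_C[row, m_col] > gx_C[row, col] and gx_D[row, m_col] > thresh:
            # Tie-break: among all qualifying M, keep the one whose color
            # gradient is largest, as stated above.
            if gx_C[row, m_col] > best_grad:
                best, best_grad = (row, m_col), float(gx_C[row, m_col])
    return best
```

Applying this correction to every boundary point and repeating until the boundary map stops changing mirrors the stopping condition stated above.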
As can be seen from the above embodiments, with the technical scheme of the present invention, when determining a boundary point, the depth value of its corresponding point on the up-sampled depth image D' is compared against the gradient value of the color image C, which excludes interference from texture-region boundaries, and crosscheck correction is performed by searching for the correct boundary point in the region near the boundary point to be corrected, so that the position of the boundary point is more accurate; the edge blurring of the up-sampled depth image is thereby effectively eliminated while its edge details are preserved.
The embodiments described above do not constitute a limitation of the protection scope of the technical scheme. Any modification, equivalent substitution, or improvement made within the spirit and principle of the above embodiments shall be included within the protection scope of the technical scheme.

Claims (4)

1. A post-processing method for a depth image, characterized by comprising:
up-sampling an original depth image d acquired by an image capturing system, and obtaining the depth value information of the up-sampled depth image D' and the gradient value information of the color image C;
in response to the depth value information of the up-sampled depth image D', obtaining the initial boundary E' of the up-sampled depth image D';
in response to the gradient value information of the color image C, correcting the initial boundary E' of the up-sampled depth image D' so that the boundary of the up-sampled depth image D' is consistent with the boundary of the corresponding color image, and obtaining a depth image boundary map;
repairing the boundary of the up-sampled depth image using the obtained depth image boundary map as a reference;
wherein correcting the initial boundary of the up-sampled depth image comprises:
comparing the depth value of the corresponding point, on the up-sampled depth image D', of a boundary point P on the initial boundary E' with the gradient value of the color image C; if the gradient value of the color image C is greater than the depth value of the corresponding point, the boundary point P is a geometric boundary point; otherwise, performing crosscheck correction and searching for the correct boundary point in a region near the boundary point P;
wherein the crosscheck correction comprises:
determining the gradient difference values between the pixels on the left and right sides of the boundary point P and between the pixels above and below it, referred to as the horizontal difference and the vertical difference respectively; if the horizontal difference is greater than or equal to the vertical difference, performing position correction on P in the horizontal direction; otherwise, performing position correction in the vertical direction;
wherein the position correction in the horizontal direction is:
finding the corresponding point P_C of point P in the color image C; taking, with P_C as the center, r consecutive pixels on each of the left and right sides as points to be examined, all of which form the region R to be examined; if M is a point to be examined and its horizontal gradient value is greater than the horizontal gradient value of P_C, finding the corresponding point M_d of M in D'; if the horizontal gradient value of M_d is greater than a threshold thresh, M is considered a correct geometric boundary point, and the position of P in E' is corrected to the corresponding position of M;
and the position correction in the vertical direction is:
finding the corresponding point P_C of point P in the color image C; taking, with P_C as the center, r consecutive pixels above and below as points to be examined, all of which form the region R to be examined; if M is a point to be examined and its vertical gradient value is greater than the vertical gradient value of P_C, finding the corresponding point M_d of M in D'; if the vertical gradient value of M_d is greater than a threshold thresh, M is considered a correct geometric boundary point, and the position of P in E' is corrected to the corresponding position of M.
2. The post-processing method for a depth image according to claim 1, characterized in that
in the position correction in the horizontal direction or in the vertical direction, if several points M satisfy the conditions, the point with the largest gradient in the region R to be examined of the color image is taken as the corrected point.
3. The post-processing method for a depth image according to claim 1, characterized in that obtaining the initial boundary of the up-sampled depth image in response to the depth value information of the up-sampled depth image D' comprises: determining the initial boundary of the up-sampled depth image D' from the change of the depth values of the up-sampled depth image D'.
4. The post-processing method for a depth image according to claim 1, characterized in that
up-sampling the depth image comprises: up-sampling the depth image with an interpolation algorithm of relatively low time complexity, the interpolation algorithm including a nearest-neighbor interpolation algorithm, a bilinear interpolation algorithm, or a bicubic interpolation algorithm.
CN201510009690.1A 2015-01-08 2015-01-08 Post-processing method for depth images Expired - Fee Related CN104537627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510009690.1A CN104537627B (en) 2015-01-08 2015-01-08 Post-processing method for depth images


Publications (2)

Publication Number Publication Date
CN104537627A CN104537627A (en) 2015-04-22
CN104537627B true CN104537627B (en) 2017-11-07

Family

ID=52853146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510009690.1A Expired - Fee Related CN104537627B (en) 2015-01-08 2015-01-08 Post-processing method for depth images

Country Status (1)

Country Link
CN (1) CN104537627B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105517677B (en) * 2015-05-06 2018-10-12 北京大学深圳研究生院 The post-processing approach and device of depth map/disparity map
CN105163129B (en) * 2015-09-22 2018-01-23 杭州电子科技大学 Gradient map guiding based on depth resampling 3D HEVC decoding methods
CN107292826B (en) * 2016-03-31 2021-01-22 富士通株式会社 Image processing apparatus, image processing method, and image processing device
CN109242901B (en) 2017-07-11 2021-10-22 深圳市道通智能航空技术股份有限公司 Image calibration method and device applied to three-dimensional camera
CN108550167B (en) * 2018-04-18 2022-05-24 北京航空航天大学青岛研究院 Depth image generation method and device and electronic equipment


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6563513B1 (en) * 2000-05-11 2003-05-13 Eastman Kodak Company Image processing method and apparatus for generating low resolution, low bit depth images
CN103455984A (en) * 2013-09-02 2013-12-18 清华大学深圳研究生院 Method and device for acquiring Kinect depth image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Depth map post-processing for 3D-TV; Om Prakash Gangwal et al.; Consumer Electronics, 2009. ICCE '09. Digest of Technical Papers, International Conference on; 2009-01-14; pp. 1-2 *
Depth image enhancement algorithm based on joint bilateral filtering (基于联合双边滤波的深度图像增强算法); Liu Jinrong et al.; Computer Engineering (计算机工程); 2014-03-31; Vol. 40, No. 3; pp. 249-252, 257 *

Also Published As

Publication number Publication date
CN104537627A (en) 2015-04-22

Similar Documents

Publication Publication Date Title
US9214013B2 (en) Systems and methods for correcting user identified artifacts in light field images
Yang et al. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model
CN104537627B (en) Post-processing method for depth images
JP6142611B2 (en) Method for stereo matching and system for stereo matching
US9684964B2 (en) Image processing apparatus and image processing method for determining disparity
US9070042B2 (en) Image processing apparatus, image processing method, and program thereof
EP3311361B1 (en) Method and apparatus for determining a depth map for an image
US9332247B2 (en) Image processing device, non-transitory computer readable recording medium, and image processing method
CN102892021B (en) New method for synthesizing virtual viewpoint image
EP2887310A1 (en) Method and apparatus for processing light-field image
CN104574331A (en) Data processing method, device, computer storage medium and user terminal
JP2017142613A (en) Information processing device, information processing system, information processing method and information processing program
WO2017096814A1 (en) Image processing method and apparatus
CN106408596A (en) Edge-based local stereo matching method
KR20180040846A (en) Setting method of edge blur for edge modeling
DE112014006493T5 (en) Determine a scale of three-dimensional information
CN109218706B (en) Method for generating stereoscopic vision image from single image
KR20170025214A (en) Method for Multi-view Depth Map Generation
CN109493293A (en) A kind of image processing method and device, display equipment
JP6139141B2 (en) Appearance image generation method and appearance image generation device
KR101733028B1 (en) Method For Estimating Edge Displacement Againt Brightness
KR101841750B1 (en) Apparatus and Method for correcting 3D contents by using matching information among images
JP2020201823A (en) Image processing device, image processing method, and program
JP2006023133A (en) Instrument and method for measuring three-dimensional shape
JP6351364B2 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171107

Termination date: 20220108

CF01 Termination of patent right due to non-payment of annual fee