CN111489383B - Depth image up-sampling method and system based on depth marginal point and color image - Google Patents

Depth image up-sampling method and system based on depth marginal point and color image

Info

Publication number: CN111489383B (application CN202010280991.9A)
Authority: CN (China)
Prior art keywords: depth, pixel, edge, resolution, low
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Other versions: CN111489383A (in Chinese)
Inventors: 王春兴, 祖兰晶, 万文博, 任艳楠
Original and current assignee: Shandong Normal University (the listed assignee may be inaccurate)
Events: application CN202010280991.9A filed by Shandong Normal University; publication of CN111489383A; application granted; publication of CN111489383B; expired (fee related)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/50 — Depth or shape recovery
    • G06T7/529 — Depth or shape recovery from texture
    • G06T7/90 — Determination of colour characteristics
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10016 — Video; Image sequence
    • G06T2207/10021 — Stereoscopic video; Stereoscopic image sequence

Abstract

The present disclosure provides a depth image up-sampling method and system based on depth edge points and color image guidance. A low-resolution depth map and a high-resolution color map are obtained; edge detection is performed on the low-resolution depth map, and its pixel area is divided into a flat area and an edge area according to the low-resolution depth edge map; unreliable pixel points of the low-resolution depth map are marked and corrected; the edge-enhanced low-resolution depth map is then initialized, the structural consistency of the initialized depth map and the high-resolution color map is judged, the pixel points of the initialized depth map are classified, and the true edge pixel points of the depth-reliable pixel region are found. The true edge pixel points of the initialized depth map are mapped into the edge-enhanced low-resolution depth map, the depth values of the depth-reliable pixel area of the initialized depth map are corrected based on the influence factors, and a high-resolution depth map is obtained.

Description

Depth image up-sampling method and system based on depth edge point and color image
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a depth image upsampling method and system based on depth edge points and color images.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
As 3DTV and 3D movies enter daily life ever more widely, high-quality visual perception has become something consumers expect, and the demands placed on image and video quality keep rising. A 3DTV system needs 2D color video and 2D depth data of the same scene as simultaneous inputs. The depth data describes the position of objects in the scene, and stereoscopic display technology uses it to present 3D stereoscopic vision to the viewer. Depth data is therefore an important basis of a 3DTV system, and the acquisition of high-quality depth information is of great interest.
Depth information can be obtained directly or indirectly. The direct approach suffers from severe limitations of the capture hardware: noise interference cannot be suppressed effectively, the devices are expensive, and so on, so it cannot meet consumers' demand for depth information. Indirect acquisition of depth information, i.e. depth up-sampling algorithms, has therefore become an increasingly active research direction.
In the course of implementing the present disclosure, the inventors found that the following technical problems exist in the prior art:
in recent years, depth upsampling algorithms have received wide attention from scholars. Kopf et al. proposed the Joint Bilateral Upsampling (JBU) algorithm based on bilateral filtering, which ignores the mismatch between the two images of a pair. Yang et al. refined the output high-resolution depth map with joint bilateral filtering under a depth hypothesis. Liu et al. proposed replacing Euclidean distances with geodesic distances in the filtering kernel to obtain accurate depth edges, but color or structural discontinuities in the color image can still produce erroneous depth output, so suppressing texture-copy artifacts remains a challenging problem. Gu et al. proposed a weighted analytic representation model for guided depth image enhancement, updating the depth image with dynamically adjusted guidance. Yang et al. proposed an adaptive auto-regression (AR) model guided by the high-resolution color image. Diebel et al. solved the multi-label optimization problem with a Markov Random Field (MRF), taking the consistency between depth resolutions as the data term while the smoothness term produces similar depth values for neighboring pixels of similar color. Park extended the smoothness cost with non-local means regularization, semi-local neighborhood information, and an edge-weighting scheme that enforces refinement of color details. Beyond MRF, other optimization-based strategies exist; for example, Ferstl proposed depth upsampling with a Total Generalized Variation (TGV) model, treated as a convex optimization problem with higher-order regularization. In super-resolution reconstruction of images, Yang et al. first introduced the principle of sparse representation.
Ren et al. proposed the Edge-defined with Gradient-associated Depth Upsampling algorithm (EGDU), which uses only the vertical direction in the high- and low-resolution spaces to judge the structural consistency of the depth image pair and to redistribute depth. The method does not exploit the correlation between pixels effectively, so the computed depth data shows large errors both in the structural-consistency judgment between the depth and color images and in the depth redistribution.
Disclosure of Invention
To remedy the deficiencies of the prior art, the present disclosure provides a depth image up-sampling method and system based on depth edge points and color images, obtaining a high-resolution depth image with enhanced depth-discontinuous regions and a clear edge structure.
In a first aspect, the present disclosure provides a depth image upsampling method based on depth edge points and color images;
the depth image up-sampling method based on depth edge points and color image guidance comprises the following steps:
acquiring a low-resolution depth map and a high-resolution color map in the same scene; carrying out edge detection on the low-resolution depth map to obtain a low-resolution depth edge map; dividing a pixel area of the low-resolution depth map into a flat area and an edge area according to the low-resolution depth edge map; marking unreliable pixel points of the low-resolution depth map based on the flat region and the edge region; correcting unreliable pixel points of the low-resolution depth map to obtain an edge-enhanced low-resolution depth map;
initializing the edge-enhanced low-resolution depth map to obtain an initialized depth map; performing structural consistency judgment on the initialized depth map and the high-resolution color map, and completing classification of pixel points in the initialized depth map to obtain a depth-reliable pixel area and a depth-unreliable pixel area;
searching real edge pixel points of a depth-reliable pixel area from the initialized depth map according to a high-resolution gradient matrix corresponding to the high-resolution color image; mapping real edge pixel points in the initialized depth map into an edge-enhanced low-resolution depth map, and setting influence factors generated due to space position constraint on pixel points at different positions in a pixel block;
completing the depth value correction of the depth reliable pixel area of the initialized depth map based on the influence factor; completing the correction of the depth value of the depth unreliable pixel area of the initialized depth map; and obtaining a corrected high-resolution depth map.
In a second aspect, the present disclosure also provides a depth image upsampling system based on depth edge points and color images;
a depth image upsampling system based on depth edge points and color image guidance, comprising:
an acquisition module configured to: acquiring a low-resolution depth map and a high-resolution color map in the same scene; carrying out edge detection on the low-resolution depth map to obtain a low-resolution depth edge map; dividing a pixel area of the low-resolution depth map into a flat area and an edge area according to the low-resolution depth edge map; marking unreliable pixel points of the low-resolution depth map based on the flat region and the edge region; correcting unreliable pixel points of the low-resolution depth map to obtain an edge-enhanced low-resolution depth map;
a determination module configured to: initializing the edge-enhanced low-resolution depth map to obtain an initialized depth map; carrying out structural consistency judgment on the initialized depth map and the high-resolution color map, finishing the classification of pixel points in the initialized depth map, and obtaining a depth reliable pixel area and a depth unreliable pixel area;
a setup module configured to: according to a high-resolution gradient matrix corresponding to the high-resolution color image, searching real edge pixel points of a depth-reliable pixel region from the initialized depth map; mapping real edge pixel points in the initialized depth map into an edge-enhanced low-resolution depth map, and setting influence factors generated by space position constraint on pixel points at different positions in a pixel block;
a correction module configured to: completing the depth value correction of the depth reliable pixel area of the initialized depth map based on the influence factor; completing the correction of the depth value of the depth unreliable pixel area of the initialized depth map; and obtaining a corrected high-resolution depth map.
In a third aspect, the present disclosure also provides an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the method of the first aspect.
Compared with the prior art, the beneficial effect of this disclosure is:
1. Because the low-resolution depth image may be of low quality, it can contain pixels with wrong depths as well as holes; using it directly for depth upsampling would produce even more wrong depths in the high-resolution depth image. The unreliable pixels of the low-resolution depth image are therefore marked and then corrected, ensuring the accuracy of the depths in the low-resolution depth image.
2. The scenes seen by the two human eyes divide into far and near, and the views of the left and right eyes differ considerably. The sense of distance in the front-back direction and the left-right disparity show up in the 8-neighborhood of any pixel of the depth map, where a large variation between depth values may reflect a difference in object position. The present disclosure therefore selects a 3 × 3 pixel block as the unit and, combining the edge-point distributions of the high-resolution depth map, the low-resolution depth map and the gradient map of the color image, judges the structural consistency of the edge regions in the depth and color spaces. The pixels of the edge region are divided into two classes, effective and unreliable, and the region correspondingly into a depth-effective region and a depth-unreliable region, which effectively avoids artifacts in the output depth map caused by inconsistency between the depth map and the color map.
3. Considering that pixels are correlated and that pixels at different positions exert different influence on others, the central pixel can be constrained by the neighborhood pixels in the 8 directions around it; an influence factor of spatial-position constraint is therefore set for the interaction between pixels. Completing the depth correction of the depth-effective region with the influence factors generated by the 3 × 3 spatial-position constraint effectively improves the accuracy of the depth.
4. Because pixels are correlated, using only the pixel depths in the vertical direction leaves a single effective depth for depth correction and ignores the different regional characteristics of pixel positions. Enlarging the range of usable effective pixels and completing the depth-value correction with the mutual influence of the 8 neighborhood pixels around a given pixel effectively improves the accuracy of the depth.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate exemplary embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a general flow chart of an implementation of the present disclosure;
FIG. 2 is a graph of the relationship of pixel points between different images for determining the structural consistency of a depth map and a color map in the present disclosure;
FIG. 3(a) is a high resolution color map of an Art image;
FIG. 3(b) is a high resolution depth map of an Art image;
FIG. 3(c) is a high resolution color map of a Reindeer image;
FIG. 3(d) is a high resolution depth map of a Reindeer image;
FIG. 4(a) is a high-resolution depth image obtained by 4-fold upsampling with the Bicubic method;
FIG. 4(b) is a high-resolution depth image obtained by 4-fold upsampling with the JBU method;
FIG. 4(c) is a high-resolution depth image obtained by 4-fold upsampling with the TGV method;
FIG. 4(d) is a high-resolution depth image obtained by 4-fold upsampling with the EGDU method;
FIG. 4(e) is a high-resolution depth image obtained by 4-fold upsampling with the disclosed method;
FIG. 5(a) is a high-resolution depth image obtained by 4-fold upsampling with the Bicubic method;
FIG. 5(b) is a high-resolution depth image obtained by 4-fold upsampling with the JBU method;
FIG. 5(c) is a high-resolution depth image obtained by 4-fold upsampling with the TGV method;
FIG. 5(d) is a high-resolution depth image obtained by 4-fold upsampling with the EGDU method;
FIG. 5(e) is a high-resolution depth image obtained by 4-fold upsampling with the method of the present disclosure.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular is intended to include the plural unless the context clearly dictates otherwise, and it should be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of features, steps, operations, devices, components, and/or combinations thereof.
In the first embodiment, the present embodiment provides a depth image upsampling method based on depth edge points and color images;
as shown in fig. 1, the depth image upsampling method based on depth edge point and color image guide includes:
s100: acquiring a low-resolution depth map and a high-resolution color map in the same scene;
s101: carrying out edge detection on the low-resolution depth map to obtain a low-resolution depth edge map;
s102: dividing a pixel area of the low-resolution depth map into a flat area and an edge area according to the low-resolution depth edge map;
s103: marking unreliable pixel points of the low-resolution depth map based on the flat region and the edge region;
s104: correcting unreliable pixel points of the low-resolution depth map to obtain an edge-enhanced low-resolution depth map;
s105: initializing the edge-enhanced low-resolution depth map to obtain an initialized depth map;
s106: carrying out structural consistency judgment on the initialized depth map and the high-resolution color map, and finishing classification of pixel points in the initialized depth map to obtain a depth reliable pixel area and a depth unreliable pixel area;
s107: according to a high-resolution gradient matrix corresponding to the high-resolution color image, searching real edge pixel points of a depth-reliable pixel region from the initialized depth map;
s108: mapping real edge pixel points in the initialized depth map into an edge-enhanced low-resolution depth map, and setting influence factors generated by space position constraint on pixel points at different positions in a pixel block;
s109: completing the depth value correction of the depth reliable pixel area of the initialized depth map based on the influence factor; completing the correction of the depth value of the depth unreliable pixel area of the initialized depth map; and obtaining the corrected high-resolution depth map.
As one or more embodiments, in S100, the low-resolution depth map is obtained by down-sampling the true high-resolution depth map, of size 1110 × 1370, by a sampling factor of 2, 4, or 8.
The high-resolution color map is the color image corresponding to the true high-resolution depth map, of size 1110 × 1370.
As one or more embodiments, in S101, performing edge detection on the low-resolution depth map to obtain a low-resolution depth edge map; the method comprises the following specific steps:
Edge points of the low-resolution depth map are extracted with the Sobel operator to obtain the low-resolution depth edge map.
It will be appreciated that the Sobel extraction is applied to the low-resolution depth map D_L, yielding the low-resolution depth edge map.
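As an illustration of this Sobel step, a minimal NumPy sketch is given below; the gradient-magnitude threshold separating the edge area from the flat area is an assumed parameter, not specified by the disclosure.

```python
import numpy as np

def sobel_edges(depth, thresh=10.0):
    """Extract a binary edge map from a depth image with the Sobel operator.

    Pixels whose gradient magnitude exceeds `thresh` form the edge area;
    the remaining pixels form the flat area (`thresh` is an assumption).
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    d = np.pad(depth.astype(float), 1, mode="edge")
    h, w = depth.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):          # correlate with the two Sobel kernels
        for j in range(3):
            gx += kx[i, j] * d[i:i + h, j:j + w]
            gy += ky[i, j] * d[i:i + h, j:j + w]
    mag = np.hypot(gx, gy)      # gradient magnitude
    return mag > thresh         # True = edge area, False = flat area

# A depth step is detected along the discontinuity:
depth = np.zeros((8, 8))
depth[:, 4:] = 50.0
edges = sobel_edges(depth)
```

The returned boolean map plays the role of the low-resolution depth edge map: True pixels are the edge area, False pixels the flat area.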
As one or more embodiments, in S102, the low-resolution depth edge map is divided into a flat area and an edge area; the specific steps are as follows: Sobel edge points of the low-resolution depth map are extracted; the extracted edge points form the edge area, and the points not extracted form the flat area.
As one or more embodiments, in S103, based on the flat region and the edge region, the unreliable pixel points of the low-resolution depth map are marked; the method comprises the following specific steps:
firstly, marking the pixel point with the depth value of 0 as an unreliable pixel point;
second, a 3 × 3 image block is taken in the low-resolution depth map D_L; when the block lies in a flat area, the central pixel is marked as unreliable if the number of neighborhood pixels whose depth-value difference from it is not less than 3 exceeds t_1, with t_1 set to 3;
when the block lies in an edge area, the central pixel is likewise marked as unreliable if the number of neighborhood pixels whose depth-value difference from it is not less than 3 exceeds t_1, with t_1 set to 3;
when the block lies in a flat area and an edge area at the same time, the low-resolution depth edge map is used to identify, one by one, the pixels of the block that belong to the edge area. A pixel in the edge area is compared with its adjacent pixels in the edge area and marked as unreliable if the number of depth-value differences not less than 3 exceeds t_2, with t_2 set to 2; a pixel in the flat area is compared with its adjacent pixels and likewise marked as unreliable if the number of depth-value differences not less than 3 exceeds t_2, with t_2 set to 2.
In this manner, the marking of unreliable pixels is completed in sequence over the whole low-resolution depth map D_L.
It should be understood that interpolating the depth image D_L while depth values are missing or erroneous would generate many more erroneous pixels, blurring the edges of the interpolated image and producing an obvious jagging effect. The unreliable pixels are therefore marked first.
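The marking rule above can be sketched as follows. This is a simplified version: it covers holes and the single-region t_1 test only, omits the mixed flat/edge case with threshold t_2, and the function name is illustrative.

```python
import numpy as np

def mark_unreliable(depth, diff=3.0, t1=3):
    """Mark unreliable pixels of a low-resolution depth map.

    A pixel is unreliable when its depth is 0 (a hole), or when more than
    t1 of its 8 neighbours differ from it by at least `diff` (t1 = 3 as in
    the text; the mixed flat/edge case with threshold t2 is omitted here).
    """
    depth = depth.astype(float)
    h, w = depth.shape
    unreliable = depth == 0            # holes are unreliable by definition
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = depth[y - 1:y + 2, x - 1:x + 2]
            # neighbours whose depth differs from the centre by >= diff
            count = int(np.sum(np.abs(block - block[1, 1]) >= diff))
            if count > t1:
                unreliable[y, x] = True
    return unreliable

d = np.full((5, 5), 10.0)
d[2, 2] = 20.0                 # an isolated wrong depth in a flat region
u = mark_unreliable(d)
```

The isolated outlier at the centre is flagged, while its neighbours, which each see only one deviating value in their 3 × 3 block, stay reliable.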
As one or more embodiments, in S104, the unreliable pixel point of the low-resolution depth map is corrected to obtain an edge-enhanced low-resolution depth map; the method comprises the following specific steps:
for an unreliable pixel in a flat area or an edge area, the depth value is filled by bicubic interpolation over the reliable pixels of its 8-neighborhood, following the edge distribution of the low-resolution depth edge map;
for a pixel whose 8-neighborhood does not lie entirely in a flat area or entirely in an edge area, the depth value is filled with the mean of the depth values of the adjacent reliable pixels in the corresponding region, yielding the edge-enhanced low-resolution depth map.
It should be understood that the result is the edge-enhanced low-resolution depth map with complete depth information.
As one or more embodiments, in S105, initializing the edge-enhanced low-resolution depth map to obtain an initialized depth map; the method comprises the following specific steps:
and carrying out bicubic interpolation on the edge-enhanced low-resolution depth map to obtain an initialized depth map.
It should be appreciated that since the depth image upsampling method based on depth edge points and color image guidance corrects for pseudo depth matrices, the initialization of the low resolution depth map is achieved first.
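A minimal sketch of the initialization step, using SciPy's cubic-spline `zoom` as a stand-in for the bicubic interpolation named in the text (SciPy is an assumed dependency).

```python
import numpy as np
from scipy.ndimage import zoom  # cubic-spline resampling, order=3

def initialize_depth(low_depth, K):
    """Initialize the edge-enhanced low-resolution depth map by upsampling
    it with factor K; scipy's order-3 spline zoom stands in here for the
    bicubic interpolation named in the disclosure."""
    return zoom(low_depth.astype(float), K, order=3)

low = np.full((4, 6), 7.0)        # toy edge-enhanced low-resolution map
d0 = initialize_depth(low, 4)     # 16 x 24 initialized depth map D_0
```

For a constant input the spline reproduces the constant exactly, and the output size is the input size multiplied by K.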
As one or more embodiments, in S106, the structural consistency of the initialized depth map and the high-resolution color map is determined, and classification of the pixels in the initialized depth map is completed to obtain a depth-reliable pixel region and a depth-unreliable pixel region; the method comprises the following specific steps:
Firstly, a 3 × 3 pixel block A centered on a certain pixel point O is selected in the high-resolution depth space, and the region to which each pixel of block A belongs is determined, the regions comprising: an edge region and a flat region;
secondly, the pixel block B corresponding to block A is found in the high-resolution gradient space, and the regions of the 9 pixels of block B are likewise determined. The region characteristics of the pixels in the two blocks are checked for consistency: if the region characteristics agree, the pixel is judged depth-reliable; otherwise, the pixels whose region characteristics disagree between the two blocks are projected into the low-resolution depth map and judged again.
The criterion for judging again is as follows: suppose pixel block A is a 3 × 3 pixel block centered on the pixel (x_h, j); a pixel satisfying the 8 conditions of formula (1) is a depth-reliable pixel, and a pixel not satisfying them is a depth-unreliable pixel. Formula (1) appears only as an image in the original; in it, T = {T_i, 1 ≤ i ≤ 8} is a set of experimentally chosen thresholds, Q_i (1 ≤ i ≤ 8) denotes the 8 conditions to be satisfied, and the remaining symbols denote the depth of the center pixel (x_h, j) of block A mapped into the depth-corrected low-resolution depth map, the mapped pixel itself, and its 8 neighborhood pixels.
This structural-consistency judgment between the initialized depth map and the high-resolution color map completes the classification of the pixels of the initialized depth map D_0, and the region to which each pixel of D_0 belongs is correspondingly divided into a depth-reliable pixel region and a depth-unreliable pixel region.
It should be understood that, in the high-resolution space, a portion of the image is selected for illustration. As shown in FIG. 2, assume the captured pixel positions lie in rows x_1 to x_m and in columns j−1 to j+1; any such pixel point is expressed by the formula:
P = {(x, y) | x_1 ≤ x ≤ x_m, j−1 ≤ y ≤ j+1} (2)
Among these, the pixels (x_h, j) and (x_{m−1}, j+1) have the largest gradient values in column j and column j+1 respectively, and these pixels are projected into the low-resolution depth space.
it should be understood that the consistency determination is in units of 3 × 3 pixel blocks. Initializing a depth map to D0
As one or more embodiments, in S107, according to a high-resolution gradient matrix corresponding to the high-resolution color image, a true edge pixel point of a depth-reliable pixel region is found from the initialized depth map; the method comprises the following specific steps:
In the initialized depth map D_0, an edge region with consistent structure generally contains at least two effective depth values, so the high-resolution gradient matrix G corresponding to the high-resolution color image is used to determine the true edge pixel;
in the high-resolution gradient space, the pixel (x_h, y) is assumed to have the largest gradient value and to be a true edge point or a pixel in the neighborhood of a true edge point;
the set of effective depth edge pixels is:
Ω_G = {(x, y) | |G(x, y) − G(x_h, y)| < T_G, (x, y) ∈ P} (3)
where P denotes the set of pixels satisfying formula (2), G(x, y) denotes the gradient value of the color map's gradient matrix at coordinates (x, y), and G(x_h, y) the gradient at coordinates (x_h, y). T_G is the threshold on the absolute difference between the gradient of any pixel in the set P and G(x_h, y).
Calculating the difference between the pixel values of the pixel point (x, y) in the set and the pixel values of the pixel points in the neighborhood, and assuming that the abscissa of the pixel point with the largest difference between the pixel values of the pixel point (x, y) in the set is xeThe formula is as follows:
Figure BDA0002446566310000121
wherein, the abscissa of the real edge pixel point is defined as xfThe abscissa of the true edge pixel is xeAnd xhThe vertical coordinate of the real edge pixel point is y; the value ranges of x and y are set omega provided by formula (3)GIt is decided that I (x, y) represents the pixel value at coordinate (x, y) in the high resolution color map.
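Restricted to a single column, the candidate selection of formula (3) and the largest-difference search can be sketched as follows; since formula (4) survives only as an image, the neighbour-difference used here is an assumed reading, and the function name is illustrative.

```python
def true_edge_candidates(G_col, I_col, xh, TG):
    """Sketch of the edge-candidate selection, restricted to one column.

    G_col, I_col: gradient and colour values down the column; xh: index of
    the maximal gradient. Returns the candidate set Omega_G of formula (3)
    and the index x_e whose value differs most from the next pixel -- an
    assumed reading of formula (4), which is only an image in the original.
    """
    n = len(G_col)
    # formula (3): gradients within T_G of the maximal gradient at xh
    omega = [x for x in range(n) if abs(G_col[x] - G_col[xh]) < TG]
    # assumed formula (4): largest difference to the neighbouring pixel
    xe = max(omega, key=lambda x: abs(I_col[x] - I_col[min(x + 1, n - 1)]))
    return omega, xe

# Candidates cluster where the gradient is near its maximum at xh = 3;
# the colour step between rows 2 and 3 singles out x_e = 2:
omega, xe = true_edge_candidates([0, 1, 9, 10, 2], [5, 5, 5, 80, 80],
                                 xh=3, TG=3)
```

The true edge abscissa x_f would then be determined from x_e and x_h as the text describes.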
As one or more embodiments, in S108, mapping real edge pixel points in the initialized depth map to the edge-enhanced low-resolution depth map, and setting influence factors generated due to spatial position constraints on pixel points at different positions in a pixel block; the method comprises the following specific steps:
First, the true edge pixel (x_f, y) is mapped into the low-resolution depth space, and a 3 × 3 pixel block centered on the mapped pixel is selected;
secondly, setting influence factors generated by space position constraint on pixel points at different positions in a pixel block by utilizing the correlation among pixels;
wherein, the setting of the influence factor is as follows:
Assume the low-resolution depth image is of size l × n; at a magnification of K, the initialized depth map D_0 is of size L × N, i.e. L = K·l and N = K·n. The coordinates (x_L, y_L) of any pixel of the low-resolution depth image are related to the coordinates (x, y) of any pixel of the initialized depth map D_0 by the coordinate mapping formulas (5) and (6), which appear only as images in the original.
The depth values of the pixel (x_L, y_L) of the low-resolution depth image and of the 8 pixels nearest to (x_L, y_L) serve as the parameters for computing the depth value of the initialized depth map D_0 at position (x, y). A spatial-constraint function yields the influence factors α of the 9 pixels of a 3 × 3 pixel block, from which the depth value of D_0 at (x, y) is obtained. Since the depth value is calculated from the two-dimensional characteristics of the pixels, the spatial-position constraint function is evaluated along rows and columns; the constraint function in the x direction is given by formula (7), which appears only as an image in the original.
wherein, CiRepresenting the influence factor generated in the x direction, m1 is 3 or 4, m2 is 1 or 2, i is 1,2,3, r2Indicating the difference in longitudinal distance in the x-direction.
To obtain CjThe formula is as follows:
Figure BDA0002446566310000137
wherein Cj represents the influence factor generated in the y direction, j = 1, 2, 3, m1 takes the value 3 or 4, m2 takes the value 1 or 2, and r1 represents the longitudinal distance difference in the y direction.
The transverse distance r2 in the x direction and the longitudinal distance r1 in the y direction are respectively given by the formulas:
Figure BDA0002446566310000138
Figure BDA0002446566310000141
wherein x and y represent the horizontal and vertical coordinates of a pixel point in the initialized depth map, and xL and yL respectively represent the horizontal and vertical coordinates of the pixel point in the depth-corrected low-resolution depth map to which x and y map; l and n respectively represent the size of the depth-corrected low-resolution depth map, and K is the sampling factor for realizing the depth up-sampling.
In a 3 × 3 window region, the influence factor α of a certain pixel is:
αi,j = Ci(r2)·Cj(r1), i = 1, 2, 3, j = 1, 2, 3 (11)
wherein αi,j represents the influence factor at the different positions combining the x and y directions, Ci represents the influence factor generated in the x direction, and Cj represents the influence factor generated in the y direction. i and j represent the position coordinates of the 9 pixel points in the 3 × 3 window region and are positively correlated with xL and yL: i = 1, j = 1 represents the position at coordinate (xL − 1, yL − 1), and i = 2, j = 2 represents the position at coordinate (xL, yL).
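The separable structure of equation (11) can be sketched as follows. Because the piecewise constraint functions Ci and Cj are given only as formula images, a hypothetical inverse-distance weight stands in for them here; only the product form αi,j = Ci(r2)·Cj(r1) is taken from the text:

```python
def C(idx, r):
    """Stand-in spatial constraint weight for row/column index idx in
    {1, 2, 3} (idx == 2 is the block center) at distance difference r.
    This inverse-distance form is a hypothetical placeholder; the
    patent's exact piecewise function is not reproduced here."""
    offset = idx - 2                       # -1, 0, +1 within the 3x3 block
    return 1.0 / (1.0 + abs(offset + r))   # assumed weight, not from patent

def influence_factors(r1, r2):
    """alpha[(i, j)] = C_i(r2) * C_j(r1), i, j = 1, 2, 3 -- the
    separable product structure of equation (11)."""
    return {(i, j): C(i, r2) * C(j, r1)
            for i in (1, 2, 3) for j in (1, 2, 3)}
```

With any choice of Ci and Cj, the 9 factors of the 3 × 3 block are obtained from only 3 + 3 one-dimensional evaluations, which is the point of the row/column separation described above.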
In S109, the depth value correction of the depth-reliable pixel region of the initialized depth map is completed based on the influence factors; the method comprises the following specific steps:
the formula for completing the depth correction of the depth-reliable pixel region by using the real edge points and the effective depth mapped to the low-resolution space is as follows:
Figure BDA0002446566310000142
wherein DH(x, y) denotes the depth value at (x, y) in the finally output high-resolution depth map, αi,j denotes the influence factor set for the coordinate (i, j) combining the x and y directions, i, j = 1, 2, 3, and s1, t1, s2, t2 are integers;
Figure BDA0002446566310000143
respectively represent, in the depth-corrected low-resolution depth map, the depth values at
Figure BDA0002446566310000144
x1, xm and xf respectively represent the abscissa of the first pixel point in a column, the abscissa of the last pixel point in that column and the abscissa of the real edge point; ε1 and εm respectively denote the gradient difference between the real edge point and the first pixel point and the last pixel point in the column.
In one or more embodiments, in S109, completing depth value correction of the depth unreliable pixel region of the initialized depth map; the method comprises the following specific steps:
suppose the erroneous pixel point in the initialized depth map and its projection point in the low-resolution space are (x, y) and (xL, yL) respectively; the set of pixels with effective depth values in the 8-neighborhood of the pixel point (xL, yL) is:
Figure BDA0002446566310000151
where Ω is the set of effective depths, s and t are integers, and
Figure BDA0002446566310000152
represents the depth value of a pixel point in the 8-neighborhood of the pixel point (xL, yL).
Correcting unreliable pixel points:
DH(x, y) = argmin d∈Ω |d − D0(x, y)|
wherein DH(x, y) represents the depth value at (x, y) in the finally output high-resolution depth map; the depth value in Ω whose difference from the depth value of the pixel point (x, y) in the initialized depth map D0 is smallest is selected as the final depth value at (x, y) in the output high-resolution depth map DH, d being a depth value in the set Ω.
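This correction rule — choosing, among the effective 8-neighborhood depths, the one closest to the initialized value — can be sketched as follows (`depth_L` is a hypothetical mapping from low-resolution coordinates to depth values, with 0 treated as invalid):

```python
def correct_unreliable(depth_L, d0_xy, xL, yL):
    """Return the corrected depth for an unreliable pixel: among the
    effective (non-zero) depths in the 8-neighborhood of (xL, yL) in the
    corrected low-resolution map, choose the one whose difference from
    the initialized depth D0(x, y) is smallest."""
    omega = [depth_L[(xL + s, yL + t)]
             for s in (-1, 0, 1) for t in (-1, 0, 1)
             if (s, t) != (0, 0)
             and depth_L.get((xL + s, yL + t), 0) > 0]  # effective depths only
    if not omega:                       # no effective neighbor available
        return d0_xy
    return min(omega, key=lambda d: abs(d - d0_xy))
```

The fallback to the initialized value when Ω is empty is an assumption for robustness; the patent does not state what happens in that case.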
First, unreliable pixels of the low resolution depth map are corrected. Edge map for obtaining low-resolution depth image by using Sobel edge detection operator
Figure BDA0002446566310000154
And a gradient map G of the high-resolution color image, and obtaining a high-resolution depth edge map by using the mapping relation between the low-resolution and high-resolution maps
Figure BDA0002446566310000155
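The Sobel gradient magnitude used for the edge and gradient maps can be sketched as follows (a minimal pure-Python version operating on a list-of-lists image; |Gx| + |Gy| is used as the magnitude approximation, and border pixels are left at 0):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with the standard
    3x3 Sobel kernels; border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out
```

Thresholding this magnitude yields the edge points; pixels above the threshold form the edge region and the rest form the flat region, as described for the low-resolution depth edge map.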
Secondly, by utilizing the correlation among pixels and taking a 3 multiplied by 3 pixel block as a unit, judging the structural consistency of the initialized depth map and the edge region of the color map, dividing the pixel depth into a depth-reliable pixel point region and a depth-unreliable pixel point region, and finishing pixel point classification.
Finally, for pixel areas with consistent edge structures, setting influence factors on the existing effective depth by utilizing space position constraint to complete depth correction;
for pixel areas with inconsistent edge structures, effective depths are searched for by using the pixel characteristics of the low-resolution depth space, and finally a high-quality high-resolution depth map DH is output.
Outputting the high-resolution depth image DH is equivalent to completing the correction of the initialized depth map D0; through this correction, a high-quality high-resolution depth image with a clear edge structure is output.
Description of the experiments
1. Simulation conditions are as follows:
simulations were performed on an Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz under the Windows 10 system on the Matlab R2018a platform.
The present disclosure selects two test images for simulation, as shown in fig. 3(a)-3(d), where fig. 3(a) is the high-resolution color map of the Art image, fig. 3(b) is the high-resolution depth map of the Art image, fig. 3(c) is the high-resolution color map of the Reindeer image, and fig. 3(d) is the high-resolution depth map of the Reindeer image.
Before the experiment begins, the high-resolution depth images provided in the test set are down-sampled by factors of 2, 4 and 8 respectively to obtain the low-resolution depth images to be up-sampled.
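The down-sampling step can be sketched as simple decimation (the disclosure does not specify the down-sampling kernel, so nearest-neighbor decimation is assumed here):

```python
def downsample(img, K):
    """Nearest-neighbor decimation by factor K: keep every K-th pixel
    (assumed kernel; the patent does not specify one)."""
    return [row[::K] for row in img[::K]]
```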
2. The simulation method comprises the following steps:
① Bicubic interpolation;
② the joint bilateral up-sampling (JBU) method proposed by Kopf;
③ the TGV method proposed by Ferstl, which realizes depth up-sampling by utilizing an anisotropic diffusion tensor;
④ the corner point and gradient assisted depth up-sampling (EGDU) method proposed by Ren;
⑤ the depth up-sampling method of the present disclosure based on depth edges and color image guidance.
3. Simulation content:
In simulation 1, 4× depth map up-sampling was performed on the Art and Reindeer images in fig. 3(a)-3(d) using Bicubic, JBU, TGV, EGDU and the method of the present disclosure respectively, and the results are shown in fig. 4(a)-4(e) and fig. 5(a)-5(e), where:
fig. 4(a) is a high-resolution depth image obtained by up-sampling 4 times by the Bicubic method; fig. 4(b) is a high-resolution depth image obtained by up-sampling 4 times by the JBU method; fig. 4(c) is a high resolution depth image obtained by 4 times up-sampling by the TGV method; fig. 4(d) is a high resolution depth image obtained by 4 times up-sampling by the EGDU method; FIG. 4(e) is a high resolution depth image obtained by 4-fold upsampling by the method of the present disclosure;
fig. 5(a) is a high-resolution depth image obtained by up-sampling 4 times by the Bicubic method; fig. 5(b) is a high-resolution depth image obtained by up-sampling 4 times by the JBU method; fig. 5(c) is a high resolution depth image obtained by up-sampling 4 times by TGV method; fig. 5(d) is a high resolution depth image obtained by 4 times up-sampling by the EGDU method; FIG. 5(e) is a high resolution depth image obtained by 4-fold upsampling by the method of the present disclosure;
and (4) comparing the results:
The high-resolution depth images output by the Bicubic and JBU algorithms suffer from image blurring, expansion of the depth-value-missing areas and local edge loss, and the experimental output images of TGV and EGDU show that these two algorithms can neither effectively enhance the detailed edge areas nor effectively reconstruct the depth-missing areas. As can be seen from fig. 4(e) and fig. 5(e), the images output by the disclosed method are clear and rich in detail; the method not only enhances the edge details but also repairs the deep black holes in the low-resolution depth image, outputting an accurate high-resolution depth map with complete depth information. Comparing the depth maps output by the 5 methods, the images obtained by the first 4 methods exhibit blurring and artifacts, including edge mixing. In terms of subjective effect, the depth image generated by the disclosed method is clearer, the detailed edge areas are effectively enhanced, and the reconstruction of the depth-missing parts is completed, so the proposed algorithm performs better subjectively.
In simulation 2, 2×, 4× and 8× depth map up-sampling was performed on the Art test image shown in fig. 3(a) using Bicubic, JBU, TGV, EGDU and the method of the present disclosure, and data analysis was performed on the experimental results with respect to the evaluation index.
1) BPR (bad point rate)
Data analysis was performed for the three up-sampling factors, and the results are shown in table 1. From table 1 it can be seen that, from up-sampling factor 2 to 8, the proposed algorithm always obtains the minimum BPR value, which means its performance is relatively stable and its experimental results are more reliable than those of the other algorithms. The smaller the value of the image-quality evaluation index BPR, the smaller the difference between the algorithm's output image and the real high-resolution depth map; the data in table 1 therefore show that the algorithm is highly effective. The disclosed method not only brings a good subjective visual effect, but also has a very obvious advantage on the objective BPR evaluation index.
Table 1 Data analysis (BPR) of the results of up-sampling the low-resolution depth image of test image Art using 5 typical methods and the method of the present disclosure

Method             2×       4×       8×
Bilinear           0.1911   0.3373   0.5282
Bicubic            0.1870   0.3255   0.5276
JBU                0.1677   0.3117   0.6113
TGV                0.1577   0.1247   0.3096
EGDU               0.0279   0.0314   0.0606
Disclosed method   0.0164   0.0299   0.0411
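The BPR index is not defined explicitly in the text; it is commonly computed as the fraction of pixels whose absolute depth error exceeds a tolerance, which can be sketched as (the tolerance value is an assumption):

```python
def bad_pixel_rate(pred, truth, tol=1.0):
    """Fraction of pixels where |pred - truth| > tol (assumed BPR form;
    the patent does not spell out the definition or the tolerance)."""
    flat = [(p, t) for rp, rt in zip(pred, truth) for p, t in zip(rp, rt)]
    bad = sum(1 for p, t in flat if abs(p - t) > tol)
    return bad / len(flat)
```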
In a second embodiment, the present disclosure further provides a depth image up-sampling system based on depth edge points and color images;
depth image upsampling system based on depth edge points and color image guidance, comprising:
an acquisition module configured to: acquiring a low-resolution depth map and a high-resolution color map in the same scene; carrying out edge detection on the low-resolution depth map to obtain a low-resolution depth edge map; dividing a pixel region of the low-resolution depth map into a flat region and an edge region according to the low-resolution depth edge map; marking unreliable pixel points of the low-resolution depth map based on the flat area and the edge area; correcting unreliable pixel points of the low-resolution depth map to obtain an edge-enhanced low-resolution depth map;
a determination module configured to: initializing the edge-enhanced low-resolution depth map to obtain an initialized depth map; carrying out structural consistency judgment on the initialized depth map and the high-resolution color map, finishing the classification of pixel points in the initialized depth map, and obtaining a depth reliable pixel area and a depth unreliable pixel area;
a setup module configured to: according to a high-resolution gradient matrix corresponding to the high-resolution color image, searching real edge pixel points of a depth-reliable pixel region from the initialized depth map; mapping real edge pixel points in the initialized depth map into an edge-enhanced low-resolution depth map, and setting influence factors generated by space position constraint on pixel points at different positions in pixel blocks;
a correction module configured to: completing the depth value correction of the depth reliable pixel area of the initialized depth map based on the influence factor; completing the correction of the depth value of the depth unreliable pixel area of the initialized depth map; and obtaining a corrected high-resolution depth map.
In a third embodiment, the present embodiment further provides an electronic device, which includes a memory, a processor, and computer instructions stored in the memory and executed on the processor, where the computer instructions, when executed by the processor, implement the steps of the method in the first embodiment.
In a fourth embodiment, the present embodiment further provides a computer-readable storage medium for storing computer instructions, and the computer instructions, when executed by a processor, perform the steps of the method in the first embodiment.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (7)

1. The depth image up-sampling method based on depth marginal points and color image guidance is characterized by comprising the following steps:
acquiring a low-resolution depth map and a high-resolution color map in the same scene;
carrying out edge detection on the low-resolution depth map to obtain a low-resolution depth edge map; dividing a pixel area of the low-resolution depth map into a flat area and an edge area according to the low-resolution depth edge map; marking unreliable pixel points of the low-resolution depth map based on the flat region and the edge region; correcting unreliable pixel points of the low-resolution depth map to obtain an edge-enhanced low-resolution depth map;
initializing the edge-enhanced low-resolution depth map to obtain an initialized depth map; carrying out structural consistency judgment on the initialized depth map and the high-resolution color map, finishing the classification of pixel points in the initialized depth map, and obtaining a depth reliable pixel area and a depth unreliable pixel area;
according to a high-resolution gradient matrix corresponding to the high-resolution color image, searching real edge pixel points of a depth-reliable pixel region from the initialized depth map; mapping real edge pixel points in the initialized depth map into an edge-enhanced low-resolution depth map, and setting influence factors generated by space position constraint on pixel points at different positions in a pixel block;
completing the depth value correction of the depth reliable pixel area of the initialized depth map based on the influence factor; completing the correction of the depth value of the depth unreliable pixel area of the initialized depth map; obtaining a corrected high-resolution depth map;
marking unreliable pixel points of the low-resolution depth map based on the flat region and the edge region; the method comprises the following specific steps:
firstly, marking the pixel points with the depth value of 0 as unreliable pixel points;
secondly, a 3 × 3 image block is taken in the low-resolution depth map DL; when the image block is in a flat area, if the number of times that the depth value difference between the central pixel point and a neighborhood pixel point is not less than 3 exceeds the threshold t1, t1 being set to 3, the central pixel point is marked as an unreliable pixel point;
when the image block is in an edge area, if the number of times that the depth value difference between the central pixel point and a neighborhood pixel point is not less than 3 exceeds the threshold t1, t1 being set to 3, the central pixel point is marked as an unreliable pixel point;
when the image block is in the flat area and the edge area at the same time, the low-resolution depth edge map is used to correspond one by one to the edge area in the image block, the pixel points in the edge area of the image block are compared with the pixel points adjacent to the edge area, and if the number of times that the depth value difference is not less than 3 is greater than t2, t2 being set to 2, they are marked as unreliable pixel points; the pixel points in the flat area of the image block are compared with their adjacent pixel points by using the flat area, and if the number of times that the depth value difference is not less than 3 is greater than t2, t2 being set to 2, they are marked as unreliable pixel points; the unreliable pixel points of the low-resolution depth map are corrected to obtain an edge-enhanced low-resolution depth map; the method comprises the following specific steps:
for unreliable pixel points in a flat area or an edge area, filling the depth value by utilizing bicubic interpolation of 8 neighborhood reliable pixel points according to the edge distribution of a low-resolution depth edge map;
for the pixel points of which 8 neighborhoods are not located in a flat area or an edge area at the same time, filling the pixel points by using the mean value of the depth values of the adjacent reliable pixel points in the corresponding area to obtain an edge-enhanced low-resolution depth map;
performing structural consistency judgment on the initialized depth map and the high-resolution color map, and completing classification of pixel points in the initialized depth map to obtain a depth-reliable pixel area and a depth-unreliable pixel area; the method comprises the following specific steps:
firstly, selecting a 3 × 3 pixel block A centered on a certain pixel point O in the high-resolution depth space, and respectively determining the region to which each pixel point in the pixel block A belongs, wherein the region comprises: an edge region and a flat region;
secondly, finding a pixel block B corresponding to the pixel block A in the high-resolution gradient space, similarly determining regions where 9 pixel points of the pixel block B are located, performing consistency judgment on the region characteristics of the pixel points in the two pixel blocks, and judging as a depth-reliable pixel point if the region characteristics of the pixel points in the two pixel blocks are the same; otherwise, projecting the pixel points with inconsistent area characteristics in the two pixel blocks into the low-resolution depth map, and judging.
2. The method of claim 1, wherein edge detection is performed on the low resolution depth map to obtain a low resolution depth edge map; the method comprises the following specific steps:
and (4) extracting edge points of the low-resolution depth map by using a Sobel operator to obtain the low-resolution depth edge map.
3. The method of claim 1, wherein the low resolution depth edge map is divided into a flat region and an edge region; the method comprises the following specific steps: and extracting Sobel edge points of the low-resolution depth map, wherein the extracted edge points are edge regions, and the points which are not extracted are flat regions.
4. The method of claim 1, wherein initializing the edge-enhanced low resolution depth map results in an initialized depth map; the method comprises the following specific steps:
and carrying out bicubic interpolation on the edge-enhanced low-resolution depth map to obtain an initialized depth map.
5. The depth image up-sampling system based on depth marginal points and color image guidance is characterized by comprising the following components:
an acquisition module configured to: acquiring a low-resolution depth map and a high-resolution color map in the same scene; carrying out edge detection on the low-resolution depth map to obtain a low-resolution depth edge map; dividing a pixel area of the low-resolution depth map into a flat area and an edge area according to the low-resolution depth edge map; marking unreliable pixel points of the low-resolution depth map based on the flat region and the edge region; correcting unreliable pixel points of the low-resolution depth map to obtain an edge-enhanced low-resolution depth map;
a determination module configured to: initializing the edge-enhanced low-resolution depth map to obtain an initialized depth map; carrying out structural consistency judgment on the initialized depth map and the high-resolution color map, finishing the classification of pixel points in the initialized depth map, and obtaining a depth reliable pixel area and a depth unreliable pixel area;
a setup module configured to: according to a high-resolution gradient matrix corresponding to the high-resolution color image, searching real edge pixel points of a depth-reliable pixel region from the initialized depth map; mapping real edge pixel points in the initialized depth map into an edge-enhanced low-resolution depth map, and setting influence factors generated by space position constraint on pixel points at different positions in a pixel block;
a correction module configured to: completing the depth value correction of the depth reliable pixel area of the initialized depth map based on the influence factor; completing the correction of the depth value of the depth unreliable pixel area of the initialized depth map; obtaining a corrected high-resolution depth map;
marking unreliable pixel points of the low-resolution depth map based on the flat region and the edge region; the method comprises the following specific steps:
firstly, marking the pixel point with the depth value of 0 as an unreliable pixel point;
secondly, a 3 × 3 image block is taken in the low-resolution depth map DL; when the image block is in a flat area, if the number of times that the depth value difference between the central pixel point and a neighborhood pixel point is not less than 3 exceeds the threshold t1, t1 being set to 3, the central pixel point is marked as an unreliable pixel point;
when the image block is in an edge area, if the number of times that the depth value difference between the central pixel point and a neighborhood pixel point is not less than 3 exceeds the threshold t1, t1 being set to 3, the central pixel point is marked as an unreliable pixel point;
when the image block is in the flat area and the edge area at the same time, the low-resolution depth edge map is used to correspond one by one to the edge area in the image block, the pixel points in the edge area of the image block are compared with the pixel points adjacent to the edge area, and if the number of times that the depth value difference is not less than 3 is greater than t2, t2 being set to 2, they are marked as unreliable pixel points; the pixel points in the flat area of the image block are compared with their adjacent pixel points by using the flat area, and if the number of times that the depth value difference is not less than 3 is greater than t2, t2 being set to 2, they are marked as unreliable pixel points; the unreliable pixel points of the low-resolution depth map are corrected to obtain an edge-enhanced low-resolution depth map; the method comprises the following specific steps:
for unreliable pixel points in a flat area or an edge area, filling the depth value by utilizing bicubic interpolation of 8 neighborhood reliable pixel points according to the edge distribution of a low-resolution depth edge map;
for the pixel points of which 8 neighborhoods are not located in a flat area or an edge area at the same time, filling the pixel points by using the mean value of the depth values of the adjacent reliable pixel points in the corresponding area to obtain an edge-enhanced low-resolution depth map;
carrying out structural consistency judgment on the initialized depth map and the high-resolution color map, finishing the classification of pixel points in the initialized depth map, and obtaining a depth reliable pixel area and a depth unreliable pixel area; the method comprises the following specific steps:
firstly, selecting a 3 × 3 pixel block A centered on a certain pixel point O in the high-resolution depth space, and respectively determining the region to which each pixel point in the pixel block A belongs, wherein the region comprises: an edge region and a flat region;
secondly, finding a pixel block B corresponding to the pixel block A in the high-resolution gradient space, similarly determining regions where 9 pixel points of the pixel block B are located, performing consistency judgment on the region characteristics of the pixel points in the two pixel blocks, and judging as a depth-reliable pixel point if the region characteristics of the pixel points in the two pixel blocks are the same; otherwise, projecting the pixel points with inconsistent area characteristics in the two pixel blocks into the low-resolution depth map, and judging.
6. An electronic device comprising a memory and a processor and computer instructions stored on the memory and executable on the processor, the computer instructions when executed by the processor performing the method of any of claims 1-4.
7. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1 to 4.
CN202010280991.9A 2020-04-10 2020-04-10 Depth image up-sampling method and system based on depth marginal point and color image Expired - Fee Related CN111489383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010280991.9A CN111489383B (en) 2020-04-10 2020-04-10 Depth image up-sampling method and system based on depth marginal point and color image


Publications (2)

Publication Number Publication Date
CN111489383A CN111489383A (en) 2020-08-04
CN111489383B true CN111489383B (en) 2022-06-10

Family

ID=71798244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010280991.9A Expired - Fee Related CN111489383B (en) 2020-04-10 2020-04-10 Depth image up-sampling method and system based on depth marginal point and color image

Country Status (1)

Country Link
CN (1) CN111489383B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233191A (en) * 2020-09-18 2021-01-15 南京理工大学 Depth map colorizing method
CN113284081B (en) * 2021-07-20 2021-10-22 杭州小影创新科技股份有限公司 Depth map super-resolution optimization method and device, processing equipment and storage medium
CN116883255A (en) * 2023-05-22 2023-10-13 北京拙河科技有限公司 Boundary correction method and device for high-precision light field image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609977A (en) * 2012-01-12 2012-07-25 浙江大学 Depth integration and curved-surface evolution based multi-viewpoint three-dimensional reconstruction method
CN103854257A (en) * 2012-12-07 2014-06-11 山东财经大学 Depth image enhancement method based on self-adaptation trilateral filtering
WO2016193393A1 (en) * 2015-06-05 2016-12-08 Université Du Luxembourg Real-time temporal filtering and super-resolution of depth image sequences
CN106651938A (en) * 2017-01-17 2017-05-10 湖南优象科技有限公司 Depth map enhancement method blending high-resolution color image
CN107563963A (en) * 2017-08-11 2018-01-09 北京航空航天大学 A kind of method based on individual depth map super-resolution rebuilding
CN107689050A (en) * 2017-08-15 2018-02-13 武汉科技大学 A kind of depth image top sampling method based on Color Image Edge guiding
CN108293136A (en) * 2015-09-23 2018-07-17 诺基亚技术有限公司 Method, apparatus and computer program product for encoding 360 degree of panoramic videos
CN110866882A (en) * 2019-11-21 2020-03-06 湖南工程学院 Layered joint bilateral filtering depth map restoration algorithm based on depth confidence
WO2020056769A1 (en) * 2018-09-21 2020-03-26 Intel Corporation Method and system of facial resolution upsampling for image processing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8983177B2 (en) * 2013-02-01 2015-03-17 Mitsubishi Electric Research Laboratories, Inc. Method for increasing resolutions of depth images


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Depth image inpainting method based on pixel filtering and median filtering; Liu Jizhong et al.; Journal of Optoelectronics·Laser (《光电子激光》); 2018-05-31; pp. 539-544 *


Similar Documents

Publication Publication Date Title
CN106651938B (en) A kind of depth map Enhancement Method merging high-resolution colour picture
CN111489383B (en) Depth image up-sampling method and system based on depth marginal point and color image
JP6561216B2 (en) Generating intermediate views using optical flow
Waechter et al. Let there be color! Large-scale texturing of 3D reconstructions
CN104574347B (en) Satellite in orbit image geometry positioning accuracy evaluation method based on multi- source Remote Sensing Data data
US20190251675A1 (en) Image processing method, image processing device and storage medium
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
CN107665483B (en) Calibration-free convenient monocular head fisheye image distortion correction method
CN109584156A (en) Micro- sequence image splicing method and device
CN109447930B (en) Wavelet domain light field full-focusing image generation algorithm
CN107689050B (en) Depth image up-sampling method based on color image edge guide
CN111626927B (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN111899295B (en) Monocular scene depth prediction method based on deep learning
CN103020898B (en) Sequence iris image super resolution ratio reconstruction method
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
CN111861888A (en) Image processing method, image processing device, electronic equipment and storage medium
Liu et al. Asflow: Unsupervised optical flow learning with adaptive pyramid sampling
CN111179173A (en) Image splicing method based on discrete wavelet transform and gradient fusion algorithm
CN103310482B (en) A kind of three-dimensional rebuilding method and system
CN109345444B (en) Super-resolution stereoscopic image construction method with enhanced depth perception
CN108615221B (en) Light field angle super-resolution method and device based on shearing two-dimensional polar line plan
CN111369435B (en) Color image depth up-sampling method and system based on self-adaptive stable model
CN107392986B (en) Image depth of field rendering method based on Gaussian pyramid and anisotropic filtering
CN111126418A (en) Oblique image matching method based on planar perspective projection
CN116934592A (en) Image stitching method, system, equipment and medium based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220610