CN114519682A - Depth map enhancement method based on autoregressive model - Google Patents

Depth map enhancement method based on autoregressive model Download PDF

Info

Publication number
CN114519682A
Authority
CN
China
Prior art keywords
depth map
depth
bilateral
confidence
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210040794.9A
Other languages
Chinese (zh)
Inventor
杨洋
于红蓓
赵岩
曾兰玲
王新宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University
Original Assignee
Jiangsu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University filed Critical Jiangsu University
Priority to CN202210040794.9A priority Critical patent/CN114519682A/en
Publication of CN114519682A publication Critical patent/CN114519682A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G06T 5/70
    • G06T 5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering

Abstract

The invention discloses a depth map enhancement method based on an autoregressive model. Given the image and parameters input by the user, the method first obtains the corresponding confidence from the depth image and then performs correction and up-sampling operations on it. For depth map correction, lower confidence is assigned to untrusted edge regions and higher confidence to trusted flat regions. For depth map up-sampling, the low-resolution depth map is first interpolated to high resolution, with lower confidence assigned to the untrusted interpolated depth values and higher confidence to the trusted original depth values. Finally, the image to be processed, the obtained confidence and the corresponding reference image are input into the model, and the model is solved, achieving the goal of depth map enhancement. Experimental results show that, while maintaining processing speed, the method overcomes artifacts and smoothly preserves edges, achieving a better depth map enhancement effect.

Description

Depth map enhancement method based on autoregressive model
Technical Field
The invention belongs to the technical field of computational photography, and particularly relates to a depth map enhancement method based on an autoregressive model.
Background
In the field of computer vision, depth information has always played an irreplaceable role. Existing ways of obtaining depth information include laser range scanners, Kinect cameras, TOF cameras, and the like. However, each of these has its own problems: the laser range scanner is highly accurate but has poor real-time performance and suffers data loss; the Kinect camera is fast, but its range is short and the acquired depth maps exhibit a "hole" phenomenon; the TOF camera offers strong real-time performance, but the depth images it acquires have low resolution and a large amount of random noise. Besides instrument-based acquisition, obtaining depth maps through deep learning is a current trend in computer vision, but depth images obtained this way often contain many erroneous values. Depth map enhancement techniques are therefore needed to process depth images acquired by imaging devices or predicted by deep learning, improving their resolution and correcting erroneous depth values to meet the requirements of related applications.
Existing depth map enhancement methods can be divided into two types: traditional methods and solver-based methods. Classical traditional methods include bilateral filtering, guided image filtering and weighted median filtering; these are flexible, fast and efficient, but they perform poorly on the depth image correction task and cannot achieve a good correction effect. The fast bilateral solver is a representative solver-based method; it is fast and applicable to various computer vision tasks, but the depth maps it produces can exhibit artifacts.
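As a concrete point of comparison, the classical bilateral filter mentioned above can be sketched in a few lines of numpy. The brute-force grayscale form below (with illustrative parameter values, not the experiment's settings) shows the joint space-and-range weighting that the bilateral-grid machinery used later in this document accelerates.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D grayscale image in [0, 1].

    Each output pixel is a weighted mean of its neighbors, where the
    weight falls off with both spatial distance and intensity difference,
    so flat regions are smoothed while strong edges are preserved.
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            rng_w = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng_w
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

# A noisy step edge: bilateral filtering denoises the flat regions
# while the step itself stays sharp (range weights across the step ~ 0).
rng = np.random.default_rng(0)
img = np.where(np.arange(16) < 8, 0.2, 0.8)[None, :].repeat(16, axis=0)
noisy = img + rng.normal(0, 0.02, img.shape)
smoothed = bilateral_filter(noisy)
```

Because the intensity gap at the step (0.6) is far larger than `sigma_r` (0.1), pixels across the edge receive negligible range weight, which is exactly the edge-preserving behavior the background section credits these filters with.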
The invention provides a novel depth map enhancement method based on an autoregressive model, which can better correct a depth image, realize depth map up-sampling, better overcome the artifact phenomenon produced by the fast bilateral solver, and has higher computational efficiency.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a depth map enhancement method based on an autoregressive model, which can realize the up-sampling of the depth map and the correction of the depth value while ensuring the efficiency and can effectively overcome the artifact phenomenon.
A depth map enhancement method based on an autoregressive model comprises the following steps:
Step 1, setting the bilateral grid parameters σ_l (luminance bandwidth), σ_{u,v} (chroma bandwidth), σ_{x,y} (spatial bandwidth) and the smoothing parameter λ, and selecting the depth image to be enhanced.
Step 2, solving a target function of the depth image, and enhancing the depth image to obtain a processed depth image;
Further, the objective function used for enhancing the depth image is:

$$\min_{O}\;\frac{\lambda}{2}\sum_{i,j}\hat{A}_{i,j}\,(O_i-O_j)^2+\sum_{i}C_i\,(O_i-I_i)^2$$

where O is the depth value that needs to be enhanced, I is the input depth value, λ is an adjustable smoothing parameter, $\hat{A}$ is the (approximated) bilateral affine matrix, and C is the confidence of the depth value at each position; the computation of the confidence is divided into the two cases of depth map correction and depth map up-sampling.
Further, the simplified solution of the bilateral affine matrix is as follows.

According to the definition of the bilateral affine matrix:

$$A_{i,j}=\exp\!\left(-\frac{\lVert p_i^{x,y}-p_j^{x,y}\rVert^2}{2\sigma_{x,y}^2}-\frac{(p_i^{l}-p_j^{l})^2}{2\sigma_l^2}-\frac{\lVert p_i^{u,v}-p_j^{u,v}\rVert^2}{2\sigma_{u,v}^2}\right)$$

where $p_i^{x,y}$ represents the spatial position of pixel i, and $p_i^{l}, p_i^{u}, p_i^{v}$ represent the l, u, v channel values of pixel i; the closer pixels i and j are in space and in color, the larger the corresponding entry $A_{i,j}$ of the bilateral affine matrix.
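The definition above can be evaluated directly for a tiny image. The numpy sketch below simplifies it to a single luminance channel (dropping the u/v chroma terms) purely for illustration; each pixel i is the feature vector (x_i, y_i, l_i).

```python
import numpy as np

def bilateral_affinity(luma, sigma_xy=3.0, sigma_l=0.2):
    """Dense bilateral affinity A[i, j] for a tiny grayscale image.

    A[i, j] = exp(-||pos_i - pos_j||^2 / (2 sigma_xy^2)
                  - (l_i - l_j)^2     / (2 sigma_l^2))
    Pixels that are close in space AND similar in luminance get
    weights near 1; dissimilar pixels get weights near 0.
    """
    h, w = luma.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    lum = luma.ravel().astype(float)
    d_pos = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    d_lum = (lum[:, None] - lum[None, :]) ** 2
    return np.exp(-d_pos / (2 * sigma_xy ** 2) - d_lum / (2 * sigma_l ** 2))

img = np.array([[0.1, 0.1, 0.9],
                [0.1, 0.1, 0.9]])
A = bilateral_affinity(img)
# Adjacent same-luminance pixels (0 and 1) are strongly connected,
# while the pixel across the luminance step (pixel 2) is not.
```

The quadratic memory cost of this dense matrix is exactly why the document goes on to approximate A with the splat/blur/slice factorization.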
However, solving the objective function proposed above directly from the definition formula of the bilateral affine matrix is impractical: a depth map typically contains many thousands of depth values, and the time required would be prohibitive. The bilateral affine matrix therefore needs to be simplified; this simplification is the key to solving the objective function.
According to previous studies, the bilateral affine matrix can be approximated by the following equation:

A ≈ S^T B S

where S, B and S^T correspond to the splat, blur and slice operations, respectively. Suppose a set of pixel values X needs to be processed; each pixel is determined by its position (x, y) and color value (l, u, v), so X can be regarded as a point set P = (p_x, p_y, p_l, p_u, p_v). As analyzed above, computing a bilateral affine matrix that solves X directly is too expensive in time, so the approximation is realized by the following three steps:

Splat: each pixel value x_i is accumulated onto the nearest bilateral-grid vertices of p_i.

Blur: a smoothing operation is performed on the vertex values of the bilateral space.

Slice: the new pixel value is interpolated from the blurred values of the vertices nearest p_i.
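The three steps can be sketched concretely. The minimal bilateral grid below operates on a 1-D grayscale signal with a 2-D grid over (position, intensity); it uses nearest-vertex splatting instead of multilinear weights and a simple [1, 2, 1]/4 blur, so it is an illustration of the splat/blur/slice idea rather than the exact construction of the method.

```python
import numpy as np

def splat(signal, n_pos, n_lum):
    """Splat: accumulate each sample onto its nearest bilateral-grid
    vertex (nearest-neighbor assignment, for brevity)."""
    n = len(signal)
    grid_val = np.zeros((n_pos, n_lum))
    grid_cnt = np.zeros((n_pos, n_lum))
    pi = np.minimum(np.arange(n) * n_pos // n, n_pos - 1)
    li = np.minimum((signal * n_lum).astype(int), n_lum - 1)
    for i in range(n):
        grid_val[pi[i], li[i]] += signal[i]
        grid_cnt[pi[i], li[i]] += 1
    return grid_val, grid_cnt, pi, li

def blur(grid):
    """Blur: a [1, 2, 1]/4 smoothing along each grid axis."""
    g = grid.copy()
    for ax in (0, 1):
        g = (np.roll(g, 1, ax) + 2 * g + np.roll(g, -1, ax)) / 4.0
    return g

def slice_(grid_val, grid_cnt, pi, li):
    """Slice: read the blurred values back at each sample's vertex,
    normalizing blurred sums by blurred counts."""
    num, den = blur(grid_val), blur(grid_cnt)
    return np.array([num[pi[i], li[i]] / max(den[pi[i], li[i]], 1e-9)
                     for i in range(len(pi))])

sig = np.concatenate([np.full(8, 0.2), np.full(8, 0.8)])
sig_noisy = sig + np.linspace(-0.01, 0.01, 16)
gv, gc, pi, li = splat(sig_noisy, n_pos=4, n_lum=8)
filtered = slice_(gv, gc, pi, li)
# The two intensity plateaus land in different luminance bins, so the
# blur never mixes them: the step survives while noise is averaged out.
```

Because the grid has far fewer vertices than the signal has samples, all heavy work happens in the small grid, which is the efficiency argument made in the surrounding text.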
The matrix $\hat{A}$ used in the objective function is defined by:

$$\hat{A}=S^{T}D_m^{-1}D_n\,B\,D_n D_m^{-1}S$$

where $D_m$ and $D_n$ are diagonal matrices.
If depth map correction is performed, the confidence is lower where the position is an edge region and higher where it is a smooth region. Specifically, the confidence solving formula is

[formula not recoverable from the source: a Gaussian envelope with bandwidth σ_c of the local edge response computed from the depth image and the guide image]

where I is the depth image to be corrected, G is the color guide image, and σ_c is the smoothing parameter, set to 0.125 in the method of the present invention.
If depth map up-sampling is performed, the low-resolution depth map is first interpolated to the corresponding resolution; the confidence of interpolated positions is naturally low, and the larger their difference from the true depth values, the lower it is. To solve the confidence in this case, the method mainly adopts a Gaussian-function envelope: the local confidence C_f is an f × f matrix following a Gaussian envelope, where f is the up-sampling factor [the exact envelope formula is not recoverable from the source]. Subsequently, C_f is copied m = a / f times vertically and n = b / f times horizontally to obtain the confidence C at the corresponding resolution, where a is the length of the depth map to be enhanced and b is its width:

$$C=\begin{bmatrix}C_f&\cdots&C_f\\\vdots&&\vdots\\C_f&\cdots&C_f\end{bmatrix}\quad(m\times n\ \text{blocks})$$
In the objective function, I is the actual depth value at each position of the depth map to be processed, λ is an adjustable smoothing parameter, and $\hat{A}$ is the bilateral affine matrix, used in the objective function in its approximate form.
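The Gaussian-envelope tiling described above can be sketched as follows. Since the exact envelope formula is not legible in the source, `upsample_confidence` below uses one plausible realization (confidence peaks at the tile cell holding a retained low-resolution sample and decays with distance from it); the function name, the choice of σ, and the peak location are assumptions for illustration only.

```python
import numpy as np

def upsample_confidence(a, b, f, sigma=None):
    """Illustrative confidence map for f-times depth upsampling.

    Builds an f x f Gaussian envelope C_f peaking at the cell that
    holds an original low-resolution sample (taken as cell (0, 0) here),
    then tiles it m = a // f times vertically and n = b // f times
    horizontally to cover the full a x b high-resolution map.
    """
    if sigma is None:
        sigma = f / 2.0  # assumed bandwidth, not from the source
    idx = np.arange(f)
    d2 = idx[:, None] ** 2 + idx[None, :] ** 2  # squared distance to cell (0, 0)
    c_f = np.exp(-d2 / (2 * sigma ** 2))
    m, n = a // f, b // f
    return np.tile(c_f, (m, n))

C = upsample_confidence(a=8, b=12, f=4)
# Cells holding original samples get confidence 1; purely interpolated
# cells decay toward 0, and the pattern repeats every f pixels.
```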
Further, the method for simplifying the objective function and solving the corresponding depth values according to the simplified bilateral affine matrix is as follows:

the approximated bilateral affine matrix

$$\hat{A}=S^{T}D_m^{-1}D_n\,B\,D_n D_m^{-1}S$$

is substituted into the objective formula. Taking the partial derivative with respect to $O_i$ and setting it to 0, the objective function is further simplified as:

$$\lambda\sum_{j}\hat{A}_{i,j}\,(O_i-O_j)+C_i\,(O_i-I_i)=0$$

Writing the above equation in matrix form (using the fact that the rows of the bistochastized $\hat{A}$ sum to 1) yields:

$$\lambda\,(\mathbb{I}-\hat{A})\,O+\mathrm{diag}(C)\,(O-I)=0$$

The above formula is arranged as:

$$\left[\lambda\,(\mathbb{I}-\hat{A})+\mathrm{diag}(C)\right]O=\mathrm{diag}(C)\,I$$

Then, letting $O=S^{T}Y$ and substituting the approximate form of $\hat{A}$, we obtain:

$$\left[\lambda\,(\mathbb{I}-S^{T}D_m^{-1}D_n B D_n D_m^{-1}S)+\mathrm{diag}(C)\right]S^{T}Y=\mathrm{diag}(C)\,I$$

Multiplying both sides of the above equation by S gives:

$$\lambda\,(SS^{T}-SS^{T}D_m^{-1}D_n B D_n D_m^{-1}SS^{T})\,Y+S\,\mathrm{diag}(C)\,S^{T}Y=S\,\mathrm{diag}(C)\,I$$

where $SS^{T}=D_m$, $S\,\mathrm{diag}(C)\,S^{T}=\mathrm{diag}(SC)$ and $\mathrm{diag}(C)\,I=C\odot I$; the formula is therefore equivalent to:

$$\left[\lambda\,(D_m-D_n B D_n)+\mathrm{diag}(SC)\right]Y=S\,(C\odot I)$$

Finally, the depth values to be solved are obtained by performing a one-step slice operation on the solved Y.
Further, the slice operation on the solved Y multiplies the solved Y matrix by the transpose of the splat matrix S, and is calculated by the following formula:

O = S^T Y
the invention has the beneficial effects that:
the depth images directly acquired by the prior art have the problems of low resolution, wrong depth values and the like, so that the depth images need to be repaired by a depth map enhancement technology before being applied to a computer vision task so as to meet the requirements of related applications. The invention can realize a better enhancement effect on the depth map, namely correct the error depth value, better overcome the generation of artifact phenomenon in the sampling on the depth map, and have quicker processing efficiency.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a comparison graph of different upsampling multiples of the present invention and a bilateral filtering and fast bilateral solver. Wherein:
Fig. 2(a), 2(b) and 2(c) are the results of 2×, 4× and 8× up-sampling with the present invention. Fig. 2(d), 2(e) and 2(f) are the results of 2×, 4× and 8× up-sampling with bilateral filtering. Fig. 2(g), 2(h) and 2(i) are the results of 2×, 4× and 8× up-sampling with the fast bilateral solver. The parameters of the present invention are λ = 5, σ_l = 10, σ_{u,v} = 5, σ_{x,y} = 30; the bilateral filtering parameters are r = 16, global variance 16, local variance 0.8; the fast bilateral solver parameters are λ = 5, σ_l = 10, σ_{u,v} = 5, σ_{x,y} = 30.
FIG. 3 is a comparison of the present invention with other methods for depth map correction. Wherein:
Fig. 3(a) is the depth map predicted by the residual network. Fig. 3(b) is the corrected result of the present invention, with parameters λ = 15, σ_l = 5, σ_{u,v} = 10, σ_{x,y} = [value illegible in source]. Fig. 3(c) is the result of domain transform filtering (DF), with σ_s = 8, σ_r = 4.6 (r = 16, global variance 16, local variance 0.8). Fig. 3(d) is the result of weighted median filtering (WMF), with r = 10, σ = 15.5, w = 1. Fig. 3(e) is the result corrected by the fast bilateral solver, with parameters λ = 15, σ_l = 5, σ_{u,v} = 10, σ_{x,y} = [value illegible in source]. Fig. 3(f) is the result of guide map filtering, with r = 10.
Detailed Description
The implementation of the invention is as follows: given the image and parameters input by the user, the method first obtains the corresponding confidence from the depth image, and then performs correction and up-sampling operations on it. For depth map correction, the edge regions of a depth map predicted by a fully convolutional residual network carry larger errors while smooth image regions are predicted more accurately; the method therefore assigns lower confidence to the untrusted edge regions and higher confidence to the trusted smooth image regions. For depth map up-sampling, the low-resolution depth map is first interpolated to a high-resolution depth map; the interpolated depth values are not trusted and are correspondingly given lower confidence, while the original depth values are trusted and given higher confidence. Then the simplified bilateral affine matrix of the method is solved; finally the image to be processed, the obtained confidence and the corresponding reference image are processed by the proposed method, achieving the effects of correction and up-sampling. Experimental results show that, while maintaining processing speed, the method overcomes artifacts and smoothly preserves edges, achieving a better depth map enhancement effect.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, the depth map enhancement method based on the autoregressive model provided by the present invention specifically includes the following processes:
step 1, setting matrix grid parameters, sigmal(luminance Bandwidth), σu,v(color Bandwidth), σx,y(spatial bandwidth) and a smoothing parameter lambda, the depth image to be enhanced and the corresponding color reference image are selected. The target function for establishing the target depth image for depth enhancement is as follows:
Figure BDA0003470135480000051
wherein, IiIs to input the actual depth value somewhere in the depth map to be processed; λ is an adjustable smoothing parameter;
Figure BDA0003470135480000052
the method is characterized in that the method is a bilateral affine matrix, an approximate form of the bilateral affine matrix is used in a target function, O is a depth value needing to be enhanced, and i, j represents the position of a pixel in an image; c is the confidence of the depth value at this position, and is divided into two cases of depth map correction and depth map upsampling.
Step 2, solving the simplified bilateral affine matrix and confidence coefficient in the objective function
For the objective function proposed in the above steps, two main problems need to be solved, namely solving a simplified bilateral affine matrix and solving a confidence coefficient.
The method for solving the simplified bilateral affine matrix comprises the following steps:
according to the definition formula of the bilateral affine matrix:

$$A_{i,j}=\exp\!\left(-\frac{\lVert p_i^{x,y}-p_j^{x,y}\rVert^2}{2\sigma_{x,y}^2}-\frac{(p_i^{l}-p_j^{l})^2}{2\sigma_l^2}-\frac{\lVert p_i^{u,v}-p_j^{u,v}\rVert^2}{2\sigma_{u,v}^2}\right)$$

where $p_i^{x,y}$ represents the spatial position of pixel i, $p_i^{l}, p_i^{u}, p_i^{v}$ represent the l, u, v channel values of pixel i, and σ_{x,y}, σ_l, σ_{u,v} are the three adjustable parameters; the closer pixels i and j are in space and in color, the larger the corresponding entry $A_{i,j}$ of the bilateral affine matrix.
However, solving the proposed objective function directly from the definition formula of the bilateral affine matrix is impractical: a depth map typically contains many thousands of depth values, and the time required would be prohibitive. The bilateral affine matrix therefore needs to be simplified; this simplification is the key to solving the objective function.
The bilateral affine matrix is approximated using the following equation:

A ≈ S^T B S

where S, B and S^T correspond to the splat, blur and slice processes, respectively. Assuming a set of pixel values X needs to be processed, each pixel is determined by its position (x, y) and color value (l, u, v), so X can be regarded as a point set P = (p_x, p_y, p_l, p_u, p_v). From the previous analysis, deriving a bilateral affine matrix that solves X directly is too expensive in time, so the approximation of the bilateral affine matrix is completed by the following three processes:

Splat: each pixel value x_i is accumulated onto the nearest bilateral-grid vertices of p_i.

Blur: a smoothing operation is performed on the vertex values of the bilateral space.

Slice: the new pixel value is interpolated from the blurred values of the vertices nearest p_i.
The matrix $\hat{A}$ used in the objective function is defined by:

$$\hat{A}=S^{T}D_m^{-1}D_n\,B\,D_n D_m^{-1}S$$

where $D_m$ and $D_n$ are diagonal matrices.
If depth map correction is performed, the confidence is lower where the position is an edge region and higher where it is a smooth region. Specifically, the confidence solving formula is

[formula not recoverable from the source: a Gaussian envelope with bandwidth σ_c of the local edge response computed from the depth image and the guide image]

where I is the depth image to be corrected, G is the color guide image, and σ_c is the smoothing parameter, set to 0.125 in the present invention.
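As an illustration of such edge-dependent confidence (the exact formula is not legible in the source), the sketch below assigns confidence via a Gaussian of the local depth-gradient magnitude with bandwidth σ_c; the function name and the gradient-based form are assumptions for demonstration only, not the patent's formula.

```python
import numpy as np

def correction_confidence(depth, sigma_c=0.125):
    """Illustrative confidence for depth-map correction: low near depth
    edges, high in flat regions. Uses a Gaussian of the squared local
    gradient magnitude with bandwidth sigma_c (an assumed form)."""
    gy, gx = np.gradient(depth.astype(float))
    grad2 = gx ** 2 + gy ** 2
    return np.exp(-grad2 / (2 * sigma_c ** 2))

depth = np.where(np.arange(10) < 5, 0.2, 0.8)[None, :].repeat(6, axis=0)
conf = correction_confidence(depth)
# Flat columns keep confidence ~1; the columns adjacent to the step,
# where residual-network predictions are least reliable, drop sharply.
```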
If depth map up-sampling is performed, the low-resolution depth map is first interpolated to the corresponding resolution; the confidence of interpolated positions is naturally low, and the larger their difference from the true depth values, the lower it is. The invention mainly adopts a Gaussian-function envelope method: the local confidence C_f is an f × f matrix following a Gaussian envelope, where f is the up-sampling factor [the exact envelope formula is not recoverable from the source]. Subsequently, C_f is copied m = a / f times vertically and n = b / f times horizontally to obtain the confidence C at the corresponding resolution, where a is the length of the depth map to be enhanced and b is its width:

$$C=\begin{bmatrix}C_f&\cdots&C_f\\\vdots&&\vdots\\C_f&\cdots&C_f\end{bmatrix}\quad(m\times n\ \text{blocks})$$
The method for simplifying the objective function according to the simplified bilateral affine matrix is as follows:

the approximated bilateral affine matrix

$$\hat{A}=S^{T}D_m^{-1}D_n\,B\,D_n D_m^{-1}S$$

is substituted into the target formula. Taking the partial derivative with respect to $O_i$ and setting it to 0, the objective function can be further simplified as:

$$\lambda\sum_{j}\hat{A}_{i,j}\,(O_i-O_j)+C_i\,(O_i-I_i)=0$$

Writing the above equation in matrix form (using the fact that the rows of the bistochastized $\hat{A}$ sum to 1) yields:

$$\lambda\,(\mathbb{I}-\hat{A})\,O+\mathrm{diag}(C)\,(O-I)=0$$

The above formula is arranged as:

$$\left[\lambda\,(\mathbb{I}-\hat{A})+\mathrm{diag}(C)\right]O=\mathrm{diag}(C)\,I$$

Then, letting $O=S^{T}Y$ and substituting the approximate form of $\hat{A}$, we obtain:

$$\left[\lambda\,(\mathbb{I}-S^{T}D_m^{-1}D_n B D_n D_m^{-1}S)+\mathrm{diag}(C)\right]S^{T}Y=\mathrm{diag}(C)\,I$$

Multiplying both sides of the above equation by S gives:

$$\lambda\,(SS^{T}-SS^{T}D_m^{-1}D_n B D_n D_m^{-1}SS^{T})\,Y+S\,\mathrm{diag}(C)\,S^{T}Y=S\,\mathrm{diag}(C)\,I$$

where $SS^{T}=D_m$, $S\,\mathrm{diag}(C)\,S^{T}=\mathrm{diag}(SC)$ and $\mathrm{diag}(C)\,I=C\odot I$; the above formula is therefore equivalent to:

$$\left[\lambda\,(D_m-D_n B D_n)+\mathrm{diag}(SC)\right]Y=S\,(C\odot I)$$

Finally, only a one-step slice operation on the solved Y is needed to obtain the depth values to be solved.
The slice operation on the solved Y multiplies the solved Y matrix by the transpose of the splat matrix S, and is calculated by the following formula:

O = S^T Y
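The reduced linear system and the final slice can be exercised on a toy dense instance. The sketch below uses a hard-assignment splat over 6 pixels and 3 grid vertices and takes D_n = I for brevity (in the full method D_n comes from bistochastizing the blur matrix B), so it illustrates the algebra of the system rather than the exact construction.

```python
import numpy as np

# Toy dense instance of the reduced bilateral-space system
#   [lambda (D_m - D_n B D_n) + diag(S C)] Y = S (C * I),   O = S^T Y
# with hard-assignment splat and D_n = I (simplifying assumption).

S = np.array([[1, 1, 0, 0, 0, 0],    # 6 pixels splatted onto 3 vertices
              [0, 0, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1]], dtype=float)
B = np.array([[1.5, 0.5, 0.0],       # symmetric vertex blur; row sums
              [0.5, 1.0, 0.5],       # match D_m, so D_m - B is a Laplacian
              [0.0, 0.5, 1.5]])
I = np.array([0.0, 0.0, 0.1, 0.9, 1.0, 1.0])  # noisy step to smooth
C = np.ones(6)                        # full confidence everywhere
lam = 1.0

D_m = S @ S.T                         # diag of per-vertex pixel counts
M = lam * (D_m - B) + np.diag(S @ C)  # system matrix in bilateral space
b = S @ (C * I)                       # splatted confidence-weighted target
Y = np.linalg.solve(M, b)             # solve the small vertex-sized system
O = S.T @ Y                           # slice: back to per-pixel depths
```

The system is only vertex-sized (3 × 3 here instead of 6 × 6), which is the efficiency claim of the solver-based formulation; in practice it would be solved with a sparse or conjugate-gradient method rather than a dense solve.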
Step 3, taking the depth image to be enhanced as the target image and the corresponding color image as the reference image, inputting them into the solver with the embedded objective function, solving the corresponding confidence according to the two different cases, and finally constructing and outputting the enhanced depth image.
In addition, the size of the bilateral-space grid mentioned in step 2 can be controlled: the larger the grid cells, the faster the method runs, but an overly large grid degrades the depth map enhancement effect. In this embodiment, enhancing a 1390 × 1110-pixel depth map on a machine with an Intel i5-4200H CPU @ 2.80 GHz and 16 GB of memory takes about 1 s, which meets the computational-efficiency requirements of depth map repair in real applications.
As shown in Fig. 2, which compares the present invention with bilateral filtering and the fast bilateral solver at different up-sampling factors, the proposed method better preserves picture details and sharpens edges at the same up-sampling factor, achieving a better up-sampling effect. Fig. 3 compares the present invention with four other methods for depth map correction, namely domain transform filtering, weighted median filtering, the fast bilateral solver and guide map filtering; the comparison shows that the correction effect of the proposed method is superior to the others, especially in the edge regions of the depth map predicted by the residual network. The following table compares the running times; all methods process images of 1390 × 1110 pixels. The proposed method is efficient and can meet real-time requirements in practical applications.
Method                       Run time (s)
The invention                1.07
Bilateral filtering          4.07
Fast bilateral solver        1
Domain transform filtering   0.25
Weighted median filtering    1.21
Guide map filtering          11.96
The above-listed series of detailed descriptions are merely specific illustrations of possible embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent means or modifications that do not depart from the technical spirit of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A depth map enhancement method based on an autoregressive model is characterized by comprising the following steps:
s1, setting matrix grid parameter sigmal(Bright)Degree bandwidth), σu,v(color Bandwidth), σx,y(spatial bandwidth) and a smoothing parameter lambda, selecting a depth image to be enhanced, and establishing a target function of the depth image;
and S2, solving an objective function of the depth image, and enhancing the depth image to obtain a processed depth image.
2. The method for enhancing depth map based on autoregressive model as claimed in claim 1, wherein in S1, the objective function of the depth image to be enhanced is established as follows:
$$\min_{O}\;\frac{\lambda}{2}\sum_{i,j}\hat{A}_{i,j}\,(O_i-O_j)^2+\sum_{i}C_i\,(O_i-I_i)^2$$

where I is the actual depth value at a position of the depth map to be processed, λ is an adjustable smoothing parameter, $\hat{A}$ is the bilateral affine matrix, O is the depth value to be enhanced, i, j represent the position of the pixel in the image, and C is the confidence of the depth value, which is divided into the two cases of depth map correction and depth map up-sampling.
3. The depth map enhancement method based on the autoregressive model as claimed in claim 2, wherein, for depth map correction, the confidence is lower where the position is an edge region and higher where the position is a smooth region; the confidence solving formula is

[formula not recoverable from the source: a Gaussian envelope with bandwidth σ_c of the local edge response computed from the depth image and the guide image]

where I is the depth image to be corrected, G is the color guide image, and σ_c is the smoothing parameter, set to 0.125.
4. The depth map enhancement method based on the autoregressive model as claimed in claim 2, wherein, for depth map up-sampling, the low-resolution depth map is first interpolated to the corresponding resolution; the confidence of interpolated positions is low, and the larger their difference from the true depth values, the lower it is; the local confidence C_f is then computed by a Gaussian-envelope method as an f × f matrix, where f is the up-sampling factor [the exact envelope formula is not recoverable from the source]; subsequently, C_f is copied m = a / f times vertically and n = b / f times horizontally to obtain the confidence C at the corresponding resolution, where a is the length of the depth map to be enhanced and b is its width:

$$C=\begin{bmatrix}C_f&\cdots&C_f\\\vdots&&\vdots\\C_f&\cdots&C_f\end{bmatrix}\quad(m\times n\ \text{blocks})$$
5. The method of claim 2, wherein the objective function is solved using a simplified bilateral affine matrix in S2.
6. The depth map enhancement method based on the autoregressive model as claimed in claim 2, wherein the simplified solution of the bilateral affine matrix is as follows:

the bilateral affine matrix is defined as:

$$A_{i,j}=\exp\!\left(-\frac{\lVert p_i^{x,y}-p_j^{x,y}\rVert^2}{2\sigma_{x,y}^2}-\frac{(p_i^{l}-p_j^{l})^2}{2\sigma_l^2}-\frac{\lVert p_i^{u,v}-p_j^{u,v}\rVert^2}{2\sigma_{u,v}^2}\right)$$

where $p_i^{x,y}$ represents the spatial position of pixel i, $p_i^{l}, p_i^{u}, p_i^{v}$ represent the l, u, v channel values of pixel i, and σ_{x,y}, σ_l, σ_{u,v} are three adjustable parameters; the closer pixels i and j are in space and in color, the larger the corresponding entry $A_{i,j}$ of the bilateral affine matrix;

the bilateral affine matrix is approximated as:

A ≈ S^T B S

where S, B and S^T correspond to the splat, blur and slice processes, respectively, as follows:

Splat: each pixel value x_i is accumulated onto the nearest bilateral-grid vertices of p_i;

Blur: a smoothing operation is performed on the vertex values of the bilateral space;

Slice: the new pixel value is interpolated from the blurred values of the vertices nearest p_i;

thus the matrix $\hat{A}$ used in the objective function is defined by:

$$\hat{A}=S^{T}D_m^{-1}D_n\,B\,D_n D_m^{-1}S$$

where $D_m$ and $D_n$ are diagonal matrices.
7. The depth map enhancement method based on the autoregressive model as claimed in claim 6, wherein the objective function is simplified according to the simplified bilateral affine matrix, and the specific method is as follows:

the approximated bilateral affine matrix

$$\hat{A}=S^{T}D_m^{-1}D_n\,B\,D_n D_m^{-1}S$$

is substituted into the formula of the objective function; taking the partial derivative with respect to $O_i$ and setting it to 0, the objective function in claim 2 is further simplified as:

$$\lambda\sum_{j}\hat{A}_{i,j}\,(O_i-O_j)+C_i\,(O_i-I_i)=0$$

Writing the above formula in matrix form (using the fact that the rows of the bistochastized $\hat{A}$ sum to 1):

$$\lambda\,(\mathbb{I}-\hat{A})\,O+\mathrm{diag}(C)\,(O-I)=0$$

The above formula is arranged as:

$$\left[\lambda\,(\mathbb{I}-\hat{A})+\mathrm{diag}(C)\right]O=\mathrm{diag}(C)\,I$$

Then, letting $O=S^{T}Y$ and substituting the approximate form of $\hat{A}$:

$$\left[\lambda\,(\mathbb{I}-S^{T}D_m^{-1}D_n B D_n D_m^{-1}S)+\mathrm{diag}(C)\right]S^{T}Y=\mathrm{diag}(C)\,I$$

Multiplying both sides of the above equation by S gives:

$$\lambda\,(SS^{T}-SS^{T}D_m^{-1}D_n B D_n D_m^{-1}SS^{T})\,Y+S\,\mathrm{diag}(C)\,S^{T}Y=S\,\mathrm{diag}(C)\,I$$

where $SS^{T}=D_m$, $S\,\mathrm{diag}(C)\,S^{T}=\mathrm{diag}(SC)$ and $\mathrm{diag}(C)\,I=C\odot I$; the above formula is therefore equivalent to:

$$\left[\lambda\,(D_m-D_n B D_n)+\mathrm{diag}(SC)\right]Y=S\,(C\odot I)$$

Finally, only a one-step slice operation on the solved Y is needed to obtain the depth values to be solved.
8. The method of claim 7, wherein the slice operation is a multiplication of the solved Y matrix by a transpose of the S matrix, and the calculation expression is as follows:
O = S^T Y.
CN202210040794.9A 2022-01-14 2022-01-14 Depth map enhancement method based on autoregressive model Pending CN114519682A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210040794.9A CN114519682A (en) 2022-01-14 2022-01-14 Depth map enhancement method based on autoregressive model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210040794.9A CN114519682A (en) 2022-01-14 2022-01-14 Depth map enhancement method based on autoregressive model

Publications (1)

Publication Number Publication Date
CN114519682A true CN114519682A (en) 2022-05-20

Family

ID=81596063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210040794.9A Pending CN114519682A (en) 2022-01-14 2022-01-14 Depth map enhancement method based on autoregressive model

Country Status (1)

Country Link
CN (1) CN114519682A (en)

Similar Documents

Publication Publication Date Title
Ma et al. Learning deep context-sensitive decomposition for low-light image enhancement
US9692939B2 (en) Device, system, and method of blind deblurring and blind super-resolution utilizing internal patch recurrence
Chan et al. An augmented Lagrangian method for total variation video restoration
Molina et al. Bayesian multichannel image restoration using compound Gauss-Markov random fields
CN112734650B (en) Virtual multi-exposure fusion based uneven illumination image enhancement method
CN107749987B (en) Digital video image stabilization method based on block motion estimation
US20150036943A1 (en) Patch-Based, Locally Content-Adaptive Image and Video Sharpening
KR100860968B1 (en) Image-resolution-improvement apparatus and method
CN111353955A (en) Image processing method, device, equipment and storage medium
CN114640885B (en) Video frame inserting method, training device and electronic equipment
Xu et al. Efficient deep image denoising via class specific convolution
CN101142614A (en) Single channel image deformation system and method using anisotropic filtering
Carbajal et al. Blind motion deblurring with pixel-wise kernel estimation via kernel prediction networks
Soh et al. Joint high dynamic range imaging and super-resolution from a single image
CN114519682A (en) Depth map enhancement method based on autoregressive model
JP2011070283A (en) Face image resolution enhancement device and program
CN110796609B (en) Low-light image enhancement method based on scale perception and detail enhancement model
CN110895790B (en) Scene image super-resolution method based on posterior degradation information estimation
CN108364258B (en) Method and system for improving image resolution
CN113506212A (en) Improved POCS-based hyperspectral image super-resolution reconstruction method
Ye et al. A sparsity-promoting image decomposition model for depth recovery
CN113822823B (en) Point neighbor restoration method and system for aerodynamic optical effect image space-variant fuzzy core
Hirsch et al. Self-calibration of optical lenses
US7526138B1 (en) Context based adaptive image resampling
Gao et al. Single Image Dehazing via Relativity-of-Gaussian

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination