CN108416803A - Scene depth recovery method based on multi-information fusion of a deep neural network - Google Patents


Info

Publication number
CN108416803A
CN108416803A (application CN201810208334.6A)
Authority
CN
China
Prior art keywords
depth
boundary
image
resolution
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810208334.6A
Other languages
Chinese (zh)
Other versions
CN108416803B (en)
Inventor
叶昕辰
段祥越
严倩羽
李豪杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201810208334.6A
Publication of CN108416803A
Application granted
Publication of CN108416803B
Status: Expired - Fee Related


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076: Scaling based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention is a scene depth recovery method based on multi-information fusion with a deep neural network, and belongs to the field of image processing. The method uses a deep convolutional network to predict the boundaries of the depth image, and uses the predicted boundaries to guide interpolation toward a high-quality depth map. Using the color image to assist boundary prediction allows boundaries that are indistinct in the low-resolution depth image to be predicted more reliably, and color-guided interpolation makes the recovered depth image consistent with the spatial structure of the actual scene. The method is simple to program and easy to implement. Because depth values are computed region by region according to the predicted boundaries, computation is fast, interference between the depth values of different regions is avoided, accuracy is high, and the recovered high-resolution depth image is clear with sharp boundaries.

Description

Scene depth recovery method based on multi-information fusion of a deep neural network
Technical field
The invention belongs to the field of image processing. It relates to using a deep convolutional network to predict depth-image boundaries and using those boundaries to guide interpolation toward a high-quality depth map; in particular, it is a scene depth recovery method based on multi-information fusion with a deep neural network.
Scene depth is essential for natural scene understanding and is widely used in three-dimensional (3D) modeling, visualization, autonomous driving, and so on. However, because of the complexity of real scenes and the limitations of imaging sensors, neither the accuracy nor the resolution of the acquired scene depth is sufficient for practical applications. For example, the depth image acquired by Microsoft's current second-generation Kinect (Kinect2) has a resolution of only 512 × 424, while the corresponding color image has a resolution of 1920 × 1080. In practice, the resolution of the acquired depth information generally needs to be increased before use.
In general, one approach to recovering a high-resolution scene depth image is to use the corresponding color image to guide bilateral interpolation of the depth image. Some existing methods design an energy function based on the correspondence between the textures of the color image and the depth image (J. Yang, X. Ye, K. Li, C. Hou, and Y. Wang, "Color-guided depth recovery from RGB-D data using an adaptive autoregressive model," IEEE TIP, vol. 23, no. 8, pp. 3443-3458, 2014); optimizing the energy function makes the value distribution of the recovered depth image match that of the low-resolution image while remaining compatible with the color texture. Other methods build a boundary dictionary that stores the correspondence between low-resolution depth-map textures and high-resolution depth-map textures (Jun Xie, R. S. Feris, and Ming-Ting Sun, "Edge-guided single depth image super resolution," IEEE Transactions on Image Processing, vol. 25, no. 1, pp. 428, 2016), so that boundaries retrieved from the dictionary guide the interpolation of the high-resolution depth image. However, the boundaries obtained by this method are not smooth enough, and the information of the color image is not used, so the image quality is limited.
The color image provides high-resolution scene information and can supply rich texture cues for scene depth recovery, whereas the scene depth map itself contains little texture relative to the actual scene. The difficult point of depth recovery is the recovery of boundary texture. Based on this, deep learning can be used to design a network that fuses color-image and depth-image information to predict smoother boundaries, and the predicted boundaries can then be combined with the color image to recover the scene depth image.
Invention content
The present invention aims to overcome the deficiencies of the prior art by providing a scene depth recovery method based on multi-information fusion with a deep neural network. It can be observed that the texture structure of a scene depth image is simpler than the texture of the color image, and that recovering accurate boundaries is the difficult part of depth-image recovery. Based on this, the method uses deep learning to design a network that fuses the color image and the depth image to predict smooth boundaries, and combines the predicted boundaries with the color image to recover the scene depth image.
The technical scheme of the present invention is a scene depth recovery method based on multi-information fusion with a deep neural network, the method comprising the following steps:
In the first step, prepare the training data;
The training data consist of high-resolution color images, low-resolution depth images, and the boundary images corresponding to the high-resolution depth images.
In the second step, build the boundary prediction network. The input color image passes through one convolution and one residual block containing two convolutions, yielding the feature map of the color branch. The input depth image is upsampled twice using deconvolution operations until its resolution equals that of the color image; before each deconvolution it passes through 3 convolutions, of which the last two use a residual-network structure, which helps the network converge. After fusing with the feature map of the high-resolution color image, the network predicts the boundary image corresponding to the high-resolution depth image;
In the third step, build the loss function and train the network;
The loss function L(w) measures, on the training data, the gap between the boundary result E predicted from the color image I and the low-resolution depth map D_l and the ground-truth boundary E_gt:

L(w) = ||f(I, D_l; w) - E_gt||_2^2

where E = f(I, D_l; w) denotes the network inference process and ||·||_2^2 denotes the square of the 2-norm. The training process continually optimizes the network parameters w on the training data until L(w) converges, yielding the final trained network f(·; w).
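As an illustrative sketch (not part of the claimed method; array shapes are hypothetical), the loss above is simply the squared 2-norm between the predicted and ground-truth boundary maps:

```python
import numpy as np

def boundary_loss(pred_edges: np.ndarray, gt_edges: np.ndarray) -> float:
    """Squared 2-norm between a predicted boundary map E and ground truth E_gt."""
    diff = pred_edges - gt_edges
    return float(np.sum(diff ** 2))

# Example: a perfect prediction has zero loss.
E_gt = np.zeros((4, 4))
E_gt[2, :] = 1.0           # a horizontal boundary of four pixels
assert boundary_loss(E_gt, E_gt) == 0.0
assert boundary_loss(np.zeros((4, 4)), E_gt) == 4.0  # four boundary pixels missed
```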
In the fourth step, predict the boundary of the high-resolution scene depth image corresponding to the low-resolution depth map. Given the color image I and the low-resolution depth map D_l on the test set, the trained network produces the boundary E.
In the fifth step, perform region-wise interpolation or copying from the color image I and the low-resolution depth map D_l to obtain the high-resolution depth map D_h:
5-1) Dilate the predicted boundary to divide the image into a smooth region and a boundary region; in the smooth region, depth values are copied directly, giving the smooth-region depth map D_smooth;
5-2) Interpolate within the boundary region to obtain the boundary-region depth map D_edge;
5-2-1) Compute the Gaussian weight d between adjacent pixels x, y of the color image:

d = G_σ(I_x - I_y)

where σ is the parameter of the Gaussian function, set to 0.5. A larger d indicates greater similarity between the two pixels. I_x and I_y denote the color values of the color image at x and y, respectively.
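The text writes d = G_σ(I_x - I_y) without spelling out the form of G_σ; a standard Gaussian kernel on the color difference (an assumption here) would be:

```python
import numpy as np

def gaussian_affinity(ix, iy, sigma: float = 0.5) -> float:
    """Gaussian weight between two color values; larger means more similar.
    The exact form of G_sigma is not given in the text;
    exp(-||Ix - Iy||^2 / (2 sigma^2)) is assumed here."""
    diff = np.asarray(ix, dtype=float) - np.asarray(iy, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))

# Identical colors give the maximum weight of 1; different colors give less.
assert gaussian_affinity([0.2, 0.4, 0.6], [0.2, 0.4, 0.6]) == 1.0
assert gaussian_affinity([0.0, 0.0, 0.0], [1.0, 0.0, 0.0]) < 1.0
```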
5-2-2) Judge whether the adjacent pixels x, y lie on opposite sides of a depth-image boundary, formulated as 1(x, y; E), which indicates whether x and y lie on the same side of the boundary in E; the indicator 1(·) equals 1 when the condition holds and 0 otherwise.
5-2-3) Based on the above judgment, perform point-by-point interpolation according to the following equation:

D_h(x) = (1/K) Σ_{y∈Ω(x)} G_σ(I_x - I_y) · 1(x, y; E) · D_l(y)

where D_h(x) and D_l(y) denote the value of the high-resolution depth map at x and of the low-resolution depth map at y, respectively, K is the normalization factor, and Ω(x) denotes the set of points of the low-resolution image around x.
5-3) Merge the results of the two steps above to obtain the final depth result D_h = D_smooth + D_edge.
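As an illustrative sketch of the region split and merge in step 5 (the one-pixel dilation radius and the masking scheme are assumptions; the text fixes neither):

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """One-pixel 4-neighborhood dilation of a binary boundary mask."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def merge_regions(d_copy: np.ndarray, d_interp: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """D_h = D_smooth + D_edge: copied values in the smooth region,
    interpolated values in the dilated boundary region."""
    band = dilate(edges.astype(bool))
    d_smooth = np.where(band, 0.0, d_copy)   # smooth region only
    d_edge = np.where(band, d_interp, 0.0)   # boundary region only
    return d_smooth + d_edge

edges = np.zeros((5, 5), dtype=bool)
edges[2, 2] = True
band = dilate(edges)
assert band.sum() == 5  # the boundary pixel itself plus its 4 neighbors
```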
The beneficial effects of the invention are:
The present invention is based on the prior knowledge that a low-resolution depth image is blurred mainly in boundary regions. It predicts the boundaries of the high-resolution depth image with a network and then applies an interpolation algorithm to obtain a high-quality depth image. It has the following features:
1. The program is simple and easy to implement, and a high-quality depth image can be obtained;
2. The method splits scene-depth recovery into two steps: first predict the boundaries of the high-resolution depth image with the network, then interpolate using the boundaries and the high-resolution color image;
3. The algorithm predicts the depth-image boundaries with a deep network, giving smooth and clear boundaries; combined with color-guided neighborhood interpolation, the resulting depth image is clear with sharp boundaries.
Description of the drawings
Fig. 1 is the implementation flowchart, taking a low-resolution depth image block downsampled four times (4×) as an example.
Fig. 2 shows the initial data, where: (a) low-quality depth map; (b) high-resolution color map.
Fig. 3 illustrates the cases where adjacent pixels lie on the same side or on opposite sides of a boundary, where (a) shows x and y on the same side of the boundary, and (b) and (c) show x and y on opposite sides of the boundary.
Fig. 4 shows the recovery results on two groups of data and the comparison with another method, where: (a) color-depth standard data set; (b) joint-filtering result (Yijun Li, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang, "Deep joint image filtering," in Proc. ECCV, 2016); (c) result of the present invention.
Specific implementation mode
The scene depth recovery method based on multi-information fusion with a deep neural network of the present invention is described in detail below with reference to an embodiment and the drawings.
As shown in Fig. 1, the method (taking 4× as an example) comprises the following steps:
In the first step, prepare the initial data;
The initial data consist of low-resolution depth maps and high-resolution color maps of the same viewpoint; one group of data is shown in Fig. 2. For network training, the data set uses the official Middlebury data (http://vision.middlebury.edu), in which 38 color-depth image pairs are used for training and 6 color-depth image pairs for testing. For the training data, 15 × 15 depth image blocks are cropped from the training images with a stride of 10 pixels. The corresponding color-image stride is 40 pixels with a block size of 60 × 60, ultimately forming 13860 image pairs for training.
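The block cropping described above can be sketched as follows (an illustration only; `crop_blocks` is a hypothetical helper, not part of the patent):

```python
import numpy as np

def crop_blocks(img: np.ndarray, size: int, stride: int):
    """Crop all size x size blocks from img at the given stride."""
    h, w = img.shape[:2]
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, stride)
            for c in range(0, w - size + 1, stride)]

# A toy 35 x 35 "low-resolution depth map" yields a 3 x 3 grid of 15 x 15 blocks.
depth = np.zeros((35, 35))
blocks = crop_blocks(depth, size=15, stride=10)
assert len(blocks) == 9
assert blocks[0].shape == (15, 15)
```

The 4× relation between the depth and color patches (15 × 15 at stride 10 versus 60 × 60 at stride 40) keeps the two sets of blocks spatially aligned.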
In the second step, build the boundary prediction network; the left half of Fig. 1 shows the network structure. The input color image passes through one convolution and one residual block containing two convolutions, yielding the feature map of the color branch. The input depth image undergoes two upsampling operations until its resolution equals that of the color image; before each upsampling it passes through one convolution and one residual block containing two convolutions. The residual-network structure helps the network converge. After fusing with the feature map of the high-resolution color image, the two sets of feature maps pass through three convolutions to predict the boundary image corresponding to the high-resolution depth image;
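A quick sanity check of the resolutions in the depth branch: two successive 2× upsampling stages take a 15 × 15 depth block to the 60 × 60 size of the color block (bookkeeping sketch only; convolutions and residual blocks are assumed to preserve spatial size, and layer widths are not specified in the text):

```python
def depth_branch_resolution(h: int, w: int, upsample_stages: int = 2, factor: int = 2):
    """Spatial size after the given number of x`factor` upsampling stages
    (convolutions and residual blocks are assumed to preserve spatial size)."""
    for _ in range(upsample_stages):
        h, w = h * factor, w * factor
    return h, w

# A 15 x 15 low-resolution depth block becomes 60 x 60, matching the color block.
assert depth_branch_resolution(15, 15) == (60, 60)
```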
In the third step, build the loss function and train the network;
The loss function L(w) measures, on the training data, the gap between the boundary result E predicted from the color image I and the low-resolution depth map D_l and the ground-truth boundary E_gt:

L(w) = ||f(I, D_l; w) - E_gt||_2^2

where E = f(I, D_l; w) denotes the network inference process and ||·||_2^2 denotes the square of the 2-norm. The training process continually optimizes the network parameters w on the training data until L(w) converges, yielding the final trained network f(·; w). During training, the low-resolution depth image, the corresponding high-resolution image, and the high-resolution boundary image are input together; the boundary image predicted by the network is continually compared with the high-resolution boundary image and the network parameters are updated automatically, thereby achieving training. The initial learning rate is set to 0.001 and is halved every 50 epochs. Training ends when the decrease of the network loss stabilizes and the loss no longer falls.
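A sketch of the step learning-rate schedule, assuming the 50-epoch reduction described above means halving (the original phrasing is ambiguous):

```python
def learning_rate(epoch: int, base_lr: float = 0.001, step: int = 50) -> float:
    """Step schedule: start at base_lr and halve every `step` epochs."""
    return base_lr * (0.5 ** (epoch // step))

assert learning_rate(0) == 0.001
assert learning_rate(49) == 0.001     # unchanged within the first 50 epochs
assert learning_rate(50) == 0.0005    # halved after 50 epochs
assert learning_rate(100) == 0.00025  # halved again after 100 epochs
```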
In the fourth step, predict the boundary of the high-resolution scene depth image corresponding to the low-resolution depth map. Given the color image I and the low-resolution depth map D_l on the test set, infer the boundary E through the network.
In the fifth step, perform region-wise interpolation or copying from the color image I and the low-resolution depth map D_l to obtain the high-resolution depth map D_h:
5-1) Dilate the predicted boundary to divide the image into a smooth region and a boundary region; in the smooth region, depth values are copied directly, giving the smooth-region depth map D_smooth;
5-2) Interpolate within the boundary region to obtain the boundary-region depth map D_edge;
5-2-1) Compute the Gaussian weight d between adjacent pixels x, y of the color image:

d = G_σ(I_x - I_y)

where σ is the parameter of the Gaussian function, set to 0.5. A larger d indicates greater similarity between the two pixels. I_x and I_y denote the color values of the color image at x and y, respectively.
5-2-2) Judge whether the adjacent pixels x, y lie on opposite sides of a depth-image boundary, formulated as 1(x, y; E), which indicates whether x and y lie on the same side of the boundary in E; the indicator 1(·) equals 1 when the condition holds and 0 otherwise. There are three cases, as shown in Fig. 3:
a) The line segment between points x and y neither overlaps nor intersects the boundary; the condition holds;
b) The line segment between points x and y intersects the boundary; the condition does not hold;
c) The line segment between points x and y has pixels overlapping the boundary; the condition does not hold.
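The three cases can be tested by sampling the pixels along the segment between x and y and checking them against the boundary map. A sketch (the unit-step sampling of the segment is an assumption; the text does not specify how the segment is traced):

```python
import numpy as np

def same_side(x, y, edges: np.ndarray) -> int:
    """Indicator 1(x, y; E): 1 if the segment between pixels x and y neither
    crosses nor touches a boundary pixel in E (case a), else 0 (cases b, c)."""
    (r0, c0), (r1, c1) = x, y
    steps = max(abs(r1 - r0), abs(c1 - c0), 1)
    for t in range(steps + 1):            # sample the segment pixel by pixel
        r = round(r0 + (r1 - r0) * t / steps)
        c = round(c0 + (c1 - c0) * t / steps)
        if edges[r, c]:
            return 0
    return 1

E = np.zeros((5, 5), dtype=bool)
E[:, 2] = True                             # vertical boundary in column 2
assert same_side((1, 0), (3, 1), E) == 1   # both pixels left of the boundary
assert same_side((2, 0), (2, 4), E) == 0   # segment crosses the boundary
```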
5-2-3) Based on the above judgment, perform point-by-point interpolation according to the following equation:

D_h(x) = (1/K) Σ_{y∈Ω(x)} G_σ(I_x - I_y) · 1(x, y; E) · D_l(y)

where D_h(x) and D_l(y) denote the value of the high-resolution depth map at x and of the low-resolution depth map at y, respectively, K is the normalization factor, and Ω(x) denotes the set of points of the low-resolution image around x.
5-3) Merge the results of the two steps above to obtain the final depth result D_h = D_smooth + D_edge.
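The point-by-point interpolation of steps 5-2-1 to 5-2-3 can be sketched for a single high-resolution pixel as follows (illustrative only; the square neighborhood Ω(x), the exponential form of G_σ, and the simplified side test that merely skips boundary-marked samples are all assumptions here, not the claimed implementation):

```python
import numpy as np

def interp_at(x, color_hr, depth_lr, edges_lr, scale=4, radius=1, sigma=0.5):
    """Color-guided interpolation of D_h at pixel x of the high-resolution grid:
    D_h(x) = (1/K) * sum_y G_sigma(I_x - I_y) * 1(x, y; E) * D_l(y),
    with Omega(x) taken as the (2*radius+1)^2 low-resolution neighbors of x."""
    r, c = x
    lr, lc = r // scale, c // scale          # nearest low-resolution site
    num, K = 0.0, 0.0
    h, w = depth_lr.shape
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            yr, yc = lr + dr, lc + dc
            if not (0 <= yr < h and 0 <= yc < w):
                continue
            if edges_lr[yr, yc]:             # simplified indicator: skip boundary samples
                continue
            diff = color_hr[r, c] - color_hr[yr * scale, yc * scale]
            wgt = float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))
            num += wgt * depth_lr[yr, yc]
            K += wgt
    return num / K if K > 0 else depth_lr[lr, lc]

# Uniform color and uniform depth: the interpolated value equals that depth.
color = np.zeros((8, 8, 3))
depth = np.full((2, 2), 3.0)
edges = np.zeros((2, 2), dtype=bool)
assert interp_at((5, 5), color, depth, edges) == 3.0
```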
The recovery result of this method on one group of data and the comparison with another method are shown in Fig. 4, where (a) is the color-depth standard data set, (b) is the joint-filtering result (Yijun Li, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang, "Deep joint image filtering," in Proc. ECCV, 2016), and (c) is the result of the present invention.

Claims (2)

1. A scene depth recovery method based on multi-information fusion with a deep neural network, characterized by comprising the following steps:
In the first step, prepare the training data, including high-resolution color images, low-resolution depth images, and the boundary images corresponding to the high-resolution depth images;
In the second step, build the boundary prediction network: the input color image passes through one convolution and one residual block containing two convolutions, yielding the feature map of the color branch; the input depth image is upsampled twice using deconvolution operations until its resolution equals that of the color image, passing through 3 convolutions before each deconvolution, of which the last two use a residual-network structure; after fusing with the feature map of the high-resolution color image, the network predicts the boundary image corresponding to the high-resolution depth image;
In the third step, build the loss function and train the network;
The loss function L(w) measures, on the training data, the gap between the boundary result E predicted from the color image I and the low-resolution depth map D_l and the ground-truth boundary E_gt:

L(w) = ||f(I, D_l; w) - E_gt||_2^2

where E = f(I, D_l; w) denotes the network inference process and ||·||_2^2 denotes the square of the 2-norm; the training process continually optimizes the network parameters w on the training data until L(w) converges, yielding the final trained network f(·; w);
In the fourth step, predict the boundary of the high-resolution scene depth image corresponding to the low-resolution depth map;
given the color image I and the low-resolution depth map D_l on the test set, obtain the boundary E through the network;
In the fifth step, perform region-wise interpolation or copying from the color image I and the low-resolution depth map D_l to obtain the high-resolution depth map D_h.
2. The scene depth recovery method based on multi-information fusion with a deep neural network according to claim 1, characterized in that:
the fifth step of performing region-wise interpolation or copying from the color image I and the low-resolution depth map D_l to obtain the high-resolution depth map D_h comprises the following steps:
5-1) Dilate the predicted boundary to divide the image into a smooth region and a boundary region; in the smooth region, depth values are copied directly, giving the smooth-region depth map D_smooth;
5-2) Interpolate within the boundary region to obtain the boundary-region depth map D_edge;
5-2-1) Compute the Gaussian weight d between adjacent pixels x, y of the color image:

d = G_σ(I_x - I_y)

where σ is the parameter of the Gaussian function, set to 0.5; a larger d indicates greater similarity between the two pixels; I_x and I_y denote the color values of the color image at x and y, respectively;
5-2-2) Judge whether the adjacent pixels x, y lie on opposite sides of a depth-image boundary, formulated as 1(x, y; E), which indicates whether x and y lie on the same side of the boundary in E; the indicator 1(·) equals 1 when the condition holds and 0 otherwise; there are three cases:
a) The line segment between points x and y neither overlaps nor intersects the boundary; the condition holds;
b) The line segment between points x and y intersects the boundary; the condition does not hold;
c) The line segment between points x and y has pixels overlapping the boundary; the condition does not hold;
5-2-3) Based on the above judgment, perform point-by-point interpolation according to the following equation:

D_h(x) = (1/K) Σ_{y∈Ω(x)} G_σ(I_x - I_y) · 1(x, y; E) · D_l(y)

where D_h(x) and D_l(y) denote the value of the high-resolution depth map at x and of the low-resolution depth map at y, respectively, K is the normalization factor, and Ω(x) denotes the set of points of the low-resolution image around x;
5-3) Merge the results of the two steps above to obtain the final depth result D_h = D_smooth + D_edge.
CN201810208334.6A 2018-03-14 2018-03-14 Scene depth recovery method based on multi-information fusion of deep neural network Expired - Fee Related CN108416803B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810208334.6A CN108416803B (en) 2018-03-14 2018-03-14 Scene depth recovery method based on multi-information fusion of deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810208334.6A CN108416803B (en) 2018-03-14 2018-03-14 Scene depth recovery method based on multi-information fusion of deep neural network

Publications (2)

Publication Number Publication Date
CN108416803A true CN108416803A (en) 2018-08-17
CN108416803B CN108416803B (en) 2020-01-24

Family

ID=63131442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810208334.6A Expired - Fee Related CN108416803B (en) 2018-03-14 2018-03-14 Scene depth recovery method based on multi-information fusion of deep neural network

Country Status (1)

Country Link
CN (1) CN108416803B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111289A (en) * 2019-04-28 2019-08-09 深圳市商汤科技有限公司 A kind of image processing method and device
CN110136061A (en) * 2019-05-10 2019-08-16 电子科技大学中山学院 Resolution improving method and system based on depth convolution prediction and interpolation
CN111062981A (en) * 2019-12-13 2020-04-24 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN111260711A (en) * 2020-01-10 2020-06-09 大连理工大学 Parallax estimation method for weakly supervised trusted cost propagation
CN111738921A (en) * 2020-06-15 2020-10-02 大连理工大学 Depth super-resolution method for multi-information progressive fusion based on depth neural network
CN111784659A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Image detection method and device, electronic equipment and storage medium
CN113763449A (en) * 2021-08-25 2021-12-07 北京的卢深视科技有限公司 Depth recovery method and device, electronic equipment and storage medium
CN113781538A (en) * 2021-07-27 2021-12-10 武汉中海庭数据技术有限公司 Image depth information fusion method and system, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120007950A1 (en) * 2010-07-09 2012-01-12 Yang Jeonghyu Method and device for converting 3d images
CN102722863A (en) * 2012-04-16 2012-10-10 天津大学 Super-resolution reconstruction method for depth map by adopting autoregressive model
EP2677496A1 (en) * 2012-06-20 2013-12-25 Vestel Elektronik Sanayi ve Ticaret A.S. Method and device for determining a depth image
CN105741265A (en) * 2016-01-21 2016-07-06 中国科学院深圳先进技术研究院 Depth image processing method and depth image processing device
CN106447714A (en) * 2016-09-13 2017-02-22 大连理工大学 Scene depth recovery method based on signal decomposition
CN107194893A (en) * 2017-05-22 2017-09-22 西安电子科技大学 Depth image ultra-resolution method based on convolutional neural networks
CN107680140A (en) * 2017-10-18 2018-02-09 江南大学 A kind of depth image high-resolution reconstruction method based on Kinect cameras

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120007950A1 (en) * 2010-07-09 2012-01-12 Yang Jeonghyu Method and device for converting 3d images
CN102722863A (en) * 2012-04-16 2012-10-10 天津大学 Super-resolution reconstruction method for depth map by adopting autoregressive model
EP2677496A1 (en) * 2012-06-20 2013-12-25 Vestel Elektronik Sanayi ve Ticaret A.S. Method and device for determining a depth image
CN105741265A (en) * 2016-01-21 2016-07-06 中国科学院深圳先进技术研究院 Depth image processing method and depth image processing device
CN106447714A (en) * 2016-09-13 2017-02-22 大连理工大学 Scene depth recovery method based on signal decomposition
CN107194893A (en) * 2017-05-22 2017-09-22 西安电子科技大学 Depth image ultra-resolution method based on convolutional neural networks
CN107680140A (en) * 2017-10-18 2018-02-09 江南大学 A kind of depth image high-resolution reconstruction method based on Kinect cameras

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JINGYU YANG et al.: "Depth super-resolution via fully edge-augmented guidance", 2017 IEEE Visual Communications and Image Processing (VCIP) *
JUN XIE et al.: "Edge-Guided Single Depth Image Super Resolution", IEEE Transactions on Image Processing *
WENTIAN ZHOU et al.: "Guided deep network for depth map super-resolution: How much can color help?", 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) *
YAN DONG et al.: "Depth map upsampling using joint edge-guided convolutional neural network for virtual view synthesizing", Journal of Electronic Imaging *
YIJUN LI et al.: "Deep Joint Image Filtering", ECCV 2016 *
SUI Yao (隋瑶) et al.: "Depth image restoration based on an improved geodesic distance transform", Information Technology *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111289A (en) * 2019-04-28 2019-08-09 深圳市商汤科技有限公司 A kind of image processing method and device
CN110111289B (en) * 2019-04-28 2021-09-28 深圳市商汤科技有限公司 Image processing method and device
CN110136061A (en) * 2019-05-10 2019-08-16 电子科技大学中山学院 Resolution improving method and system based on depth convolution prediction and interpolation
CN110136061B (en) * 2019-05-10 2023-02-28 电子科技大学中山学院 Resolution improving method and system based on depth convolution prediction and interpolation
CN111062981A (en) * 2019-12-13 2020-04-24 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN111062981B (en) * 2019-12-13 2023-05-05 腾讯科技(深圳)有限公司 Image processing method, device and storage medium
CN111260711B (en) * 2020-01-10 2021-08-10 大连理工大学 Parallax estimation method for weakly supervised trusted cost propagation
CN111260711A (en) * 2020-01-10 2020-06-09 大连理工大学 Parallax estimation method for weakly supervised trusted cost propagation
CN111738921A (en) * 2020-06-15 2020-10-02 大连理工大学 Depth super-resolution method for multi-information progressive fusion based on depth neural network
CN111784659A (en) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Image detection method and device, electronic equipment and storage medium
CN113781538A (en) * 2021-07-27 2021-12-10 武汉中海庭数据技术有限公司 Image depth information fusion method and system, electronic equipment and storage medium
CN113781538B (en) * 2021-07-27 2024-02-13 武汉中海庭数据技术有限公司 Image depth information fusion method, system, electronic equipment and storage medium
CN113763449A (en) * 2021-08-25 2021-12-07 北京的卢深视科技有限公司 Depth recovery method and device, electronic equipment and storage medium
CN113763449B (en) * 2021-08-25 2022-08-12 合肥的卢深视科技有限公司 Depth recovery method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108416803B (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN108416803A (en) A kind of scene depth restoration methods of the Multi-information acquisition based on deep neural network
CN111062872B (en) Image super-resolution reconstruction method and system based on edge detection
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN109671023A (en) A kind of secondary method for reconstructing of face image super-resolution
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN110310319B (en) Illumination-separated single-view human body clothing geometric detail reconstruction method and device
CN108389226A (en) A kind of unsupervised depth prediction approach based on convolutional neural networks and binocular parallax
CN103020898B (en) Sequence iris image super resolution ratio reconstruction method
CN106339996B (en) A kind of Image Blind deblurring method based on super Laplace prior
CN109978786A (en) A kind of Kinect depth map restorative procedure based on convolutional neural networks
CN106067161A (en) A kind of method that image is carried out super-resolution
CN106023230B (en) A kind of dense matching method of suitable deformation pattern
CN112734890B (en) Face replacement method and device based on three-dimensional reconstruction
CN110009722A (en) Three-dimensional rebuilding method and device
CN109272447A (en) A kind of depth map super-resolution method
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN114897742B (en) Image restoration method with texture and structural features fused twice
CN111696033A (en) Real image super-resolution model and method for learning cascaded hourglass network structure based on angular point guide
CN114049464A (en) Reconstruction method and device of three-dimensional model
CN111539888A (en) Neural network image defogging method based on pyramid channel feature attention
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
CN112163996A (en) Flat-angle video fusion method based on image processing
CN107958489B (en) Curved surface reconstruction method and device
CN111260706B (en) Dense depth map calculation method based on monocular camera
CN104182931B (en) Super resolution method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200124

Termination date: 20210314