CN109377524B - Method and system for recovering depth of single image - Google Patents

Method and system for recovering depth of single image

Info

Publication number
CN109377524B
Authority
CN
China
Prior art keywords
image
depth
color
test
test image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811268787.4A
Other languages
Chinese (zh)
Other versions
CN109377524A (en)
Inventor
王吉华
张全英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN201811268787.4A priority Critical patent/CN109377524B/en
Publication of CN109377524A publication Critical patent/CN109377524A/en
Application granted granted Critical
Publication of CN109377524B publication Critical patent/CN109377524B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Abstract

The invention discloses a method and a system for recovering the depth of a single image. The method comprises the following steps: receiving a single color test image, retrieving a plurality of color images similar to the test image from a color-depth image database, and acquiring the corresponding depth images to obtain training image pairs; calculating the mapping relation between the test image and the color image pixels in each training image pair; acquiring a plurality of depth gradient fields according to the depth image in each training image pair and the mapping relation; and calculating a corresponding depth image of the test image based on the plurality of depth gradient fields. The method does not need to establish a large number of model parameters, is simple to implement, and has a wide application range.

Description

Method and system for recovering depth of single image
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a method and a system for recovering the depth of a single image.
Background
The depth recovery of a single image is an important research topic in the field of computer vision, with applications in three-dimensional modeling, medical imaging, robot navigation, obstacle detection for driverless vehicles and unmanned aerial vehicles, target detection and tracking, virtual roaming, and the like. Currently, the main methods for estimating depth information include the following.
Active lighting methods employ additional scene illumination to assist depth estimation; depth is determined from the attenuation of the projected light or from differences against a predictive model. The coded-aperture approach modifies the camera lens so that the blur kernel becomes more reliable for deblurring. These methods require either improved hardware or additional external setup. Saxena et al. abstract image features such as color, texture, and shape into a Markov random field (MRF) model and estimate image depth by establishing a functional relationship between the features and depth, exploiting the equivalence between the Markov random field and the Gibbs field. However, the model is very complicated to establish, requires a large amount of prior knowledge, and is therefore limited in practical applications. Zhuo et al. use the local blur at edge positions as an initial value for depth recovery of a defocused image and propagate it to the whole image to obtain complete depth information. Ma et al. extract single-viewpoint image depth information with a blur-estimation, gradient-profile-sharpness algorithm: the degree of blur is characterized by the sharpness parameter of the gradient profile, and edge spatial information is added to extract the image contours and obtain the depth map.
Disclosure of Invention
In order to overcome the defects of complex model establishment and limited application field in the prior art, the invention provides a single image depth recovery method and a single image depth recovery system. The method is simple in algorithm, does not need to establish a complex mathematical model, and can be used for rapidly recovering a single color image.
In order to achieve the above purpose, one or more embodiments of the present disclosure adopt the following technical solutions:
a single image depth recovery method comprises the following steps:
receiving a single color test image, searching a plurality of color images similar to the test image in a color-depth image database, and acquiring corresponding depth images to obtain a training image pair;
calculating the mapping relation between the test image and the color image pixels in each training image pair;
acquiring a plurality of depth gradients according to the depth image in each training image pair and the mapping relation;
calculating a corresponding depth image of the test image based on the plurality of depth gradients.
Further, before retrieving the plurality of images similar to the test image, the method further comprises: performing brightness remapping on the color images in the color-depth image database and on the test image.
Further, the retrieving a plurality of color images similar to the test image comprises:
calculating the similarity between the color image and the test image in a color-depth image database based on the pyramid direction gradient histogram; color images with a similarity exceeding a certain threshold are selected.
Further, the calculating of the mapping relationship between the test image and the color image pixels in each training image pair is represented by a warping function:

m_k(p) = \arg\min_{m} \| f_I(p) - f_k(p + m) \|^2

wherein m_k(p) is the warping function, f_I(p) represents the feature vector of the test image I at pixel p, and f_k(p) represents the feature vector of the color image in the k-th training image pair at pixel p, k = 1, 2, ..., K.
Further, the acquiring a plurality of depth gradients comprises:

g^{(k)}(p) = \left( \partial_x D_k(p + m_k(p)), \; \partial_y D_k(p + m_k(p)) \right)^T

wherein g^{(k)}(p) denotes the k-th depth gradient, \partial_q is the gradient operator in the q-direction (q \in \{x, y\}), and D_k(p) represents the depth value of the depth image at pixel p.
Further, said computing a respective depth image for the test image based on the plurality of depth gradients comprises:
calculating the confidence of all pixels of the test image relative to each color image;
for each pixel, the confidences are arranged in ascending order and numbered 1, 2, ..., K, and a minimum k* is calculated such that the sum of the confidences numbered 1 through k* is greater than or equal to half of the total confidence sum;
the depth gradients are arranged in ascending order, and the depth gradient ranked k* at each pixel is extracted to form a final gradient field;
and normalizing each pixel value in the depth field to the interval [0, 255] to obtain a final depth map.
Further, the confidence calculation formula is:

w_k(p) = \exp\left( - \| f_I(p) - f_k(p + m_k(p)) \|^2 / \sigma^2 \right)

wherein w_k(p) represents the confidence of the test image at pixel p relative to the color image in the k-th training image pair.
One or more embodiments provide a three-dimensional reconstruction method based on a single image, including the steps of:
receiving a single color test image, and acquiring a depth image based on the depth recovery method;
and performing three-dimensional reconstruction according to the color test image and the depth image.
One or more embodiments provide a computer system comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the single image depth restoration method when executing the program.
One or more embodiments provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the single image depth restoration method.
One or more of the technical schemes have the following beneficial effects:
the method comprises the steps of obtaining a color image and a depth image which are similar to the whole scene of an image to be restored through a retrieval method, obtaining a depth gradient field corresponding to the image to be restored based on a mapping relation of the image to be restored and the color image as a bridge, and carrying out a series of post-processing on the depth gradient field to obtain a final gradient image. The method does not need to be based on a large number of model parameters and establish a complex mathematical model, can obtain a depth map with sufficient accuracy as long as enough data matched with the test sample exist in the color-depth database, and is simple to implement and wide in application range.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flowchart of a single image depth recovery method according to an embodiment of the disclosure;
fig. 2 shows the depth maps obtained by respectively applying the proposed algorithm, the Saxena algorithm, and the Zhuo algorithm to four groups of images in the first embodiment of the present disclosure.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well unless the context clearly indicates otherwise, and it should be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Example one
The embodiment discloses a method for recovering the depth of a single image, which comprises the following steps as shown in fig. 1:
step 1: receiving a single color test image, searching a plurality of color images similar to the test image in a color-depth image database, and acquiring corresponding depth images to obtain a training image pair;
step 1-1: and performing brightness remapping on the color image in the color-depth image database and the test image so that the brightness of the color image and the test image is on the same scale.
For any input image I, differences in capture conditions and illumination may cause loss of image detail due to uneven brightness, and an excessive brightness difference can in turn cause the test image to be matched to the wrong target image in the database. Brightness remapping is therefore performed on the test image to reduce errors caused by color differences between the test image and the target images.
During image matching, a color image is usually represented in the R, G, B primary color model, a cube defined in a Cartesian coordinate system. RGB is an additive model built on the superposition of light: red plus green plus blue equals white. The model has two drawbacks. First, it is not intuitive: the perceptual attributes of a color are hard to read off from its R, G, B values, so the model does not match human color perception. Second, the color space is not perceptually uniform: the perceived difference between two colors cannot be represented by the distance between the corresponding color points. The YCbCr color space, widely used in digital images and video, is different. In the YCbCr model, Y is the luminance component, Cb the blue-difference chrominance component, and Cr the red-difference chrominance component. The three components can be separated, and after separation the luminance of the image can be changed directly without affecting its color, which better matches the way humans describe and interpret color. To simplify processing and reduce computation, the algorithm first converts the RGB image into the YCbCr color space using the following formulas:
Y=0.299R+0.587G+0.114B (1)
Cb=0.564(B-Y) (2)
Cr=0.731(R-Y) (3)
the formula (1) is respectively substituted into the formulas (2) and (3), and the formula is simplified to obtain:
Figure BDA0001845517030000051
after conversion to the YCbCr color space, the contrast of the image is enhanced by processing only the luminance Y of the image, the Cb and Cr components serve to retain color information, and the human eye is more sensitive to the Y component of the video, so that the human eye will not perceive the change in image quality after the chrominance component is reduced by sub-sampling the chrominance component. After the model processing, the overall brightness of the image is improved, and then the YCbCr model is converted into the RGB model, and the formula is as follows:
R=Y+1.402Cr (5)
G=Y-0.344Cb-0.714Cr (6)
B=Y+1.772Cb (7)
after the brightness remapping processing, the brightness of the test image and the brightness of the image to be retrieved in the database are approximately on the same scale, so that the problem of image matching errors caused by overlarge brightness difference can be effectively avoided.
Step 1-2: retrieving a plurality of color images similar to the test image from an RGB-D image database based on the pyramid histogram of oriented gradients, acquiring the corresponding depth images, and forming training image pairs from the retrieved color images and their corresponding depth images;
The RGB-D image database is T = {(I_i, D_i) | i = 1, 2, ..., N}, where I_i and D_i represent a color image and its associated depth map, respectively, and N is the database size. Training pairs C = {(I_k, D_k) | k = 1, 2, ..., K} are selected from the database T by high-level image features. In this embodiment, the similarity between two images is measured using pyramid histogram of oriented gradients (PHOG) descriptors. The PHOG feature vectors of all images in the database are pre-computed and stored, and the difference between the test image I and a color image I_i from the database T is measured as the sum of squared differences (SSD) between the two corresponding feature vectors:

dist(I, I_i) = \| F_I - F_{I_i} \|^2 = \sum_{j=1}^{n} \left( F_I(j) - F_{I_i}(j) \right)^2    (8)

where F_I \in R^n is the n-dimensional PHOG feature vector of image I. For the gradient histograms, L = 3 pyramid levels and B = 8 orientation bins are used, which yields a 680-dimensional feature vector (n = 8 \times (1 + 4 + 16 + 64) = 680). The K matching pairs with the highest similarity according to the matching-distance formula are extracted and defined as the training pairs C used for depth learning.
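A minimal retrieval sketch, under the assumption that the PHOG descriptors have already been computed and stacked row-wise (descriptor extraction itself is outside this sketch); the names `phog_test`, `phog_db`, and the value K = 7 are hypothetical:

```python
import numpy as np

def retrieve_top_k(phog_test, phog_db, k=7):
    """Rank database images by the SSD of formula (8) and return the
    indices of the K most similar color images.

    phog_test : (n,) PHOG feature vector of the test image.
    phog_db   : (N, n) matrix; row i is the PHOG vector of database image I_i.
    """
    ssd = np.sum((phog_db - phog_test[None, :]) ** 2, axis=1)
    return np.argsort(ssd)[:k]   # indices of the training pairs C = {(I_k, D_k)}
```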
Step 2: calculating the mapping relation between the test image and the color image pixels in each training image pair;
the warping function m is defined over all pixel coordinates of the test image I: i → R2Searching test image I to training center color image IkK is 1,2.. K.
Figure BDA0001845517030000063
Wherein m isk(p) is the warping function, fI(p) represents the feature vector of the test image I at pixel p, fk(p) represents the feature vector of the color image at pixel p in the K-th training image pair, K being 1,2, … K.
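The text fixes only the definition of m_k(p), not the search algorithm. A brute-force sketch over a small search window illustrates formula (9) directly; the dense per-pixel feature maps `feat_I` and `feat_k` are assumed inputs, and a practical implementation would use an approximate dense-correspondence method (e.g., SIFT flow) instead:

```python
import numpy as np

def warp_offsets(feat_I, feat_k, radius=4):
    """Approximate m_k(p) = argmin_m ||f_I(p) - f_k(p + m)||^2 by an
    exhaustive search in a (2*radius+1)^2 window around each pixel.

    feat_I, feat_k : (H, W, d) dense feature maps (assumed precomputed).
    Returns an (H, W, 2) array of integer offsets (dy, dx).
    """
    H, W, _ = feat_I.shape
    offsets = np.zeros((H, W, 2), dtype=np.int32)
    for y in range(H):
        for x in range(W):
            best, best_m = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = y + dy, x + dx
                    if 0 <= qy < H and 0 <= qx < W:
                        cost = np.sum((feat_I[y, x] - feat_k[qy, qx]) ** 2)
                        if cost < best:
                            best, best_m = cost, (dy, dx)
            offsets[y, x] = best_m
    return offsets
```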
Step 3: acquiring a plurality of depth gradient fields according to the depth image in each training image pair and the mapping relation;
g^{(k)}(p) = \left( \partial_x D_k(p + m_k(p)), \; \partial_y D_k(p + m_k(p)) \right)^T    (10)

wherein g^{(k)}(p) denotes the k-th depth gradient field, \partial_q is the gradient operator in the q-direction (q \in \{x, y\}), and D_k(p) represents the depth value of the depth image at pixel p.
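Given the offsets from the sketch above, the k-th depth gradient field of formula (10) can be sampled from the gradients of the training depth map at the warped positions; a minimal NumPy sketch:

```python
import numpy as np

def depth_gradient_field(D_k, offsets):
    """g^(k)(p): gradient of D_k evaluated at the matched position p + m_k(p).

    D_k     : (H, W) depth map of the k-th training pair.
    offsets : (H, W, 2) warping offsets from warp_offsets().
    """
    gy, gx = np.gradient(D_k.astype(np.float64))   # partial derivatives in y, x
    H, W = D_k.shape
    ys, xs = np.mgrid[0:H, 0:W]
    qy = np.clip(ys + offsets[..., 0], 0, H - 1)   # warped row coordinates
    qx = np.clip(xs + offsets[..., 1], 0, W - 1)   # warped column coordinates
    return np.stack([gx[qy, qx], gy[qy, qx]], axis=-1)  # (H, W, 2): (g_x, g_y)
```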
Step 4: computing a corresponding depth image of the test image based on the plurality of depth gradient fields;
the step 4 specifically includes:
step 4-1: calculating the confidence of all pixels of the test image relative to each color image;
To verify the accuracy of the above estimation, the sampling confidence of all pixels is measured according to the matching distance. The confidence w_k(p) is defined from the normalized matching distance, here written in exponential form:

w_k(p) = \exp\left( - \| f_I(p) - f_k(p + m_k(p)) \|^2 / \sigma^2 \right)    (11)

The better the two blocks centered at p and p + m_k(p) match in feature space, the higher the confidence w_k(p). On this basis, K confidences are obtained for each pixel of the test image.
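A sketch of formula (11) under the exponential form assumed above; `sigma` is a hypothetical bandwidth parameter, since the text does not spell out the normalization:

```python
import numpy as np

def confidence(feat_I, feat_k, offsets, sigma=1.0):
    # Matching distance between the feature at p and the feature at the
    # warped position p + m_k(p), mapped to (0, 1] by a Gaussian kernel.
    H, W, _ = feat_I.shape
    ys, xs = np.mgrid[0:H, 0:W]
    qy = np.clip(ys + offsets[..., 0], 0, H - 1)
    qx = np.clip(xs + offsets[..., 1], 0, W - 1)
    d2 = np.sum((feat_I - feat_k[qy, qx]) ** 2, axis=-1)
    return np.exp(-d2 / sigma ** 2)                # (H, W) confidences w_k(p)
```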
Step 4-2: for each pixel, the confidences are arranged in ascending order and numbered 1, 2, ..., K, and a minimum k* is calculated such that the sum of the confidences numbered 1 through k* is greater than or equal to half of the total confidence sum;
k* satisfies the formula:

k^*(p) = \min \left\{ k : \sum_{j=1}^{k} \hat{w}_j(p) \ge \frac{1}{2} \sum_{j=1}^{K} \hat{w}_j(p) \right\}    (12)
step 4-3: for the depth gradient fields, arranging the depth gradients corresponding to the pixels in an ascending order, and extracting the depth gradient corresponding to the number k of each pixel to form a final gradient field;
wherein
Figure BDA0001845517030000072
Representing an ordered confidence value. The final gradient field g is created using the depth gradient values corresponding to the number k per pixel:
Figure BDA0001845517030000073
Figure BDA0001845517030000074
is the ordered depth gradient field of g.
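Steps 4-1 to 4-3 amount to a per-pixel rank selection over the K candidates, which acts as a robust, median-like filter. The sketch below follows the wording above literally (confidences sorted ascending to find k*, gradients sorted ascending per component, the gradient ranked k* extracted):

```python
import numpy as np

def fuse_gradients(grads, confs):
    """Formulas (12)-(13): per-pixel selection of the gradient ranked k*.

    grads : (K, H, W, 2) candidate depth gradient fields g^(k).
    confs : (K, H, W) confidences w_k(p).
    """
    # Step 4-2: sort confidences ascending per pixel; k* is the first rank
    # whose cumulative sum reaches half of the total (0-based index here).
    w_sorted = np.sort(confs, axis=0)
    cum = np.cumsum(w_sorted, axis=0)
    k_star = np.argmax(cum >= 0.5 * cum[-1], axis=0)          # (H, W)
    # Step 4-3: sort candidate gradients ascending (independently per
    # component, per the literal claim wording) and pick rank k*.
    g_sorted = np.sort(grads, axis=0)
    return np.take_along_axis(
        g_sorted, k_star[None, ..., None], axis=0)[0]         # (H, W, 2)
```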
Step 4-4: each pixel value in the depth field is normalized to the interval [0, 255] to obtain the final depth map.
The method can capture a reasonable natural depth field D in a global sense, and then can carry out three-dimensional reconstruction on the image.
Step 5: finally, the gradient field is combined with Poisson surface reconstruction to extract the depth map of the image and perform three-dimensional reconstruction.
The depth map D is obtained by integrating the estimated gradient field g. Given g(p) = (g_x(p), g_y(p))^T, the surface (depth field) D is obtained by minimizing the following objective function:

\min_D \iint \left( (D_x - g_x)^2 + (D_y - g_y)^2 \right) dx \, dy    (14)

wherein D_x and D_y represent the gradients of D along the x-axis and the y-axis, respectively. The solution of formula (14) is obtained by solving a Poisson equation with Neumann boundary conditions, of the form:

\nabla^2 D = \operatorname{div}(g), \qquad (\nabla D - g) \cdot n = 0 \ \text{on the boundary}    (15)

where div is the divergence operator and n is the normal vector on the boundary of the surface D.
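A sketch of the integration step: instead of discretizing the Poisson equation (15) explicitly, the least-squares objective (14) can be solved directly with a sparse solver, which imposes the free (Neumann-type) boundary implicitly; the normalization of step 4-4 is folded in. This is suitable for small images; larger ones would use a dedicated Poisson solver:

```python
import numpy as np
from scipy.sparse import vstack, identity, kron, diags
from scipy.sparse.linalg import lsqr

def integrate_gradients(g):
    """Recover D from g = (g_x, g_y) by minimizing formula (14) in the
    least-squares sense.

    g : (H, W, 2) fused gradient field, g[..., 0] = g_x, g[..., 1] = g_y.
    """
    H, W = g.shape[:2]

    def diff_op(n):
        # forward-difference operator, shape (n-1, n)
        return diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1],
                     shape=(n - 1, n))

    Dx = kron(identity(H), diff_op(W))      # d/dx on the flattened image
    Dy = kron(diff_op(H), identity(W))      # d/dy on the flattened image
    A = vstack([Dx, Dy]).tocsr()
    b = np.concatenate([g[:, :-1, 0].ravel(),   # g_x on the x-difference grid
                        g[:-1, :, 1].ravel()])  # g_y on the y-difference grid
    D = lsqr(A, b)[0].reshape(H, W)
    D -= D.min()                            # depth is defined up to a constant
    return 255.0 * D / (D.max() + 1e-8)     # step 4-4: normalize to [0, 255]
```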
In this embodiment, tests were run on the Matlab R2016a platform on a Windows 10 PC with an Intel Core 8 GHz processor and a 64 GB hard disk. Long-range and short-range images were processed with the proposed algorithm, the Saxena algorithm, and the Zhuo algorithm; the resulting depth maps are shown in fig. 2. Column 1 is the input color image, column 2 the proposed algorithm, column 3 the Saxena algorithm, and column 4 the Zhuo algorithm; the overall effect and the edge fineness of the proposed algorithm are the most satisfactory.
For a more scientific comparison of the effectiveness of the proposed algorithm, the relative error (REL) is used:

REL = \frac{1}{|\Omega|} \sum_{p \in \Omega} \frac{| D(p) - D^*(p) |}{D^*(p)}    (16)

where D represents the depth map estimated by the algorithm, D^* represents the true depth map of the input image, and \Omega is the set of image pixels. The input images in fig. 2 are numbered A, B, C, and D, and the relative errors REL1, REL2, and REL3 between the true depth maps and the depth maps estimated by the Saxena algorithm, the Zhuo algorithm, and the proposed algorithm are calculated; the results are shown in table 1. Quantitative comparisons in terms of root mean square value, root-mean-square gradient error (MGE), and the time spent processing one image are shown in table 2. Compared with the Saxena and Zhuo algorithms, the proposed algorithm has the lowest error rate, and for the same image it also consumes the least processing time: it needs neither a large number of model parameters nor operations such as MRF modeling and image over-segmentation, so its complexity is correspondingly reduced and its processing time is clearly superior to those of the Zhuo and Saxena algorithms.
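For reference, a sketch of the REL metric of formula (16); the per-pixel averaged form is an assumption, since the original text names the metric without reproducing the formula:

```python
import numpy as np

def relative_error(D_est, D_true, eps=1e-8):
    # Mean absolute deviation relative to the ground-truth depth.
    return np.mean(np.abs(D_est - D_true) / (D_true + eps))
```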
TABLE 1 Comparison of the relative errors of the proposed, Saxena, and Zhuo algorithms
[table available only as an image in the original publication]
TABLE 2 Quantitative comparison of the proposed, Saxena, and Zhuo algorithms
[table available only as an image in the original publication]
Example two
This embodiment aims to provide a computer system.
A computer system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program effecting:
receiving a single color test image, searching a plurality of color images similar to the test image in a color-depth image database, and acquiring corresponding depth images to obtain a training image pair;
calculating the mapping relation between the test image and the color image pixels in each training image pair;
acquiring a plurality of depth gradients according to the depth image in each training image pair and the mapping relation;
calculating a corresponding depth image of the test image based on the plurality of depth gradients.
EXAMPLE III
An object of the present embodiment is to provide a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, performs the steps of:
receiving a single color test image, searching a plurality of color images similar to the test image in a color-depth image database, and acquiring corresponding depth images to obtain a training image pair;
calculating the mapping relation between the test image and the color image pixels in each training image pair;
acquiring a plurality of depth gradients according to the depth image in each training image pair and the mapping relation;
calculating a corresponding depth image of the test image based on the plurality of depth gradients.
The steps involved in the second and third embodiments correspond to those of the first method embodiment; for details, refer to the relevant description of the first embodiment. The term "computer-readable storage medium" should be understood to include a single medium or multiple media containing one or more sets of instructions; it should also be understood to include any medium capable of storing, encoding, or carrying a set of instructions for execution by a processor such that the processor performs any of the methods of the present invention.
The above one or more embodiments have the following beneficial effects:
the method comprises the steps of obtaining a color image and a depth image which are similar to the whole scene of an image to be restored through a retrieval method, obtaining a depth gradient field corresponding to the image to be restored based on a mapping relation of the image to be restored and the color image as a bridge, and carrying out a series of post-processing on the depth gradient field to obtain a final gradient image. The method does not need to be based on a large number of model parameters and establish a complex mathematical model, can obtain a depth map with sufficient accuracy as long as enough data matched with the test sample exist in the color-depth database, and is simple to implement and wide in application range.
Those skilled in the art will appreciate that the above modules or steps of the present invention can be implemented using a general-purpose computing device, or alternatively as program code executable by a computing device, such that the code is stored in a storage device and executed by the computing device; they may also be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, this does not limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations made on the basis of the technical solution of the present invention without inventive effort still fall within the protection scope of the present invention.

Claims (7)

1. A single image depth recovery method is characterized by comprising the following steps:
receiving a single color test image, searching a plurality of color images similar to the test image in a color-depth image database, and acquiring corresponding depth images to obtain a training image pair;
calculating the mapping relation between the test image and the color image pixels in each training image pair;
acquiring a plurality of depth gradient fields according to the depth image in each training image pair and the mapping relation;
computing a respective depth image of the test image based on the plurality of depth gradient fields;
the retrieving of the plurality of color images similar to the test image comprises: calculating the similarity between the color image and the test image in a color-depth image database based on the pyramid direction gradient histogram; selecting a color image with similarity exceeding a certain threshold;
said computing a respective depth image of the test image based on the plurality of depth gradient fields comprises:
calculating the confidence of all pixels of the test image relative to each color image, wherein the confidence calculation formula is:

w_k(p) = \exp\left( - \| f_I(p) - f_k(p + m_k(p)) \|^2 / \sigma^2 \right)

wherein w_k(p) represents the confidence of the test image at pixel p relative to the color image in the k-th training image pair, m_k(p) is the warping function, f_I(p) represents the feature vector of the test image I at pixel p, and f_k(p) represents the feature vector of the color image in the k-th training image pair at pixel p, k = 1, 2, ..., K;
for each pixel, the confidences are arranged in ascending order and numbered 1, 2, ..., K, and a minimum k* is calculated such that the sum of the confidences numbered 1 through k* is greater than or equal to half of the total confidence sum;
for the depth gradient fields, the depth gradients at each pixel are arranged in ascending order, and the depth gradient ranked k* at each pixel is extracted to form a final gradient field;
and normalizing each pixel value in the depth field to the interval [0, 255] to obtain a final depth map.
2. The method for single image depth restoration according to claim 1, further comprising, before retrieving a plurality of images similar to the test image: and performing brightness remapping on the color image in the color-depth image database and the test image.
3. The method for recovering the depth of a single image according to claim 1, wherein the mapping relationship between the test image and the color image pixels in each training image pair is represented by a warping function:

m_k(p) = \arg\min_{m} \| f_I(p) - f_k(p + m) \|^2

wherein m_k(p) is the warping function, f_I(p) represents the feature vector of the test image I at pixel p, and f_k(p) represents the feature vector of the color image in the k-th training image pair at pixel p, k = 1, 2, ..., K.
4. The method of single image depth restoration according to claim 3, wherein said obtaining a plurality of depth gradient fields comprises:

g^{(k)}(p) = \left( \partial_x D_k(p + m_k(p)), \; \partial_y D_k(p + m_k(p)) \right)^T

wherein g^{(k)}(p) denotes the k-th depth gradient field, \partial_q is the gradient operator in the q-direction (q \in \{x, y\}), and D_k(p) represents the depth value of the depth image at pixel p.
5. A three-dimensional reconstruction method based on a single image is characterized by comprising the following steps:
receiving a single color test image, and obtaining a depth image based on the depth recovery method of any one of claims 1-4;
and performing three-dimensional reconstruction according to the color test image and the depth image.
6. A computer system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of single image depth recovery as claimed in any one of claims 1 to 4.
7. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method for depth restoration of a single image according to any one of claims 1 to 4.
CN201811268787.4A 2018-10-29 2018-10-29 Method and system for recovering depth of single image Expired - Fee Related CN109377524B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811268787.4A CN109377524B (en) 2018-10-29 2018-10-29 Method and system for recovering depth of single image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811268787.4A CN109377524B (en) 2018-10-29 2018-10-29 Method and system for recovering depth of single image

Publications (2)

Publication Number Publication Date
CN109377524A CN109377524A (en) 2019-02-22
CN109377524B (en) 2021-02-23

Family

ID=65390441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811268787.4A Expired - Fee Related CN109377524B (en) 2018-10-29 2018-10-29 Method and system for recovering depth of single image

Country Status (1)

Country Link
CN (1) CN109377524B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862296B (en) * 2019-04-24 2023-09-29 京东方科技集团股份有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction system, model training method and storage medium
CN114241141B (en) * 2022-02-28 2022-05-24 深圳星坊科技有限公司 Smooth object three-dimensional reconstruction method and device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750702A (en) * 2012-06-21 2012-10-24 东华大学 Monocular infrared image depth estimation method based on optimized BP (Back Propagation) neural network model
CN103473743A (en) * 2013-09-12 2013-12-25 清华大学深圳研究生院 Method for obtaining image depth information

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6751558B2 (en) * 2001-03-13 2004-06-15 Conoco Inc. Method and process for prediction of subsurface fluid and rock pressures in the earth
US9581014B2 (en) * 2014-01-27 2017-02-28 Schlumberger Technology Corporation Prediction of asphaltene onset pressure gradients downhole
CN107103589B (en) * 2017-03-21 2019-09-06 深圳市未来媒体技术研究院 A kind of highlight area restorative procedure based on light field image
CN107862674B (en) * 2017-11-08 2020-07-03 杭州测度科技有限公司 Depth image fusion method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750702A (en) * 2012-06-21 2012-10-24 东华大学 Monocular infrared image depth estimation method based on optimized BP (Back Propagation) neural network model
CN103473743A (en) * 2013-09-12 2013-12-25 清华大学深圳研究生院 Method for obtaining image depth information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Automatic Depth Prediction from 2D Videos Based on Non-parametric Learning and Bi-directional Depth Propagation";Huihui Xu et al.;《Springer》;20180727;第1-14页 *
"基于非参数化采样的单幅图像深度估计";朱尧 等;《计算机应用研究》;20170630;第34卷(第6期);第1876-1880页 *

Also Published As

Publication number Publication date
CN109377524A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN106650630B (en) A kind of method for tracking target and electronic equipment
CN108229468B (en) Vehicle appearance feature recognition and vehicle retrieval method and device, storage medium and electronic equipment
US9471853B2 (en) Method and apparatus for image processing
CN110108258B (en) Monocular vision odometer positioning method
WO2018024030A1 (en) Saliency-based method for extracting road target from night vision infrared image
CN110070580B (en) Local key frame matching-based SLAM quick relocation method and image processing device
CN109711268B (en) Face image screening method and device
CN110619638A (en) Multi-mode fusion significance detection method based on convolution block attention module
CN110567441B (en) Particle filter-based positioning method, positioning device, mapping and positioning method
CN105678318A (en) Traffic label matching method and apparatus
CN109377524B (en) Method and system for recovering depth of single image
WO2023160312A1 (en) Person re-identification method and apparatus based on self-supervised learning, and device and storage medium
CN111784658A (en) Quality analysis method and system for face image
CN110706253A (en) Target tracking method, system and device based on apparent feature and depth feature
CN111681271B (en) Multichannel multispectral camera registration method, system and medium
CN112001954B (en) Underwater PCA-SIFT image matching method based on polar curve constraint
CN113850748A (en) Point cloud quality evaluation system and method
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
CN115841602A (en) Construction method and device of three-dimensional attitude estimation data set based on multiple visual angles
CN115471748A (en) Monocular vision SLAM method oriented to dynamic environment
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN109934853B (en) Correlation filtering tracking method based on response image confidence region adaptive feature fusion
US10553022B2 (en) Method of processing full motion video data for photogrammetric reconstruction
CN108389219B (en) Weak and small target tracking loss re-detection method based on multi-peak judgment
CN111932470A (en) Image restoration method, device, equipment and medium based on visual selection fusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210223

Termination date: 20211029

CF01 Termination of patent right due to non-payment of annual fee