CN108564620B - Scene depth estimation method for light field array camera - Google Patents


Info

Publication number
CN108564620B
CN108564620B (application number CN201810256154.5A)
Authority
CN
China
Prior art keywords
depth
scene
depth estimation
light field
confidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810256154.5A
Other languages
Chinese (zh)
Other versions
CN108564620A (en)
Inventor
杨俊刚
王应谦
肖超
李骏
安玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201810256154.5A priority Critical patent/CN108564620B/en
Publication of CN108564620A publication Critical patent/CN108564620A/en
Application granted granted Critical
Publication of CN108564620B publication Critical patent/CN108564620B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images

Abstract

The invention discloses a scene depth estimation method for a light field array camera. Because objects at different depths in a three-dimensional scene correspond to different parallaxes, the sub-images acquired by the light field array camera are used to obtain an initial depth map of the current scene and a corresponding confidence distribution map through variance analysis in the angular direction. Subsequently, the invention designs a "depth propagation under confidence guidance" algorithm that denoises and filters the initial depth map while preserving its edges. With the method of the invention, the depth of the current scene can be estimated effectively, and good results are obtained even in weak-texture regions where depth estimation is difficult.

Description

Scene depth estimation method for light field array camera
Technical Field
The invention relates to the fields of image processing, computer vision and light field computational imaging, and in particular to a scene depth estimation method for a light field array camera.
Background
In recent years, light field cameras, based on light field and computational imaging theory, have become a focus of research. By acquiring the light field of the real world, the three-dimensional information of the current scene can be obtained in a single exposure, and by processing the acquired data it is possible to realize functions that traditional cameras cannot, such as super-resolution computational imaging and three-dimensional scene reconstruction. Most of these functions require a fairly accurate estimate of the depth of the current scene.
Depth estimation, an important branch of computer vision research, has been studied extensively over the last decade. However, most of this research targets binocular cameras, and if only two sub-cameras of an array camera are used for depth estimation, the information captured about the current scene is not fully exploited. In recent years, researchers have also proposed depth estimation methods for microlens-type light field cameras and achieved good results. However, a microlens-type light field camera has a narrow equivalent baseline, so its light field samples are dense in the angular direction and its angular resolution is relatively high; the depth estimation algorithms designed for microlens arrays therefore rely on a high angular resolution. An array camera, in contrast, often has a wide baseline and sparse sampling in the angular direction, and this low angular resolution tends to produce depth estimates with strong noise and depth mismatches. If a microlens-based depth estimation algorithm is applied directly to an array camera, its performance degrades. It is therefore necessary to fully exploit the scene information captured by an array camera in order to suppress noise and depth mismatch and achieve a better estimate of the current scene depth under sparse angular sampling.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the deficiencies of the prior art and to provide a scene depth estimation method for a light field array camera that makes full use of the scene information captured by the array camera, suppresses noise and depth mismatches, and achieves a better estimate of the current scene depth under sparse angular sampling.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows: a scene depth estimation method for a light field array camera, comprising the steps of:
1) obtaining an initial depth estimate of the scene by comparing variances in the angular direction during refocusing;
2) calculating the confidence distribution of the scene depth by analyzing the second-order variance in the angular direction during refocusing;
3) filtering out noise and depth mismatches in the initial depth estimation map by using the confidence distribution;
4) reinforcing the edges of the depth estimation map processed in step 3) to obtain an accurate depth estimation map of the current scene.
In step 1), the estimated value of the scene depth is

D(x) = s_{\hat{i}(x)}, \qquad \hat{i}(x) = \arg\min_{i\in\{1,\dots,N\}} \frac{1}{|W_D|}\sum_{x'\in W_D} V(x', s_i)

wherein x is the abscissa of a pixel in the scene depth map, W_D is a neighborhood around x, and |W_D| represents the total number of pixels in the window;

V(x, s_i) = \frac{1}{U}\sum_{u}\left[L(u, x - s_i u) - \bar{L}(x, s_i)\right]^2, \qquad \bar{L}(x, s_i) = \frac{1}{U}\sum_{u} L(u, x - s_i u);

u = {u_1, u_2, …, u_U} are the positions of the cameras in the array; N is the depth resolution; U is the total number of cameras in the u direction; s = {s_1, s_2, …, s_N} are the focusing factors; L(u, x − s_i u) is the grey value, at abscissa x − s_i u, of the image taken by the camera at coordinate u.
In step 2), the confidence distribution R(x) is calculated by the following formula:

R(x) = \frac{1}{1 + \exp\!\big(-(\eta(x) - b)/a\big)}

where a is the attenuation coefficient, b is the translation coefficient, η(x) = L_W(x)/max{L_W(x)}, L_W(x) is a logarithmic compression of the second-order variance

W(x) = \frac{1}{N}\sum_{i=1}^{N}\left[V'(x, s_i) - \bar{V}'(x)\right]^2

in which a small quantity ε prevents the denominator from becoming zero, and \bar{V}'(x) is the mean of V'(x, s_i).
The value of a is 0.3; the value of b is 0.5.
The specific implementation of step 3) comprises the following steps:
1) a block P_X of size ρ × ρ centered at (i, j) is extracted from the initial depth estimation map X; the corresponding block P_R is extracted from the confidence distribution; (i, j) is initialized to (1, 1);
2) a mask M is generated by normalization:

M(x, y) = \frac{P_R(x, y)}{\sum_{x', y'} P_R(x', y')}

where P_R(x, y) is the confidence value in row x and column y of the block P_R;
3) the inner product of P_X and M is filled into the filtered depth map X_f, i.e.

X_f(i, j) = \sum_{x, y} P_X(x, y)\, M(x, y);

4) it is judged whether all pixels in X have been traversed; if so, the filtered depth map X_f is output; otherwise, the process returns to step 1).
The specific implementation of step 4) comprises the following steps:
1) a block P_X of size ρ × ρ centered at (i, j) is extracted from the filtered depth map X_f; the corresponding block P_R is extracted from the expanded confidence distribution R_e;
2) a mask M_b is generated by confidence inversion and energy normalization:

M_b(x, y) = \frac{1 - P_R(x, y)}{\sum_{x', y'}\big(1 - P_R(x', y')\big)};

3) the inner product of P_X and M_b is filled into the accurate depth estimation map X_b, i.e.

X_b(i, j) = \sum_{x, y} P_X(x, y)\, M_b(x, y);

4) if all pixels in X_f have been traversed, the accurate depth estimation map X_b is output; otherwise, the process returns to step 1).
Compared with the prior art, the invention has the following beneficial effects: the depth distribution of the current scene can be estimated accurately with a light field array camera, so that the three-dimensional structure of the current scene can be analyzed; this in turn improves the accuracy of functions based on the light field array camera such as three-dimensional scene reconstruction and super-resolution computational imaging. With the continuing spread of light field cameras, the method is of considerable significance and practical value.
Drawings
FIG. 1 is a block diagram of a scene depth estimation algorithm for a light field array camera;
FIG. 2 is a schematic diagram of the biplane model of the light field. (a) The biplane three-dimensional model of the light field: a light ray passes through the image plane (x, y) and the camera plane (u, v), and its position and direction can be represented by its coordinates on these two planes. Here (u, v) denotes the position of a camera in the array and (x, y) denotes a pixel in the two-dimensional image obtained by that camera, so the data acquired by the whole array camera can be represented by the four coordinates u, v, x, y. We use L(u, v, x, y) to denote the grey value (in the range 0 to 255) of the pixel at coordinate (x, y) in the image obtained by the camera at coordinate (u, v); it is determined by the scene being photographed, and L can be understood as a mapping from the four-dimensional light field coordinates (two-dimensional camera coordinates, two-dimensional image coordinates) to the grey value of the image obtained by the camera array. L(u, v, x, y) therefore represents the current light field captured by the camera array. (b) Projection of the light field model in the xu direction: since the four-dimensional light field model is symmetric in the u/v and x/y pairs of directions, to simplify the analysis and without loss of generality we fix y = y* and v = v* and project the light field onto the xu space. By analyzing the two-dimensional projection in FIG. 2(b), we find that the depth γ of a scene point and the displacement deviation d = l_1 − l_2 between its corresponding pixels in the sub-images satisfy the relation d = fB/γ, so the depth estimation problem reduces to estimating the displacement differences between corresponding pixels in the array sub-images.
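To make the d = fB/γ relation above concrete, the following is a minimal sketch (Python; the focal length and baseline values are illustrative, not taken from the patent) of converting a displacement deviation into a depth:

```python
import numpy as np

def disparity_to_depth(d, f, B):
    """Depth from the relation d = f * B / gamma of FIG. 2(b), i.e.
    gamma = f * B / d (f: focal length in pixels, B: baseline)."""
    d = np.asarray(d, dtype=float)
    return f * B / np.maximum(d, 1e-9)   # guard against zero disparity

# Illustrative numbers only: an 800-px focal length, a 5-mm baseline and a
# 4-px disparity give a depth of 800 * 5 / 4 = 1000 mm.
print(disparity_to_depth(4.0, f=800.0, B=5.0))
```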
FIG. 3 shows results obtained with the algorithm of the present invention: (a) is the scene used in the experiment, (b) is the scene confidence distribution obtained by the method of the invention, and (c) is the scene depth map obtained by the disclosed method.
Detailed Description
Depth estimation is realized mainly by estimating the displacement differences of pixels at different positions in the angular direction; since displacement difference and depth are in one-to-one correspondence, the estimation of displacement differences is referred to as depth estimation in the present invention. Without loss of generality, the four-dimensional light field model L(u, v, x, y) is simplified to the two-dimensional model L(u, x) in the following steps, for convenience of presentation. The method analyzes the sub-images acquired by the array camera in the angular direction, performs the initial depth estimation by comparing variances, and estimates the confidence corresponding to the initial depth by analyzing the second-order variance. The invention then filters out noise and depth mismatches in the initial depth estimate with a "depth propagation under confidence guidance" algorithm. In this algorithm, the initial depth first flows forward under the guidance of the confidence, so that noise and depth mismatches in low-confidence regions are replaced by values from surrounding high-confidence regions; the depth then flows backward under the guidance of the expanded confidence, which enhances the edges in the depth map while filtering further. Through this confidence-guided depth propagation, an accurate depth distribution map of the current scene can be obtained. The flow of the algorithm is shown in FIG. 1; an end-to-end sketch is given immediately below, followed by the specific steps.
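The following minimal sketch (Python) composes the per-step functions defined in the sketches that accompany the steps below; all function names are illustrative rather than the patent's own, and the sketch is only an outline under the simplified L(u, x) model:

```python
def estimate_scene_depth(L, u_coords, s_list, win=7, rho=7, a=0.3, b=0.5):
    """End-to-end sketch of the flow in FIG. 1; the five helper functions are
    defined in the per-step sketches below and their names are illustrative."""
    D, cost = initial_depth(L, u_coords, s_list, win)   # step 1: initial depth + angular variances
    R = confidence_map(cost, a, b)                      # step 2: confidence distribution
    Xf = propagate_forward(D, R, rho)                   # step 3: forward propagation (Algorithm 1)
    Re = dilate_confidence(R, rho)                      # step 4a: confidence-domain dilation
    Xb = propagate_backward(Xf, Re, rho)                # step 4b: depth reflow (Algorithm 2)
    return Xb, R
```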
1. The scene depth is initially estimated during refocusing by comparing the variance in the angular direction. The refocusing process can be expressed as:

\bar{L}(x, s_i) = \frac{1}{U}\sum_{u} L(u, x - s_i u)

where u = {u_1, u_2, …, u_U} are the positions of the cameras in the array (typically the camera in the middle position is taken as the reference camera); s = {s_1, s_2, …, s_N} are the focusing factors, N is the depth resolution (i.e. the total number of depth layers into which the depth direction is divided), and U is the total number of cameras in the u direction. The variance of the array sub-images in the angular direction can be expressed as:

V(x, s_i) = \frac{1}{U}\sum_{u}\left[L(u, x - s_i u) - \bar{L}(x, s_i)\right]^2
Because a focused region usually corresponds to a smaller variance in the angular direction while an out-of-focus region corresponds to a larger one, the variance under each focusing factor can be calculated and compared, and the focusing factor with the smallest variance can be selected as the one corresponding to the depth of the pixel. To increase the robustness of the algorithm, we use the following formulas to calculate the initial depth estimate:

\hat{i}(x) = \arg\min_{i\in\{1,\dots,N\}} \frac{1}{|W_D|}\sum_{x'\in W_D} V(x', s_i)

D(x) = s_{\hat{i}(x)}

Here, W_D is a neighborhood around x, which may generally be set to a 7 × 7 window, and |W_D| denotes the total number of pixels in the window; D(x) is the initial displacement estimate.
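A sketch of this first step, assuming a 1-D camera line and applying the simplified L(u, x) model row-wise to full H × W sub-images (scipy is used for the sub-pixel shear and the window averaging; the function name and parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, uniform_filter

def initial_depth(L, u_coords, s_list, win=7):
    """Step 1 sketch.  L has shape (U, H, W): U sub-images of an H x W scene
    taken by a 1-D camera line at positions u_coords.  For each candidate
    focusing factor s_i the sub-images are sheared by s_i * u, the variance
    across the angular direction u is computed, averaged over a win x win
    window W_D, and the focusing factor with the smallest averaged variance
    is kept."""
    U, H, W = L.shape
    s_arr = np.asarray(s_list, dtype=float)
    cost = np.empty((len(s_arr), H, W))
    for i, s in enumerate(s_arr):
        # sample L(u, x - s_i * u) for every camera position u
        refocused = np.stack(
            [nd_shift(L[k], (0.0, s * u_coords[k]), order=1, mode='nearest')
             for k in range(U)])
        V = refocused.var(axis=0)               # variance over the angular direction
        cost[i] = uniform_filter(V, size=win)   # average over the window W_D
    idx = cost.argmin(axis=0)                   # \hat{i}(x)
    return s_arr[idx], cost                     # D(x) = s_{\hat{i}(x)}, plus the variance stack
```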
2. The confidence of the scene depth is calculated by analyzing the second-order variance in the angular direction during refocusing. The second-order variance of the light field array sub-images in the angular direction is calculated according to the following formula:
W(x) = \frac{1}{N}\sum_{i=1}^{N}\left[V'(x, s_i) - \bar{V}'(x)\right]^2

In the formula, \bar{V}'(x) is the mean of the variance V'(x, s_i) over the focusing factors. W(x) measures the fluctuation of V'(x, ·) and can therefore be used to assess the reliability of the depth value. However, the dynamic range of W(x) is too large for direct use, so it is processed further.
First, a logarithmic compression L_W(x) of W(x) is applied, in which a small quantity ε prevents the denominator from becoming zero; the result is then normalized according to the following equation:
η(x) = L_W(x)/max{L_W(x)}
By this normalization, the value range of η is limited to 0 to 1. Finally, in order to divide η into high-confidence and low-confidence regions, a sigmoid function is used for the mapping, as follows:
R(x) = \frac{1}{1 + \exp\!\big(-(\eta(x) - b)/a\big)}
In the formula, a is an attenuation coefficient that controls the sensitivity of the curve, with value 0.3; b is a translation coefficient that controls the threshold, with value 0.5. Through the above calculation, the confidence distribution R of the current scene depth estimation is obtained.
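A sketch of the confidence computation; because the exact logarithmic compression is only available as an image in the source, log(1 + W/ε) is used here as a stand-in assumption, and the sigmoid follows the parameterization read above:

```python
import numpy as np

def confidence_map(cost, a=0.3, b=0.5, eps=1e-6):
    """Step 2 sketch.  cost[i] holds the windowed angular variance V'(., s_i)
    from step 1.  W measures how strongly V' fluctuates across the focusing
    factors; it is log-compressed, normalized, and mapped through a sigmoid
    with sensitivity a and threshold b.  The compression log(1 + W/eps) is a
    stand-in assumption for the patent's image-only formula."""
    W = cost.var(axis=0)                         # second-order variance over the s_i
    L_W = np.log1p(W / eps)                      # log compression; eps keeps the denominator non-zero
    eta = L_W / (L_W.max() + 1e-12)              # normalize to the range 0 to 1
    return 1.0 / (1.0 + np.exp(-(eta - b) / a))  # sigmoid split into high / low confidence
```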
3. Noise and depth mismatches in the initial depth estimation map are filtered out with a confidence-guided depth propagation algorithm. With the calculated confidence distribution R, global optimization can be achieved by minimizing the following objective function:
\hat{X} = \arg\min_{X}\; E_R(X) + \lambda J_R(X)

In the formula, X_0 is the vectorized initial displacement estimate, X is the variable, R represents the confidence distribution, and \mathbf{1} is the all-ones vector; X, X_0, R and \mathbf{1} have the same dimensions. The optimized depth estimation map \hat{X} is found by minimizing the objective function, which consists of the fidelity term E_R(X) and the regularization term J_R(X); λ is a regularization coefficient used to control the strength of the filtering. The matrix H is an operator that, under confidence guidance, controls the propagation of depth values from high-confidence regions to low-confidence regions; HX can be implemented by Algorithm 1.
(Algorithm 1: depth propagation under confidence guidance, computing HX; its block-wise procedure corresponds to steps 1) to 4) given for step 3) above.)
Note that the boundary is handled by padding with the edge values.
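A sketch of this confidence-guided forward propagation, following the block-wise procedure given for step 3) in the summary above (boundaries padded with edge values; names are illustrative):

```python
import numpy as np

def propagate_forward(X, R, rho=7):
    """Algorithm 1 sketch (confidence-guided forward propagation).  Each pixel
    of the initial depth map X is replaced by a rho x rho neighborhood average
    weighted by the normalized local confidence R, so low-confidence depths
    are overwritten by nearby high-confidence ones."""
    r = rho // 2
    Xp = np.pad(X, r, mode='edge')
    Rp = np.pad(R, r, mode='edge')
    Xf = np.empty_like(X, dtype=float)
    H, W = X.shape
    for i in range(H):
        for j in range(W):
            PX = Xp[i:i + rho, j:j + rho]
            PR = Rp[i:i + rho, j:j + rho]
            tot = PR.sum()
            if tot < 1e-9:                           # no confident neighbor: plain average
                Xf[i, j] = PX.mean()
            else:
                Xf[i, j] = (PX * (PR / tot)).sum()   # confidence-weighted average
    return Xf
```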
4. The edges of the depth map are enhanced by using a "depth reflow under confidence guidance" algorithm. The specific implementation is as follows:
Although minimizing the objective function effectively suppresses noise and depth mismatches in weak-texture regions, it also causes the displacement values around the edges of high-confidence regions to diffuse. In order to keep the edges sharp, an edge-reinforcement measure is introduced here; it consists of confidence-domain dilation and depth reflow.

The confidence-domain dilation can be implemented with a maximum filter. We define R_e as the dilated confidence distribution map, any pixel of which is obtained by the following calculation:

R_e(i, j) = \max_{(x, y)\in P_{i,j}} R(x, y)

In the formula, P_{i,j} is a block centered at R(i, j), and the maximum filter outputs the maximum value within the neighborhood P_{i,j}. Through this operation, the regions of higher confidence in the original confidence distribution map expand while the regions of lower confidence shrink.
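This dilation can be sketched directly with a standard maximum filter; using the block size ρ as the neighborhood size is an assumption:

```python
from scipy.ndimage import maximum_filter

def dilate_confidence(R, rho=7):
    """Confidence-domain dilation: every pixel of R_e takes the maximum
    confidence inside its rho x rho neighborhood P_{i,j}, so high-confidence
    regions expand and low-confidence regions shrink."""
    return maximum_filter(R, size=rho, mode='nearest')
```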
Because the edge blurring is concentrated mainly on the low-confidence side of an edge while the other side is protected by the fidelity term, little is lost in the optimization process; edge enhancement can therefore be performed with a depth-reflow strategy under confidence guidance, which is realized by minimizing the following objective function:
\hat{X}_b = \arg\min_{X}\; E(X) + \lambda_b J_b(X)
In the formula, λ_b is a regularization weight, and the matrix H_b is a space-variant filter operator in which the low-confidence regions take more weight. The filtering process is detailed in Algorithm 2 below.
(Algorithm 2: depth reflow under confidence guidance; its block-wise procedure corresponds to steps 1) to 4) given for step 4) above.)
Note that the boundary is handled by padding with the edge values.
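A sketch of the depth reflow, following the block-wise procedure given for step 4) above, with the mask built from the inverted, energy-normalized expanded confidence (names are illustrative):

```python
import numpy as np

def propagate_backward(Xf, Re, rho=7):
    """Algorithm 2 sketch (depth reflow under confidence guidance).  Same
    block scheme as the forward pass, but the mask is built from the inverted
    expanded confidence (1 - Re) and energy-normalized, so the averaging
    weight concentrates on the low-confidence side of edges."""
    r = rho // 2
    Xp = np.pad(Xf, r, mode='edge')
    Rp = np.pad(Re, r, mode='edge')
    Xb = np.empty_like(Xf, dtype=float)
    H, W = Xf.shape
    for i in range(H):
        for j in range(W):
            PX = Xp[i:i + rho, j:j + rho]
            w = 1.0 - Rp[i:i + rho, j:j + rho]       # confidence inversion
            tot = w.sum()
            if tot < 1e-9:                           # fully confident block: keep the forward result
                Xb[i, j] = Xf[i, j]
            else:
                Xb[i, j] = (PX * (w / tot)).sum()    # energy-normalized mask
    return Xb
```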

Claims (5)

1. A scene depth estimation method for a light field array camera, comprising the steps of:
1) in the refocusing process, calculating the variance under each focusing factor, comparing the variances, and selecting the focusing factor with the smallest variance as the focusing factor corresponding to the depth of the pixel; the estimate of the depth layer index for the pixel with abscissa x is

\hat{i}(x) = \arg\min_{i\in\{1,\dots,N\}} \frac{1}{|W_D|}\sum_{x'\in W_D} V(x', s_i), \qquad D(x) = s_{\hat{i}(x)}

wherein x is the abscissa of a pixel in the scene depth map, W_D is a neighborhood around x, and |W_D| represents the total number of pixels in the window;

V(x, s_i) = \frac{1}{U}\sum_{u}\left[L(u, x - s_i u) - \bar{L}(x, s_i)\right]^2, \qquad \bar{L}(x, s_i) = \frac{1}{U}\sum_{u} L(u, x - s_i u),

i = 1, 2, …, N; u = {u_1, u_2, …, u_U} are the positions of the cameras in the array; N is the depth resolution; U is the total number of cameras in the u direction; s = {s_1, s_2, …, s_N} are the focusing factors; L(u, x − s_i u) is the grey value, at abscissa x − s_i u, of the image taken by the camera at coordinate u;
2) calculating the confidence distribution of the scene depth by analyzing the second-order variance in the angular direction during refocusing;
3) filtering out noise and depth mismatches in the initial depth estimation map by using the confidence distribution;
4) reinforcing the edges of the depth estimation map processed in step 3) to obtain an accurate depth estimation map of the current scene.
2. The scene depth estimation method for a light field array camera according to claim 1, wherein in step 2), the confidence distribution R(x) is calculated by the formula:

R(x) = \frac{1}{1 + \exp\!\big(-(\eta(x) - b)/a\big)}

where a is the attenuation coefficient, b is the translation coefficient, η(x) = L_W(x)/max{L_W(x)}, L_W(x) is a logarithmic compression of the second-order variance

W(x) = \frac{1}{N}\sum_{i=1}^{N}\left[V'(x, s_i) - \bar{V}'(x)\right]^2

in which a small quantity ε prevents the denominator from becoming zero, and \bar{V}'(x) is the mean of V'(x, s_i).
3. The scene depth estimation method for a light field array camera according to claim 2, wherein the value of a is 0.3 and the value of b is 0.5.
4. The scene depth estimation method for a light field array camera according to claim 1, wherein the specific implementation of step 3) comprises the following steps:
1) a block P_X of size ρ × ρ centered at (i, j) is extracted from the initial depth estimation map X; the corresponding block P_R is extracted from the confidence distribution; (i, j) is initialized to (1, 1);
2) a mask M is generated by normalization:

M(x, y) = \frac{P_R(x, y)}{\sum_{x', y'} P_R(x', y')}

where P_R(x, y) is the confidence value in row x and column y of the block P_R;
3) the inner product of P_X and M is filled into the filtered depth map X_f, i.e.

X_f(i, j) = \sum_{x, y} P_X(x, y)\, M(x, y);

4) it is judged whether all pixels in X have been traversed; if so, the filtered depth map X_f is output; otherwise, the process returns to step 1).
5. The scene depth estimation method for a light field array camera according to claim 4, wherein the specific implementation of step 4) comprises the following steps:
1) a block P_X of size ρ × ρ centered at (i, j) is extracted from the filtered depth map X_f; the corresponding block P_R is extracted from the expanded confidence distribution R_e;
2) a mask M_b is generated by confidence inversion and energy normalization:

M_b(x, y) = \frac{1 - P_R(x, y)}{\sum_{x', y'}\big(1 - P_R(x', y')\big)};

3) the inner product of P_X and M_b is filled into the accurate depth estimation map X_b, i.e.

X_b(i, j) = \sum_{x, y} P_X(x, y)\, M_b(x, y);

4) if all pixels in X_f have been traversed, the accurate depth estimation map X_b is output; otherwise, the process returns to step 1).
CN201810256154.5A 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera Active CN108564620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810256154.5A CN108564620B (en) 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810256154.5A CN108564620B (en) 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera

Publications (2)

Publication Number Publication Date
CN108564620A CN108564620A (en) 2018-09-21
CN108564620B true CN108564620B (en) 2020-09-04

Family

ID=63533407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810256154.5A Active CN108564620B (en) 2018-03-27 2018-03-27 Scene depth estimation method for light field array camera

Country Status (1)

Country Link
CN (1) CN108564620B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360235B (en) * 2018-09-29 2022-07-19 中国航空工业集团公司上海航空测控技术研究所 Hybrid depth estimation method based on light field data
CN110276371B (en) * 2019-05-05 2021-05-07 杭州电子科技大学 Container corner fitting identification method based on deep learning
CN110197506B (en) * 2019-05-30 2023-02-17 大连理工大学 Light field depth estimation method based on variable-height rotating parallelogram
CN110378946B (en) 2019-07-11 2021-10-01 Oppo广东移动通信有限公司 Depth map processing method and device and electronic equipment
CN110400342B (en) * 2019-07-11 2021-07-06 Oppo广东移动通信有限公司 Parameter adjusting method and device of depth sensor and electronic equipment
CN111028281B (en) * 2019-10-22 2022-10-18 清华大学 Depth information calculation method and device based on light field binocular system
CN111091601B (en) * 2019-12-17 2023-06-23 香港中文大学深圳研究院 PM2.5 index estimation method for real-time daytime outdoor mobile phone image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184808A (en) * 2015-10-13 2015-12-23 中国科学院计算技术研究所 Automatic segmentation method for foreground and background of optical field image

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9414048B2 (en) * 2011-12-09 2016-08-09 Microsoft Technology Licensing, Llc Automatic 2D-to-stereoscopic video conversion
CN103279961A (en) * 2013-05-22 2013-09-04 浙江大学 Video segmentation method based on depth recovery and motion estimation
CN104966289B (en) * 2015-06-12 2017-12-26 北京工业大学 A kind of depth estimation method based on 4D light fields
CN105023249B (en) * 2015-06-26 2017-11-17 清华大学深圳研究生院 Bloom image repair method and device based on light field
CN105139401A (en) * 2015-08-31 2015-12-09 山东中金融仕文化科技股份有限公司 Depth credibility assessment method for depth map
US10372968B2 (en) * 2016-01-22 2019-08-06 Qualcomm Incorporated Object-focused active three-dimensional reconstruction
CN106340041B (en) * 2016-09-18 2018-12-25 杭州电子科技大学 It is a kind of to block the light-field camera depth estimation method for filtering out filter based on cascade
CN107038719A (en) * 2017-03-22 2017-08-11 清华大学深圳研究生院 Depth estimation method and system based on light field image angle domain pixel


Also Published As

Publication number Publication date
CN108564620A (en) 2018-09-21

Similar Documents

Publication Publication Date Title
CN108564620B (en) Scene depth estimation method for light field array camera
JP5178875B2 (en) Image processing method for corresponding point search
US6819318B1 (en) Method and apparatus for modeling via a three-dimensional image mosaic system
CN108564041B (en) Face detection and restoration method based on RGBD camera
WO2018000752A1 (en) Monocular image depth estimation method based on multi-scale cnn and continuous crf
CN107025660B (en) Method and device for determining image parallax of binocular dynamic vision sensor
CN108337434B (en) Out-of-focus virtual refocusing method for light field array camera
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
CN112132958A (en) Underwater environment three-dimensional reconstruction method based on binocular vision
CN103473743B (en) A kind of method obtaining image depth information
RU2419880C2 (en) Method and apparatus for calculating and filtering disparity map based on stereo images
CN111145094A (en) Depth map enhancement method based on surface normal guidance and graph Laplace prior constraint
Lee et al. Improving focus measurement via variable window shape on surface radiance distribution for 3D shape reconstruction
Gaganov et al. Robust shape from focus via Markov random fields
CN112132771B (en) Multi-focus image fusion method based on light field imaging
CN107220945B (en) Restoration method of multiple degraded extremely blurred image
JP2022027464A (en) Method and device related to depth estimation of video
KR20140000833A (en) Stereo matching apparatus and its method
Mahmood Shape from focus by total variation
CN115147709B (en) Underwater target three-dimensional reconstruction method based on deep learning
CN111951339A (en) Image processing method for performing parallax calculation by using heterogeneous binocular cameras
He et al. A novel way to organize 3D LiDAR point cloud as 2D depth map height map and surface normal map
CN115631223A (en) Multi-view stereo reconstruction method based on self-adaptive learning and aggregation
CN110827343B (en) Improved light field depth estimation method based on energy enhanced defocus response
Zhang et al. Multi-view depth estimation with color-aware propagation and texture-aware triangulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant