CN110827343B - Improved light field depth estimation method based on energy enhanced defocus response - Google Patents

Improved light field depth estimation method based on energy enhanced defocus response

Info

Publication number
CN110827343B
CN110827343B
Authority
CN
China
Prior art keywords
light field
depth
defocus
depth map
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911073694.0A
Other languages
Chinese (zh)
Other versions
CN110827343A (en)
Inventor
武迎春
程星
张娟
李素月
宁爱平
王安红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Science and Technology
Original Assignee
Taiyuan University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Science and Technology filed Critical Taiyuan University of Science and Technology
Priority to CN201911073694.0A priority Critical patent/CN110827343B/en
Publication of CN110827343A publication Critical patent/CN110827343A/en
Application granted granted Critical
Publication of CN110827343B publication Critical patent/CN110827343B/en
Legal status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/50 — Depth or shape recovery
    • G06T 7/55 — Depth or shape recovery from multiple images
    • G06T 7/557 — Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of light field image processing and depth estimation, and discloses an improved light field depth estimation method based on an energy-enhanced defocus response. Aiming at the problems that the existing DCDC algorithm considers only a limited number of second-derivative directions when establishing its defocus response function, and that the second-derivative energies cancel each other when summed, the defocus response function is improved and a light field depth estimation algorithm based on an energy-enhanced defocus response is provided. The algorithm fully considers the influence of surrounding pixels on the defocus at the current position, and achieves energy enhancement by increasing the number and the directions of the second derivatives while assigning them differing weights. The effectiveness of the method is verified through experiments: for light field images with complex depth information, the depth maps obtained by the algorithm have a better visual effect.

Description

Improved light field depth estimation method based on energy enhanced defocus response
Technical Field
The invention belongs to the field of light field image processing and depth estimation, and particularly relates to an improved light field depth estimation method based on energy enhanced defocus response.
Background
With the continuous development of light field rendering theory and the evolution of the plenoptic function, light field imaging has become a hot topic in modern computational photography. Unlike the structural design of a traditional camera, the microlens light field camera is based on the light field biplane representation model and acquires 4-dimensional (4D) light field data by inserting a microlens array between the main lens and the imaging plane. The 4D light field records not only the position information (x, y) of spatial rays, as in conventional imaging, but also their direction information (s, t). Because multi-dimensional light information is recorded in a single exposure, the light field image has wide application value in later processing, such as digital refocusing, multi-viewpoint image extraction and all-in-focus image fusion. Depth estimation algorithms developed on this basis can complete depth map estimation from a single light field image, and have attracted broad attention from scholars in the field of optical three-dimensional sensing in recent years.
Currently, depth estimation algorithms based on light field images fall mainly into three categories: stereo matching, EPI (Epipolar-Plane Image) analysis, and defocus (Depth from Defocus, DFD). Stereo matching methods rely on the parallax between light field sub-aperture images and complete depth map estimation through the correspondence between parallax and depth; the depth acquisition method of Jeon et al., based on sub-pixel phase-shift computation between sub-aperture images, follows this principle. Limited by the structural design of the microlens light field camera, the decoded sub-aperture images have a limited number of viewpoints and limited resolution, so the achievable depth accuracy is limited.
EPI algorithms stack multiple sub-aperture images with parallax in a single direction to form a cube and slice the cube along a given direction to obtain an EPI section; depth estimation is then completed using the proportional relationship between the slope of the epipolar lines in the EPI and the depth of the target scene. For example, Li et al. used structure-tensor computation on the EPI together with a sparse linear system to obtain a smooth depth map, and Zhang et al. used a spinning parallelogram operator to obtain the EPI slope, achieving better depth reconstruction in discontinuous regions. These methods must accurately estimate the slope of the epipolar line through every pixel of the EPI, and the local incompleteness of image features is the main reason for their high complexity and poor real-time performance.
DFD methods complete depth estimation by comparing the defocus degree across multiple multi-focus images of the same target scene captured at different focus depths; the regularized depth estimation based on non-local means filtering proposed by Favaro follows this principle.
Aiming at the characteristics of light field images, Tao et al., after fully analysing the advantages and disadvantages of various depth acquisition algorithms, proposed a depth acquisition algorithm combining defocus evaluation and correspondence evaluation (Depth from Combining Defocus and Correspondence, DCDC). The algorithm exploits the strong local noise resistance of the defocus depth map and the accurate global boundaries of the correspondence depth map, and completes the optimization of the final depth map using Markov optimization theory. However, when performing defocus evaluation to acquire depth, the Laplacian operator considers only the second derivatives of the spatial information in the horizontal and vertical directions, and these second derivatives can cancel each other's energy when summed, which affects the depth acquisition accuracy of the algorithm in complex shooting scenes. On this basis, the invention improves the defocus evaluation function: an energy-enhanced defocus evaluation method improves the quality of the depth map obtained by the defocus approach and thereby the accuracy of the final depth map.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, to solve the problems that the existing DCDC algorithm considers only a limited number of second-derivative directions and that the second-derivative energies cancel each other when it builds its defocus response function, and to provide an improved light field depth estimation algorithm based on an energy-enhanced defocus response.
In order to solve the above technical problems, the technical scheme of the invention is as follows: an improved light field depth estimation algorithm based on an energy-enhanced defocus response proceeds through the following steps:
1) After the microlens centres of the light field camera are calibrated, a 4D light field $L_0(x_0, y_0, s_0, t_0)$ is decoded from the light field raw image. According to the light field biplane representation model and digital refocusing theory, when the position of the imaging plane (image distance) of the light field camera is changed, the light field $L_{\alpha_n}(x, y, s, t)$ recorded at the new focusing depth (object distance) and the original light field $L_0(x_0, y_0, s_0, t_0)$ are related by

$$L_{\alpha_n}(x, y, s, t) = L_0\!\left(s + \frac{x - s}{\alpha_n},\; t + \frac{y - t}{\alpha_n},\; s,\; t\right) \tag{1}$$

wherein $\alpha_n$ is a scale factor describing the movement of the image plane, and $n = 0, 1, 2, \ldots, 255$ is the index corresponding to each $\alpha_n$ value. Integrating $L_{\alpha_n}$ along the $s$ and $t$ directions yields the refocused image for each $\alpha_n$:

$$I_{\alpha_n}(x, y) = \frac{1}{N_s N_t} \sum_{s}\sum_{t} L_{\alpha_n}(x, y, s, t) \tag{2}$$

wherein $N_s$, $N_t$ denote the angular resolution of the 4D light field; a runnable sketch of this shift-and-sum refocusing follows.
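A minimal Python sketch of the shift-and-sum refocusing of formulas (1)–(2), assuming the decoded 4D light field is stored as a NumPy array with (s, t, y, x) layout; the α range and the omission of the global 1/α magnification are simplifications for illustration, not the patent's exact implementation:

```python
import numpy as np
from scipy.ndimage import shift

def refocus(L0, alpha):
    """Shift-and-sum refocusing (formulas (1)-(2)).

    L0    : 4D array, assumed layout (s, t, y, x) -- angular axes first.
    alpha : scale factor alpha_n for the virtual image-plane position.
    Returns the refocused image I_alpha(x, y).
    """
    Ns, Nt, H, W = L0.shape
    sc, tc = (Ns - 1) / 2.0, (Nt - 1) / 2.0          # angular centre
    acc = np.zeros((H, W), dtype=np.float64)
    for s in range(Ns):
        for t in range(Nt):
            # Formula (1): view (s, t) is translated by (1 - 1/alpha) times
            # its angular offset before summation; the global 1/alpha
            # magnification only rescales the image and is omitted here.
            dx = (1.0 - 1.0 / alpha) * (s - sc)
            dy = (1.0 - 1.0 / alpha) * (t - tc)
            acc += shift(L0[s, t], (dy, dx), order=1, mode='nearest')
    return acc / (Ns * Nt)                            # formula (2)

# Focal stack over 256 alpha_n values (n = 0..255, matching the patent);
# the numeric alpha range itself is an assumption.
# stack = [refocus(L0, a) for a in np.linspace(0.2, 2.0, 256)]
```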
2) For the 4D light field $L_{\alpha_n}$ at each focusing depth, the defocus response $\bar{D}_{\alpha_n}(x, y)$ is computed. Energy enhancement is achieved by increasing the number and the directions of the second derivatives while giving them differing weights; the defocus response function is expressed as

$$\bar{D}_{\alpha_n}(x, y) = \frac{1}{|\omega'_D|} \sum_{(x', y') \in \omega'_D} \left| \Delta_E\, I_{\alpha_n}(x', y') \right| \tag{3}$$

wherein $\omega'_D$ is a window around the current pixel and $\Delta_E$ denotes the energy-enhancement-based second-derivative operator

$$\Delta_E I_{\alpha_n}(x, y) = \sum_{m}\sum_{n} w_{m,n} \left[ I_{\alpha_n}(x + m, y + n) + I_{\alpha_n}(x - m, y - n) - 2\, I_{\alpha_n}(x, y) \right] \tag{4}$$

wherein $m$ and $n$ are the horizontal and vertical second-derivative step sizes and $w_{m,n}$ is the weight factor: the closer a point is to the centre, the larger its weight factor and its contribution to the operator value; conversely, the farther from the centre, the smaller its contribution. $M$ and $N$ may only take odd values, and $M \times N$ is the size of the window over which the refocused image is convolved with the second-derivative operator; a sketch of one possible kernel appears below.
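As an illustration of formulas (3)–(4), the following sketch builds one possible energy-enhanced kernel and computes the defocus response. The inverse-distance weight decay, the window sizes and the function names are assumptions for illustration; the patent states only that weights decrease away from the centre and does not give numeric values:

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def energy_enhanced_kernel(M=5, N=5):
    """Build an M x N second-derivative kernel in the spirit of formula (4).

    Each off-centre tap (m, n) receives a positive weight decaying with
    distance from the centre (inverse-distance decay is an assumption);
    the centre tap balances the kernel to zero sum, so the operator
    responds to curvature rather than to absolute brightness.
    """
    assert M % 2 == 1 and N % 2 == 1, "M, N may only take odd values"
    K = np.zeros((M, N))
    cy, cx = M // 2, N // 2
    for i in range(M):
        for j in range(N):
            if (i, j) == (cy, cx):
                continue
            K[i, j] = 1.0 / np.hypot(i - cy, j - cx)  # closer => larger weight
    K[cy, cx] = -K.sum()                              # zero-sum centre tap
    return K

def defocus_response(I, M=5, N=5, win=9):
    """Energy-enhanced defocus response (formula (3)): absolute operator
    output averaged over a window omega'_D (win is an assumed size)."""
    lap = np.abs(convolve(I.astype(np.float64),
                          energy_enhanced_kernel(M, N), mode='nearest'))
    return uniform_filter(lap, size=win)
```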
3) For the 4D light field $L_{\alpha_n}$ at each focusing depth, the correspondence response $\bar{C}_{\alpha_n}(x, y)$ is obtained by computing the angular variance of the 4D light field:

$$\bar{C}_{\alpha_n}(x, y) = \frac{1}{N_s N_t} \sum_{s}\sum_{t} \left( L_{\alpha_n}(x, y, s, t) - I_{\alpha_n}(x, y) \right)^2 \tag{5}$$

A sketch of this computation follows.
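A minimal sketch of formula (5), assuming the sheared sub-aperture views at scale α_n are kept (layout (s, t, y, x)) instead of being summed immediately in formula (2):

```python
import numpy as np

def correspondence_response(L_alpha):
    """Correspondence response (formula (5)): per-pixel variance of the
    sheared 4D light field across its angular samples.
    Low variance = angularly consistent = in focus at this alpha_n."""
    Ns, Nt, H, W = L_alpha.shape
    return L_alpha.reshape(Ns * Nt, H, W).var(axis=0)
```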
4) The index $n$ of the $\alpha_n$ that maximises the defocus response $\bar{D}_{\alpha_n}(x, y)$ forms the defocus depth map $D_1(x, y)$, and the index $n$ of the $\alpha_n$ that minimises the correspondence response $\bar{C}_{\alpha_n}(x, y)$ forms the correspondence depth map $D_2(x, y)$:

$$D_1(x, y) = \arg\max_{n} \bar{D}_{\alpha_n}(x, y) \tag{6}$$
$$D_2(x, y) = \arg\min_{n} \bar{C}_{\alpha_n}(x, y) \tag{7}$$

as sketched below.
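Formulas (6)–(7) reduce the two response stacks to per-pixel index maps; a sketch, assuming the responses for all 256 α_n values are stacked along the first axis:

```python
import numpy as np

def depth_maps(defocus_stack, corr_stack):
    """Formulas (6)-(7): per-pixel index of the best alpha_n.

    defocus_stack, corr_stack : arrays of shape (256, H, W), one response
    map per alpha_n (stack size matches n = 0..255 in the patent).
    """
    D1 = np.argmax(defocus_stack, axis=0)   # strongest defocus response
    D2 = np.argmin(corr_stack, axis=0)      # smallest angular variance
    return D1, D2
```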
5) The fusion of the defocus depth map and the correspondence depth map is realised using Markov optimization theory. First, the confidences corresponding to the two depth maps are calculated:

$$C_1(x, y) = D_1(x, y) / D_1'(x, y) \tag{8}$$
$$C_2(x, y) = D_2(x, y) / D_2'(x, y) \tag{9}$$

wherein $D_1'(x, y)$ denotes the depth map obtained from formula (6) when the defocus response $\bar{D}_{\alpha_n}(x, y)$ takes its second-largest value, and $D_2'(x, y)$ denotes the depth map obtained from formula (7) when the correspondence response $\bar{C}_{\alpha_n}(x, y)$ takes its second-smallest value. The final fused depth map $D(x, y)$ is then obtained by solving a confidence-weighted optimization problem (formula (10)) that balances fidelity to $D_1$ and $D_2$ against a Laplacian smoothness term; a simplified sketch follows.
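The patent's exact objective of formula (10) is not reproduced in this text, so the following sketch substitutes a common simplification: a confidence-weighted least-squares data term plus a Laplacian smoothness penalty, solved as a sparse linear system. This is a stand-in under stated assumptions, not the patent's exact Markov optimization:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def fuse(D1, C1, D2, C2, lam=0.5):
    """Assumed simplification of formula (10):
    min_D  C1*(D - D1)^2 + C2*(D - D2)^2 + lam*|Laplace(D)|^2."""
    H, W = D1.shape
    n = H * W
    c1, c2 = C1.ravel(), C2.ravel()
    d1, d2 = D1.ravel().astype(float), D2.ravel().astype(float)
    # 4-neighbour Laplacian; row-boundary wrap is ignored for brevity.
    L = sp.eye(n) * -4.0
    for off in (1, -1, W, -W):
        L = L + sp.eye(n, n, k=off)
    A = sp.diags(c1 + c2) + lam * (L.T @ L)   # normal equations
    b = c1 * d1 + c2 * d2
    return spsolve(A.tocsc(), b).reshape(H, W)
```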
compared with the prior art, the invention improves the defocus response function in the existing defocus depth acquisition algorithm, and proposes an improved defocus response function based on energy enhancement, wherein the function fully considers the influence of surrounding pixel points on the current position power, and the energy enhancement is realized by increasing the number and the direction of second derivatives with weight difference. The effectiveness of the method provided by the invention is verified through experiments: aiming at some light field images with complex depth information, the depth map visual effect obtained by the algorithm provided by the invention is better; compared with the defocus evaluation combined correlation evaluation algorithm, the improved algorithm has the advantage that the mean reduction of root mean square error of the depth image is 3.95%.
Drawings
The invention is described in further detail below with reference to the accompanying drawings.
FIG. 1 shows the data processing flow of the improved light field depth estimation algorithm based on energy-enhanced defocus response of the present invention.
In FIG. 2, (a) is the traditional Laplace operator and (b) is the energy-enhanced second-derivative operator proposed by the present invention.
FIG. 3 shows a comparison of depth estimation effects on the "shoe" image, where (a) is the light field original image, (b) is the depth map obtained by the existing DCDC algorithm, and (c) is the depth map obtained by the algorithm of the present invention.
FIG. 4 shows a comparison of depth acquisition effects on the "stapler" image, where (a) is the redundancy-removed light field original image, (b) is the depth map obtained by the DCDC algorithm, and (c) is the depth map obtained by the algorithm of the present invention.
FIG. 5 shows sample scenes from the "benchmark" dataset, where (a) is the "boxes" scene and (b) is the "dino" scene.
FIG. 6 shows a comparison of depth acquisition effects on the "boxes" scene, where (a) is the true depth map, (b) is the depth map obtained by the DCDC algorithm, and (c) is the depth map obtained by the algorithm of the present invention.
FIG. 7 shows a comparison of depth acquisition effects on the "dino" scene, where (a) is the standard depth map, (b) is the depth map obtained by the DCDC algorithm, and (c) is the depth map obtained by the algorithm of the present invention.
Detailed Description
The following detailed description of the invention refers to the accompanying drawings, which illustrate specific embodiments of the invention.
An improved light field depth estimation algorithm based on an energy-enhanced defocus response proceeds through the following steps:
1) After the microlens centres of the light field camera are calibrated, the 4D light field $L_0(x_0, y_0, s_0, t_0)$ can be decoded from the light field raw image. According to the light field biplane representation model and digital refocusing theory, changing the position of the imaging plane (image distance) relates the light field $L_{\alpha_n}(x, y, s, t)$ at the new focusing depth (object distance) to the original light field by formula (1); integrating along the $s$ and $t$ directions yields the refocused image $I_{\alpha_n}(x, y)$ of formula (2), where $\alpha_n$ (index $n = 0, 1, 2, \ldots, 255$) is the scale factor describing the movement of the image plane and $N_s$, $N_t$ denote the angular resolution of the 4D light field.
2) For the 4D light field at each focusing depth, the defocus response $\bar{D}_{\alpha_n}(x, y)$ of formula (3) is computed over a window $\omega'_D$ around the current pixel (the windowing increases robustness), using the energy-enhancement-based second-derivative operator of formula (4). Energy enhancement is achieved by increasing the number and the directions of the second derivatives while giving them differing weights; the weight factor $w_{m,n}$ is largest near the centre point, $M$ and $N$ may only take odd values, and $M \times N$ is the size of the window over which the refocused image is convolved with the operator.
3) For the 4D light field at each focusing depth, the correspondence response $\bar{C}_{\alpha_n}(x, y)$ is obtained by computing the angular variance of the 4D light field, formula (5).
4) The defocus depth map $D_1(x, y)$ and the correspondence depth map $D_2(x, y)$ are formed from the indices $n$ that maximise the defocus response and minimise the correspondence response respectively, formulas (6) and (7).
5) The fusion of the two depth maps is realised using Markov optimization theory: the confidences $C_1(x, y) = D_1(x, y)/D_1'(x, y)$ and $C_2(x, y) = D_2(x, y)/D_2'(x, y)$ of formulas (8) and (9) are computed, where $D_1'(x, y)$ and $D_2'(x, y)$ are the depth maps obtained when the defocus response takes its second-largest value and the correspondence response takes its second-smallest value, and the final fused depth map $D(x, y)$ is obtained by solving the optimization problem of formula (10).
compared with the traditional DCDC algorithm, the algorithm is mainly different in defocus response function in the step 2), and the DCDC algorithmDefocus response in the methodBy calculating refocusing images at different focus depths +.>The Laplace value of (2) is obtained:
wherein the method comprises the steps ofTo increase algorithm robustness, in ω, to Laplacian D And calculating the Laplacian mean value corresponding to the current pixel point for the window size.
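For comparison with the energy-enhanced response above, a sketch of the baseline response of formula (11), assuming SciPy's standard Laplacian stencil and an illustrative window size:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def dcdc_defocus_response(I, win=9):
    """Baseline DCDC defocus response (formula (11)): absolute Laplacian
    of the refocused image, averaged over a window omega_D
    (win = 9 is an assumed window size)."""
    return uniform_filter(np.abs(laplace(I.astype(np.float64))), size=win)
```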
The defocus response function established in formula (11) uses the traditional Laplace operator to evaluate the degree of focus at each pixel of the refocused image; the main function of this operator is to compute the second derivatives in the horizontal and vertical directions:

$$\Delta I_{\alpha_n}(x, y) = \frac{\partial^2 I_{\alpha_n}(x, y)}{\partial x^2} + \frac{\partial^2 I_{\alpha_n}(x, y)}{\partial y^2} \tag{12}$$

wherein

$$\frac{\partial^2 I_{\alpha_n}(x, y)}{\partial x^2} = I_{\alpha_n}(x + \delta, y) + I_{\alpha_n}(x - \delta, y) - 2\, I_{\alpha_n}(x, y) \tag{13}$$
$$\frac{\partial^2 I_{\alpha_n}(x, y)}{\partial y^2} = I_{\alpha_n}(x, y + \delta) + I_{\alpha_n}(x, y - \delta) - 2\, I_{\alpha_n}(x, y) \tag{14}$$

where $\delta$ is the step size of the horizontal and vertical derivatives.
As can be seen from formulas (13) and (14), the traditional Laplace operator evaluates only the energy changes of the four pixels in the horizontal and vertical directions relative to the central pixel, as shown in FIG. 2(a); when the second derivatives in the horizontal and vertical directions have opposite signs, their energies cancel during summation, which reduces the depth acquisition accuracy of the algorithm in complex scenes. The following numeric check makes this cancellation concrete.
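A minimal check, assuming a synthetic saddle patch I(x, y) = x² − y² (illustrative data, not from the patent): the patch is strongly textured, yet its horizontal and vertical second differences cancel exactly in formula (12):

```python
import numpy as np

# Saddle patch I(x, y) = x^2 - y^2: curvature +2 along x, -2 along y.
x = np.arange(-2, 3, dtype=float)
I = x[None, :] ** 2 - x[:, None] ** 2

cy = cx = 2                                            # centre pixel (0, 0)
d2x = I[cy, cx + 1] + I[cy, cx - 1] - 2 * I[cy, cx]    # formula (13):  2.0
d2y = I[cy + 1, cx] + I[cy - 1, cx] - 2 * I[cy, cx]    # formula (14): -2.0
print(d2x + d2y)   # formula (12): 0.0 -- the two energies cancel exactly
```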
To solve this problem, the invention improves the defocus response function in the defocus depth acquisition algorithm and proposes an improved, energy-enhancement-based defocus response function. A schematic diagram of the improved second-derivative operator is shown in FIG. 2(b), drawn for M = 5 and N = 5, with the number and directions of the second derivatives marked. When (m, n) = (0, 1) and (m, n) = (1, 0), the operator computes the energy changes of the 4 horizontally and vertically adjacent pixels (the four solid circles connected by solid lines in the figure) relative to the current pixel (the hollow circle at the centre), which is functionally equivalent to the traditional Laplace operator with a second-derivative step of 1. When (m, n) = (0, 2) and (m, n) = (1, 1), it computes the energy changes of pixels in the vertical and 45° directions at distances 2 and √2 from the current pixel (the four open circles connected by dashed lines), and so on, until m = 2, n = 2, where it computes the energy changes of pixels in the 45° and 135° directions at distance 2√2 (the four solid circles connected by dashed lines). The improved second-derivative operator thus achieves energy enhancement by increasing the number and the directions of the second derivatives and by continuously varying their weights; the step-size pairs involved can be enumerated as in the sketch below.
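The following sketch lists one member of each symmetric (m, n) pair of a 5 × 5 operator together with its direction and distance; the enumeration convention is an assumption for illustration and only reproduces the counting described above:

```python
import numpy as np

# Enumerate the symmetric tap pairs of a 5x5 energy-enhanced operator
# (M = N = 5): each (m, n) step-size pair contributes a second
# difference along its own direction, cf. formula (4).
M = N = 5
for m in range(-(M // 2), M // 2 + 1):
    for n in range(-(N // 2), N // 2 + 1):
        # Keep exactly one member of each antipodal pair (m, n)/(-m, -n).
        if (m, n) <= (0, 0):
            continue
        ang = np.degrees(np.arctan2(n, m)) % 180
        print(f"(m={m:+d}, n={n:+d})  direction {ang:5.1f} deg, "
              f"distance {np.hypot(m, n):.2f}")
```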
To demonstrate that the energy-enhancement-based defocus response function is more robust for depth acquisition in complex scenes, images with complex textures were selected from an experimental database as test objects, and the DCDC algorithm and the proposed algorithm were each used to perform depth reconstruction of scenes recorded by a microlens-array light field camera.
The first test image is "shoe", shown in FIG. 3(a). The shoe surface recorded in this image has dense holes, reflected as black-and-white jumps in the texture image; according to prior knowledge, the reconstructed depth map should show depth-value jumps at the holes. After depth reconstruction with the DCDC algorithm and with the proposed method, the corresponding depth maps are shown in FIG. 3(b) and FIG. 3(c). Detail comparison of two enlarged local regions (region A, the upper rectangular frame, and region B, the lower square frame) shows: in region A, the shoe holes in the depth map obtained by the DCDC algorithm are blurred, whereas the spatial arrangement of the holes is clearer with the proposed method; in region B, the depth map obtained by the DCDC algorithm is over-smoothed, whereas the depth map obtained by the proposed method better reflects the change of local depth levels. The depth map obtained by the proposed method is more consistent with the texture map and accords with human prior judgement.
The second test image is "stapler", shown in FIG. 4(a); the depth maps obtained with the DCDC algorithm and with the proposed algorithm are shown in FIG. 4(b) and FIG. 4(c) respectively. As can be seen from region A (the lower rectangular frame of FIG. 4(a)), the middle region of the stapler body is black and its colour contrast is weak, yet its depth value jumps; the corresponding locally enlarged region of the depth map in FIG. 4(b) shows boundary artifacts, while the corresponding region of FIG. 4(c) is significantly improved. In the enlarged view of region B (the upper rectangular frame of FIG. 4(a)), there is a clear depth jump between the pink cup and the background; the enlarged region of the depth map in FIG. 4(b) loses the depth information of the cup, whereas the detailed outline of the cup is visible in the corresponding region of FIG. 4(c). This experiment further demonstrates that the proposed algorithm achieves higher depth reconstruction accuracy in local regions with complex depth information.
Because existing image databases captured with microlens-array light field cameras lack corresponding standard depth maps, the two experiments above can only compare the superiority of the proposed algorithm in visual effect. To further quantitatively evaluate the depth reconstruction accuracy of the proposed algorithm, experiments were carried out on the Stanford "benchmark" dataset, which was captured with an array light field camera; each scene contains 81 multi-view images and 1 corresponding standard depth map. Two scenes with complex depth details were selected for the experiment, as shown in FIG. 5. During the experiment, the 81 multi-view images were treated as the 81 sub-aperture images decoded from the raw image of a microlens-array light field camera to construct the 4D light field data (see the sketch after this paragraph). Scene depth estimation was performed with the DCDC algorithm and with the proposed method; the resulting depth maps and the corresponding standard depth maps are shown in FIG. 6 and FIG. 7.
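A sketch of this 4D assembly, assuming the 81 views form a 9 × 9 viewpoint grid ordered row by row (`views` is a hypothetical list of equally-sized grayscale images):

```python
import numpy as np

def build_4d_light_field(views, grid=9):
    """Stack 81 multi-view images (a 9 x 9 viewpoint grid) into a 4D
    light field with the (s, t, y, x) layout assumed in the sketches
    above; `views` is ordered row by row across the viewpoint grid."""
    H, W = views[0].shape
    return np.stack(views).reshape(grid, grid, H, W)
```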
In terms of overall depth reconstruction, the depth maps obtained by the proposed algorithm show fewer edge artifacts than the DCDC algorithm and better highlight the depth-level changes of the scene. Comparing the rectangular-frame regions in FIG. 6(b) and FIG. 6(c), the depth map recovered by the proposed algorithm reflects the slight jump of depth level in the central region and is closer to the standard depth map, whereas the DCDC algorithm loses this depth detail. Comparing the rectangular-frame regions in FIG. 7(b) and FIG. 7(c), the boundary of the tooth model in the depth map recovered by the proposed algorithm is clearer and closer to the standard depth map.
Finally, taking the standard depth map as reference, the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) were selected as evaluation indices to quantitatively evaluate the accuracy of the depth maps acquired by the DCDC algorithm and by the proposed algorithm; the results are shown in Table 1. Comparison of the data in the table shows that the depth maps obtained by the proposed algorithm have higher PSNR and lower MSE; both metrics can be computed as in the sketch after Table 1.
Table 1 quantitative evaluation of depth map accuracy
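A sketch of the two evaluation indices used in Table 1, assuming 8-bit depth maps (the peak value of 255 is an assumption):

```python
import numpy as np

def mse_psnr(depth, gt, peak=255.0):
    """Mean square error and peak signal-to-noise ratio of an estimated
    depth map against the standard depth map."""
    mse = np.mean((depth.astype(float) - gt.astype(float)) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr
```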
Aiming at the problems that, when estimating defocus depth, the defocus response function of the existing DCDC algorithm computes second derivatives in only a limited number of directions and that the horizontal and vertical second-derivative energies cancel each other, an energy-enhancement-based second-derivative operator is designed to improve the defocus response function and raise the depth acquisition accuracy in complex scenes. The operator fully considers the influence of surrounding pixels on the defocus at the current position, increases the number and the directions of the second derivatives to achieve energy enhancement, and balances the summation of second-derivative energies by setting weight coefficients. Experiments demonstrate the effectiveness of the proposed method: the depth maps obtained show clearer overall depth levels, obvious suppression of edge artifacts and more faithful local depth details, with the PSNR improved by 0.3616 dB and the mean square error reduced by 3.95% on average.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (1)

1. An improved light field depth estimation method based on an energy-enhanced defocus response, characterised by the following steps:
1) after the microlens centres of the light field camera are calibrated, a 4D light field $L_0(x_0, y_0, s_0, t_0)$ is decoded from the light field raw image; according to the light field biplane representation model and digital refocusing theory, when the position of the imaging plane of the light field camera is changed, the light field $L_{\alpha_n}(x, y, s, t)$ recorded at the new focusing depth and the original light field $L_0(x_0, y_0, s_0, t_0)$ are related by

$$L_{\alpha_n}(x, y, s, t) = L_0\!\left(s + \frac{x - s}{\alpha_n},\; t + \frac{y - t}{\alpha_n},\; s,\; t\right) \tag{1}$$

wherein $\alpha_n$ is a scale factor describing the movement of the image plane and $n = 0, 1, 2, \ldots, 255$ is the index corresponding to each $\alpha_n$ value; integrating $L_{\alpha_n}$ along the $s$ and $t$ directions yields the refocused image for each $\alpha_n$:

$$I_{\alpha_n}(x, y) = \frac{1}{N_s N_t} \sum_{s}\sum_{t} L_{\alpha_n}(x, y, s, t) \tag{2}$$

wherein $N_s$, $N_t$ denote the angular resolution of the 4D light field;
2) for the 4D light field at each focusing depth, the defocus response $\bar{D}_{\alpha_n}(x, y)$ is computed; energy enhancement is achieved by increasing the number and the directions of the second derivatives while giving them differing weights, and the defocus response function is expressed as

$$\bar{D}_{\alpha_n}(x, y) = \frac{1}{|\omega'_D|} \sum_{(x', y') \in \omega'_D} \left| \Delta_E\, I_{\alpha_n}(x', y') \right| \tag{3}$$

wherein $\omega'_D$ is a window around the current pixel and $\Delta_E$ denotes the energy-enhancement-based second-derivative operator

$$\Delta_E I_{\alpha_n}(x, y) = \sum_{m}\sum_{n} w_{m,n} \left[ I_{\alpha_n}(x + m, y + n) + I_{\alpha_n}(x - m, y - n) - 2\, I_{\alpha_n}(x, y) \right] \tag{4}$$

wherein $m$ and $n$ are the horizontal and vertical second-derivative step sizes and $w_{m,n}$ is the weight factor, largest near the centre point and decreasing with distance from it; $M$ and $N$ may only take odd values, and $M \times N$ is the size of the window over which the refocused image is convolved with the second-derivative operator;
3) for the 4D light field at each focusing depth, the correspondence response $\bar{C}_{\alpha_n}(x, y)$ is obtained by computing the angular variance of the 4D light field:

$$\bar{C}_{\alpha_n}(x, y) = \frac{1}{N_s N_t} \sum_{s}\sum_{t} \left( L_{\alpha_n}(x, y, s, t) - I_{\alpha_n}(x, y) \right)^2 \tag{5}$$

4) the index $n$ of the $\alpha_n$ maximising the defocus response forms the defocus depth map $D_1(x, y)$, and the index $n$ of the $\alpha_n$ minimising the correspondence response forms the correspondence depth map $D_2(x, y)$:

$$D_1(x, y) = \arg\max_{n} \bar{D}_{\alpha_n}(x, y) \tag{6}$$
$$D_2(x, y) = \arg\min_{n} \bar{C}_{\alpha_n}(x, y) \tag{7}$$

5) the fusion of the defocus depth map and the correspondence depth map is realised using Markov optimization theory; first, the confidences corresponding to the two depth maps are calculated:

$$C_1(x, y) = D_1(x, y) / D_1'(x, y) \tag{8}$$
$$C_2(x, y) = D_2(x, y) / D_2'(x, y) \tag{9}$$

wherein $D_1'(x, y)$ denotes the depth map obtained from formula (6) when the defocus response takes its second-largest value and $D_2'(x, y)$ denotes the depth map obtained from formula (7) when the correspondence response takes its second-smallest value; the final fused depth map $D(x, y)$ is obtained by solving an optimization problem (formula (10)) that weighs fidelity to $D_1$ and $D_2$ by these confidences and penalises $\Delta D$,
wherein $\Delta$ is the Laplace operator.
CN201911073694.0A 2019-11-06 2019-11-06 Improved light field depth estimation method based on energy enhanced defocus response Active CN110827343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911073694.0A CN110827343B (en) 2019-11-06 2019-11-06 Improved light field depth estimation method based on energy enhanced defocus response

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911073694.0A CN110827343B (en) 2019-11-06 2019-11-06 Improved light field depth estimation method based on energy enhanced defocus response

Publications (2)

Publication Number Publication Date
CN110827343A CN110827343A (en) 2020-02-21
CN110827343B true CN110827343B (en) 2024-01-26

Family

ID=69552711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911073694.0A Active CN110827343B (en) 2019-11-06 2019-11-06 Improved light field depth estimation method based on energy enhanced defocus response

Country Status (1)

Country Link
CN (1) CN110827343B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950698B (en) * 2021-03-18 2024-03-26 北京拙河科技有限公司 Depth estimation method, device, medium and equipment based on binocular defocused image
CN115359108B (en) * 2022-09-15 2024-07-02 上海人工智能创新中心 Defocus-based depth prediction method and system under focus stack reconstruction guidance


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751502A (en) * 2008-12-18 2010-06-23 睿初科技公司 Method and system for correcting window maximized optic proximity effect in photoetching process
CN106340041A (en) * 2016-09-18 2017-01-18 杭州电子科技大学 Light field camera depth estimation method based on cascade shielding filtering filter
CN110036410A (en) * 2016-10-18 2019-07-19 弗托斯传感与算法公司 For obtaining the device and method of range information from view
CN107292819A (en) * 2017-05-10 2017-10-24 重庆邮电大学 A kind of infrared image super resolution ratio reconstruction method protected based on edge details
WO2019042185A1 (en) * 2017-08-31 2019-03-07 深圳岚锋创视网络科技有限公司 Light-field camera-based depth estimating method and system and portable terminal
CN107995424A (en) * 2017-12-06 2018-05-04 太原科技大学 Light field total focus image generating method based on depth map
CN109993764A (en) * 2019-04-03 2019-07-09 清华大学深圳研究生院 A kind of light field depth estimation method based on frequency domain energy distribution

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Yu Tian et al., "A multi-order derivative feature-based quality assessment model for light field image", J. Vis. Commun. Image R., pp. 212–217 *
Michael W. Tao et al., "Depth from Combining Defocus and Correspondence Using Light-Field Cameras", 2013 IEEE International Conference on Computer Vision, pp. 673–680 *
Hendrik Schilling et al., "Trust your Model: Light Field Depth Estimation with Inline Occlusion Handling", CVPR, pp. 4530–4538 *
Xie Yingxian et al., "Light field all-in-focus image fusion based on wavelet-domain sharpness evaluation", Journal of Beijing University of Aeronautics and Astronautics, vol. 45, no. 9, pp. 1848–1854 *
Pan Jiaqi, "Research on depth estimation of light field cameras based on depth cues", China Masters' Theses Full-text Database, Information Science and Technology, I138-340 *
Hu Liangmei et al., "Depth extraction from light field images guided by focusness detection and color information", Journal of Image and Graphics, vol. 21, no. 2, pp. 155–164 *

Also Published As

Publication number Publication date
CN110827343A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN112634341B (en) Method for constructing depth estimation model of multi-vision task cooperation
CN108682026B (en) Binocular vision stereo matching method based on multi-matching element fusion
Tao et al. Depth from combining defocus and correspondence using light-field cameras
CN108596965B (en) Light field image depth estimation method
WO2018000752A1 (en) Monocular image depth estimation method based on multi-scale cnn and continuous crf
Mehta et al. Structured adversarial training for unsupervised monocular depth estimation
CN108337434B (en) Out-of-focus virtual refocusing method for light field array camera
CN107038719A (en) Depth estimation method and system based on light field image angle domain pixel
Dellepiane et al. Flow-based local optimization for image-to-geometry projection
CN106340041B A light field camera depth estimation method based on a cascaded occlusion-filtering filter
CN108564620B (en) Scene depth estimation method for light field array camera
CN110009693B (en) Rapid blind calibration method of light field camera
CN106651943B A light field camera depth estimation method based on an occlusion geometry complementation model
Lee et al. Depth estimation from light field by accumulating binary maps based on foreground–background separation
CN110827343B (en) Improved light field depth estimation method based on energy enhanced defocus response
CN109949354B (en) Light field depth information estimation method based on full convolution neural network
CN106257537B A spatial depth extraction method based on light field information
CN110322572A A binocular-vision-based three-dimensional reconstruction method for underwater culvert tunnel inner walls
CN109064505A A depth estimation method based on sliding-window tensor extraction
Martínez-Usó et al. Depth estimation in integral imaging based on a maximum voting strategy
CN108090920B (en) Light field image depth stream estimation method
CN112132771B (en) Multi-focus image fusion method based on light field imaging
CN109615650B (en) Light field flow estimation method based on variational method and shielding complementation
CN111260712A (en) Depth estimation method and device based on refocusing focal polar line diagram neighborhood distribution
Li et al. A Bayesian approach to uncertainty-based depth map super resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant