CN108389171A - A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable - Google Patents

A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable

Info

Publication number
CN108389171A
CN108389171A
Authority
CN
China
Prior art keywords
image
light field
sub
depth
aperture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810189606.2A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201810189606.2A priority Critical patent/CN108389171A/en
Publication of CN108389171A publication Critical patent/CN108389171A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10052 Images from lightfield camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes a light field deblurring and depth estimation method based on jointly estimated blur variables. Its main components are: the sub-aperture images, the light field blur model, the update of the latent image, and the update of the camera motion and depth map. The process is as follows: first, the central view of the sub-aperture images and the midpoint of the shutter time are selected as the reference angular position and timestamp of the latent image; the depth map is initialized using the input sub-aperture images of the light field, and the camera motion is initialized from local linear blur kernels and the initial scene depth; the latent image, depth map and camera motion are then jointly optimized; finally, a sharp image, light field, depth map and camera motion are obtained as output. The proposed joint estimation simultaneously achieves high-quality light field deblurring and depth estimation under arbitrary 6-degree-of-freedom camera motion and unconstrained scene depth, substantially improves image sharpness, and performs well under general camera motion and scene depth variation.

Description

Light field deblurring and depth estimation method based on joint estimation of blur variables
Technical Field
The invention relates to the field of image restoration, in particular to a light field deblurring and depth estimation method based on joint estimation of a blur variable.
Background
In the digital information age, image deblurring is an important branch of image restoration technology; it remains a challenging problem with substantial research value and social significance. Owing to shooting conditions or environmental factors, captured images often suffer from blur and distortion, and image information may be lost or degraded during generation, transmission, recording and storage because of defects in the imaging system, transmission medium or recording equipment. Image deblurring techniques are therefore important in many fields. In the security field, footage recorded by surveillance equipment is typically blurred when a target person or object moves quickly; deblurring can recover sharp images and help security staff or police obtain useful information. In medical imaging, the complex and changeable internal environment of human organs often makes captured images hard to interpret; deblurring can restore them to some extent and assist doctors in diagnosis and treatment. In astronomical observation, space exploration, remote sensing and related fields, the unknown, complex and variable nature of the explored environment makes deblurring all the more necessary for recovering clear images and determining environmental information. However, existing image deblurring techniques struggle to estimate scene depth from only a single observation and still cannot fully handle the non-uniform blur caused by scene depth variation.
The invention provides a light field deblurring and depth estimation method based on joint estimation of a blur variable. The joint estimation provided by the invention realizes high-quality light field deblurring and depth estimation under the motion of any 6-degree-of-freedom camera and the unconstrained scene depth, greatly improves the definition of an image, and works well under the condition of general camera motion and scene depth change.
Disclosure of Invention
To address the difficulty of estimating scene depth from only a single observation, the invention aims to provide a light field deblurring and depth estimation method based on joint estimation of blur variables.
In order to solve the above problems, the present invention provides a light field deblurring and depth estimation method based on joint estimation of a blur variable, which mainly comprises:
(I) the sub-aperture images;
(II) a light field blur model;
(III) updating of the latent image;
(IV) updating of the camera motion and depth map.
Wherein, for the sub-aperture images: the pixels of the four-dimensional light field have four coordinates, (x, y) for space and (u, v) for angle; the light field can be viewed as a set of u × v multi-view images with a narrow baseline, commonly referred to as sub-aperture images I(x, u), where x = (x, y) and u = (u, v); for each sub-aperture image, the blurred image B(x, u) is the average of the sharp image I_t(x, u) over the shutter-open interval [t_0, t_1]; all blurred sub-aperture images are approximated by projecting a single latent image under three-dimensional rigid motion.
Further, for this approximation, the central view c of the sub-aperture images and the midpoint t_r of the shutter time are selected as the reference angular position and timestamp of the latent image I(x, c); the pixel correspondence from each sub-aperture image to the latent image is expressed by the warping function of equation (1):
w_t(x, u) computes the pixel location warped from u to c and from t to t_r; the pose matrices represent the 6-degree-of-freedom camera pose at the respective angular position and timestamp; D_t(x, u) is the depth map at timestamp t.
In the proposed model, the blur operator Ψ(·) is defined by approximating the integral defining B(x, u) as a finite sum:
B(x, u) ≈ Ψ(I)(x, u) = (1/M) Σ_{m=1}^{M} I(w_{t_m}(x, u)),   (2)
where t_m is the m-th uniformly sampled timestamp in the interval [t_0, t_1].
Further, for this finite sum, only the central-viewpoint variables need to be determined, namely the central-view depth and pose; the quantities related to u in the warping function are derived from them, so all views are parameterized by the central-view variables. Because the relative pose P_{c→u} changes over time, the pose at each sampled timestamp is obtained by interpolation on SE(3) (equation (3)), where exp and log denote the exponential and logarithmic mappings between the Lie group SE(3) and the Lie algebra se(3). To minimize viewpoint shift of the latent image, the pose at t_m = t_r is taken to be the identity matrix; the depth maps at the other timestamps are likewise represented from the central-view depth by forward warping and interpolation.
Wherein, for the light field blur model: to estimate all blur variables of the proposed model, the latent variables, namely the latent image, the depth map and the camera motion, must be recovered. The energy function is modeled as follows:
E(I, D, P) = Σ_u ‖B(·, u) − Ψ(I)(·, u)‖_1 + λ_I ‖∇I‖_1 + λ_D ‖∇D‖_1,   (4)
where the data term enforces brightness consistency between the input blurred light field and the restored light field, and the last two terms are total variation regularizations of the latent image and the depth map, respectively.
Further, regarding the modeling of the energy function: in the energy model, the depth map and camera motion are implicitly contained in the warping function; the three latent variables are optimized in an alternating manner, minimizing over one variable while the others are fixed, and equation (4) is optimized for the three variables in turn; the L1 terms are optimized approximately using iteratively reweighted least squares (IRLS); the optimization converges in a small number of iterations (fewer than 10).
Wherein, for the updating of the latent image: the algorithm first updates the latent image. In the data term, if the depth map and the camera motion are held fixed, the blur operator (2) simplifies to a linear matrix multiplication, and updating the latent image is equivalent to minimizing (4) as follows:
min_i ‖A i − b‖_1 + λ_I ‖∇i‖_1,   (5)
where i and b are the vectorized latent and blurred images and A is the blur operator in the form of a square n × n matrix, n being the number of pixels of the central-view sub-aperture image; the total variation regularization, acting as a prior favouring latent images with sharp boundaries, suppresses artifacts.
Further, for the update of the camera motion and depth map: because equation (2) is nonlinear in the depth and the camera motion, it must be approximated in linear form for efficient computation; the blur operation (2) is approximated by its first-order expansion. With D_0(x, c) and the initial pose denoting the initial variables, equation (2) is approximated as in equation (6), where F is the motion flow generated by the warping function and the motion is represented by a six-dimensional vector on se(3). Once this linear approximation is in place, equation (4) can be optimized using IRLS; the resulting increments of the current depth and motion vector are applied as updates (equation (7)), the camera pose being updated through the exponential mapping of the motion vector.
Further, for the camera motion: first, the depth map is initialized using the input sub-aperture images of the light field; the initial depth is obtained by assuming the camera does not move and minimizing equation (4), which then reduces to a simple multi-view stereo matching problem. The camera motion is initialized from local linear blur kernels and the initial scene depth.
Further, for the depth initialization: local linear blur kernels of B(x, c) are first estimated; then the pixel coordinates shifted by the linear kernels are fitted to the coordinates re-projected by the warping function, as in equation (8), where x_i is a sampled pixel position and l(x_i) is the point to which the endpoint of the linear kernel moves x_i. The camera motion is obtained by fitting the re-projected positions to l(·); since the scene depth is fixed at the initial depth map, the camera motion is the only unknown. Random sample consensus (RANSAC) is used to find the camera motion that best describes the pixels' linear kernels; the number of random samples N is always 4.
Drawings
FIG. 1 is a system flow chart of a light field deblurring and depth estimation method based on joint estimation of a blur variable according to the present invention.
FIG. 2 illustrates the joint estimation of the light field deblurring and depth estimation method based on joint estimation of blur variables.
FIG. 3 is an example of iterative joint estimation of a light field deblurring and depth estimation method based on joint estimation of a blur variable according to the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application can be combined with each other without conflict, and the present invention is further described in detail with reference to the drawings and specific embodiments.
FIG. 1 is a system flow chart of the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention. The method mainly comprises the sub-aperture images, the light field blur model, the updating of the latent image, and the updating of the camera motion and depth map.
The main process is as follows: the method comprises the steps of firstly selecting a central view of a sub-aperture image and a middle value of shutter time as a reference angle position and a time stamp of a latent image, initializing a depth map by using an input sub-aperture image of a light field, initializing camera motion from a local linear blurring kernel and an initial scene depth, then jointly optimizing the latent image, the depth map and the camera motion, and finally obtaining a clear image, the light field, the depth map and the camera motion as output.
For the sub-aperture images: the pixels of the four-dimensional light field have four coordinates, (x, y) for space and (u, v) for angle; the light field can be viewed as a set of u × v multi-view images with a narrow baseline, commonly referred to as sub-aperture images I(x, u), where x = (x, y) and u = (u, v); for each sub-aperture image, the blurred image B(x, u) is the average of the sharp image I_t(x, u) over the shutter-open interval [t_0, t_1]; all blurred sub-aperture images are approximated by projecting a single latent image under three-dimensional rigid motion.
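As a concrete illustration of this indexing, the following sketch (function names and the axis layout are illustrative assumptions, not from the patent) splits a 4D light field array into its sub-aperture views and picks the central view c used later as the reference:

```python
import numpy as np

def sub_aperture_images(lightfield):
    """Split a 4D light field L[y, x, v, u] into its v*u sub-aperture views.

    Fixing one angular coordinate (u, v) yields one narrow-baseline
    multi-view image I(x, u), as described in the text.
    """
    H, W, V, U = lightfield.shape
    return [[lightfield[:, :, v, u] for u in range(U)] for v in range(V)]

def central_view(lightfield):
    """Central sub-aperture image I(x, c): the reference view for the latent image."""
    _, _, V, U = lightfield.shape
    return lightfield[:, :, V // 2, U // 2]
```

For odd angular resolutions (e.g. a 9 × 9 angular grid) the central view is well defined; for even resolutions the integer division picks one of the central candidates.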
The central view c of the sub-aperture images and the midpoint t_r of the shutter time are selected as the reference angular position and timestamp of the latent image I(x, c); the pixel correspondence from each sub-aperture image to the latent image is expressed by the warping function of equation (1):
w_t(x, u) computes the pixel location warped from u to c and from t to t_r; the pose matrices represent the 6-degree-of-freedom camera pose at the respective angular position and timestamp; D_t(x, u) is the depth map at timestamp t.
In the proposed model, the blur operator Ψ(·) is defined by approximating the integral defining B(x, u) as a finite sum:
B(x, u) ≈ Ψ(I)(x, u) = (1/M) Σ_{m=1}^{M} I(w_{t_m}(x, u)),   (2)
where t_m is the m-th uniformly sampled timestamp in the interval [t_0, t_1].
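The finite-sum blur can be sketched numerically as follows; the warp-field representation and the nearest-neighbour sampling are simplifying assumptions (the method itself derives the warps from depth and pose and uses proper interpolation):

```python
import numpy as np

def blur_operator(latent, warps):
    """Finite-sum blur: average the latent image sampled along the
    per-timestamp warps w_{t_m}, one term per sampled timestamp t_m."""
    H, W = latent.shape
    acc = np.zeros((H, W))
    for w in warps:  # w[..., 0] = row coords, w[..., 1] = col coords
        rows = np.clip(np.round(w[..., 0]).astype(int), 0, H - 1)
        cols = np.clip(np.round(w[..., 1]).astype(int), 0, W - 1)
        acc += latent[rows, cols]  # sample I at the warped locations
    return acc / len(warps)       # the 1/M normalisation of the finite sum
```

With identity warps (a static camera) the operator returns the latent image unchanged, matching the intuition that no camera motion produces no motion blur.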
Only the central-viewpoint variables need to be determined, namely the central-view depth and pose; the quantities related to u in the warping function are derived from them, so all views are parameterized by the central-view variables. Because the relative pose P_{c→u} changes over time, the pose at each sampled timestamp is obtained by interpolation on SE(3) (equation (3)), where exp and log denote the exponential and logarithmic mappings between the Lie group SE(3) and the Lie algebra se(3). To minimize viewpoint shift of the latent image, the pose at t_m = t_r is taken to be the identity matrix; the depth maps at the other timestamps are likewise represented from the central-view depth by forward warping and interpolation.
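The exp/log pose parameterization can be sketched with SciPy's matrix exponential and logarithm; linearly scaling the twist (the se(3) element) over the shutter interval is an assumed interpolation consistent with the description above, not a formula quoted from the patent:

```python
import numpy as np
from scipy.linalg import expm, logm

def interpolate_pose(P_end, tau):
    """Pose at normalised time tau in [0, 1] along a rigid motion on SE(3).

    P_end is the 4x4 pose over the whole interval; logm maps it to a twist
    matrix in se(3), which is scaled and mapped back with expm. tau = 0
    gives the identity (the reference timestamp t_r), tau = 1 gives P_end.
    """
    xi = logm(P_end)                # 4x4 twist matrix in se(3)
    return np.real(expm(tau * xi))  # back to SE(3)
```

Note `logm` returns the principal logarithm, so this sketch is only valid for rotations well below 180 degrees, which is the regime of intra-exposure camera shake.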
For the light field blur model: to estimate all blur variables of the proposed model, the latent variables, namely the latent image, the depth map and the camera motion, must be recovered. The energy function is modeled as follows:
E(I, D, P) = Σ_u ‖B(·, u) − Ψ(I)(·, u)‖_1 + λ_I ‖∇I‖_1 + λ_D ‖∇D‖_1,   (4)
where the data term enforces brightness consistency between the input blurred light field and the restored light field, and the last two terms are total variation regularizations of the latent image and the depth map, respectively.
In the energy model, the depth map and camera motion are implicitly contained in the warping function; the three latent variables are optimized in an alternating manner, minimizing over one variable while the others are fixed, and equation (4) is optimized for the three variables in turn; the L1 terms are optimized approximately using iteratively reweighted least squares (IRLS); the optimization converges in a small number of iterations (fewer than 10).
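The IRLS step mentioned above replaces the L1 objective with a sequence of weighted least-squares problems; the following is a generic minimal sketch for min_x ‖Ax − b‖_1, not the patent's exact solver:

```python
import numpy as np

def irls_l1(A, b, n_iter=10, eps=1e-6):
    """Approximate min_x ||Ax - b||_1 by iteratively reweighted least squares.

    Each pass solves the normal equations of a weighted L2 problem whose
    weights 1/|r_i| re-create the L1 objective; as noted in the text,
    fewer than 10 iterations usually suffice.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain L2 initialisation
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # IRLS weights, clamped near zero
        Aw = A * w[:, None]                    # rows of A scaled by weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

The robustness to outliers is what makes the L1 data term attractive: fitting a constant to [1, 1, 1, 1, 100] drives the estimate toward the median 1 rather than the mean 20.8.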
FIG. 2 illustrates the joint estimation of the light field deblurring and depth estimation method based on joint estimation of blur variables. The algorithm jointly estimates the latent image, depth map and camera motion from a single light field. Panel (a) is the central view of the blurred light field sub-aperture images; panel (b) is the deblurred counterpart of panel (a); panel (c) is the estimated depth map; panel (d) shows the camera motion path and direction (6 degrees of freedom).
The algorithm first updates the latent image. In the data term, if the depth map and the camera motion are held fixed, the blur operator (2) simplifies to a linear matrix multiplication, and updating the latent image is equivalent to minimizing (4) as follows:
min_i ‖A i − b‖_1 + λ_I ‖∇i‖_1,   (5)
where i and b are the vectorized latent and blurred images and A is the blur operator in the form of a square n × n matrix, n being the number of pixels of the central-view sub-aperture image; the total variation regularization, acting as a prior favouring latent images with sharp boundaries, suppresses artifacts.
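The square-matrix form of the blur operator can be sketched by accumulating, for each sampled timestamp, one nearest-neighbour sampling matrix; the construction below (illustrative names, simplified warps) yields an n × n sparse matrix A with b = A i for vectorized images:

```python
import numpy as np
from scipy.sparse import coo_matrix

def blur_matrix(warps, shape):
    """Assemble the blur operator as an n x n sparse matrix (n = H*W).

    Row p of A averages, over the M timestamps, the latent pixel that warps
    onto output pixel p; duplicate (row, col) entries are summed by tocsr().
    """
    H, W = shape
    n = H * W
    rows_idx, cols_idx, vals = [], [], []
    M = len(warps)
    for w in warps:
        r = np.clip(np.round(w[..., 0]).astype(int), 0, H - 1)
        c = np.clip(np.round(w[..., 1]).astype(int), 0, W - 1)
        rows_idx.append(np.arange(n))            # one output entry per pixel
        cols_idx.append((r * W + c).ravel())     # source pixel, row-major
        vals.append(np.full(n, 1.0 / M))         # each timestamp contributes 1/M
    return coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows_idx), np.concatenate(cols_idx))),
        shape=(n, n),
    ).tocsr()
```

With identity warps the assembled matrix reduces to the identity, so applying it to a vectorized image returns the image itself; for real warps, A feeds directly into the IRLS solve of the latent-image update.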
Because equation (2) is nonlinear in the depth and the camera motion, it must be approximated in linear form for efficient computation; the blur operation (2) is approximated by its first-order expansion. With D_0(x, c) and the initial pose denoting the initial variables, equation (2) is approximated as in equation (6), where F is the motion flow generated by the warping function and the motion is represented by a six-dimensional vector on se(3). Once this linear approximation is in place, equation (4) can be optimized using IRLS; the resulting increments of the current depth and motion vector are applied as updates (equation (7)), the camera pose being updated through the exponential mapping of the motion vector.
First, the depth map is initialized using the input sub-aperture images of the light field; the initial depth is obtained by assuming the camera does not move and minimizing equation (4), which then reduces to a simple multi-view stereo matching problem. The camera motion is initialized from local linear blur kernels and the initial scene depth.
Local linear blur kernels of B(x, c) are first estimated; then the pixel coordinates shifted by the linear kernels are fitted to the coordinates re-projected by the warping function, as in equation (8), where x_i is a sampled pixel position and l(x_i) is the point to which the endpoint of the linear kernel moves x_i. The camera motion is obtained by fitting the re-projected positions to l(·); since the scene depth is fixed at the initial depth map, the camera motion is the only unknown. Random sample consensus (RANSAC) is used to find the camera motion that best describes the pixels' linear kernels; the number of random samples N is always 4.
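The RANSAC fit can be sketched as follows; for brevity the candidate motion is a global 2D translation of the kernel endpoints rather than a full 6-degree-of-freedom pose, so the sampling-and-consensus structure (N = 4 samples, inlier counting) is the point of the example, not the motion model:

```python
import numpy as np

def ransac_motion(points, endpoints, n_samples=4, n_iters=100, thresh=1.0, seed=None):
    """RANSAC over per-pixel linear-kernel endpoints.

    Repeatedly draw N = 4 random correspondences, fit a candidate motion,
    and keep the candidate explaining the most endpoints within `thresh`.
    """
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=n_samples, replace=False)
        model = np.mean(endpoints[idx] - points[idx], axis=0)  # candidate motion
        residual = np.linalg.norm(endpoints - (points + model), axis=1)
        inliers = int((residual < thresh).sum())
        if inliers > best_inliers:
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Even with a fifth of the kernels grossly corrupted, a handful of all-inlier 4-samples is virtually guaranteed over 100 iterations, and the consensus model ignores the outliers entirely.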
FIG. 3 shows an example of the iterative joint estimation of the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention. The proposed method converges in a small number of iterations. Panels (a) and (b) show the input blurred image and the deblurring result after iteration; panels (c) and (d) show the initial blurry depth map and the depth estimation result after iteration. The proposed joint estimation achieves high-quality light field deblurring and depth estimation under arbitrary 6-degree-of-freedom camera motion and unconstrained scene depth, greatly improves image sharpness, and works well under general camera motion and scene depth variation.
It will be appreciated by persons skilled in the art that the invention is not limited to details of the foregoing embodiments and that the invention can be embodied in other specific forms without departing from the spirit or scope of the invention. In addition, various modifications and alterations of this invention may be made by those skilled in the art without departing from the spirit and scope of this invention, and such modifications and alterations should also be viewed as being within the scope of this invention. It is therefore intended that the following appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (10)

1. A light field deblurring and depth estimation method based on joint estimation of blur variables, characterized by mainly comprising: the sub-aperture images (I); a light field blur model (II); updating of the latent image (III); and updating of the camera motion and depth map (IV).
2. The sub-aperture images (I) according to claim 1, characterized in that the pixels of the four-dimensional light field have four coordinates, (x, y) for space and (u, v) for angle; the light field can be regarded as a set of u × v multi-view images with a narrow baseline, commonly referred to as sub-aperture images I(x, u), where x = (x, y) and u = (u, v); for each sub-aperture image, the blurred image B(x, u) is the average of the sharp image I_t(x, u) over the shutter-open interval [t_0, t_1]; all blurred sub-aperture images are approximated by projecting a single latent image under three-dimensional rigid motion.
3. The approximation according to claim 2, characterized in that the central view c of the sub-aperture images and the midpoint t_r of the shutter time are selected as the reference angular position and timestamp of the latent image I(x, c); the pixel correspondence from each sub-aperture image to the latent image is expressed by the warping function of equation (1):
w_t(x, u) computes the pixel location warped from u to c and from t to t_r; the pose matrices represent the 6-degree-of-freedom camera pose at the respective angular position and timestamp; D_t(x, u) is the depth map at timestamp t;
in the proposed model, the blur operator Ψ(·) is defined by approximating the integral defining B(x, u) as a finite sum:
B(x, u) ≈ Ψ(I)(x, u) = (1/M) Σ_{m=1}^{M} I(w_{t_m}(x, u)),   (2)
where t_m is the m-th uniformly sampled timestamp in the interval [t_0, t_1].
4. The finite sum according to claim 3, characterized in that only the central-viewpoint variables need to be determined, namely the central-view depth and pose; the quantities related to u in the warping function are derived from them, so all views are parameterized by the central-view variables; because the relative pose P_{c→u} changes over time, the pose at each sampled timestamp is obtained by interpolation on SE(3) (equation (3)), where exp and log denote the exponential and logarithmic mappings between the Lie group SE(3) and the Lie algebra se(3); to minimize viewpoint shift of the latent image, the pose at t_m = t_r is taken to be the identity matrix; the depth maps at the other timestamps are likewise represented from the central-view depth by forward warping and interpolation.
5. The light field blur model (II) according to claim 1, characterized in that, to estimate all blur variables of the proposed light field blur model, the latent variables, namely the latent image, the depth map and the camera motion, must be recovered; the energy function is modeled as follows:
E(I, D, P) = Σ_u ‖B(·, u) − Ψ(I)(·, u)‖_1 + λ_I ‖∇I‖_1 + λ_D ‖∇D‖_1,   (4)
where the data term enforces brightness consistency between the input blurred light field and the restored light field, and the last two terms are total variation regularizations of the latent image and the depth map, respectively.
6. The modeling of the energy function according to claim 5, characterized in that, in the energy model, the depth map and camera motion are implicitly contained in the warping function; the three latent variables are optimized in an alternating manner, minimizing over one variable while the others are fixed; equation (4) is optimized for the three variables in turn; the L1 terms are optimized approximately using iteratively reweighted least squares (IRLS); the optimization converges in a small number of iterations (fewer than 10).
7. The updating of the latent image (III) according to claim 1, characterized in that the algorithm first updates the latent image; in the data term, if the depth map and the camera motion are held fixed, the blur operator (2) simplifies to a linear matrix multiplication, and updating the latent image is equivalent to minimizing (4) as follows:
min_i ‖A i − b‖_1 + λ_I ‖∇i‖_1,   (5)
where i and b are the vectorized latent and blurred images and A is the blur operator in the form of a square n × n matrix, n being the number of pixels of the central-view sub-aperture image; the total variation regularization, acting as a prior favouring latent images with sharp boundaries, suppresses artifacts.
8. The updating of the camera motion and depth map (IV) according to claim 1, characterized in that, because equation (2) is nonlinear in the depth and the camera motion, it must be approximated in linear form for efficient computation; the blur operation (2) is approximated by its first-order expansion; with D_0(x, c) and the initial pose denoting the initial variables, equation (2) is approximated as in equation (6), where F is the motion flow generated by the warping function and the motion is represented by a six-dimensional vector on se(3); once this linear approximation is in place, equation (4) can be optimized using IRLS; the resulting increments of the current depth and motion vector are applied as updates (equation (7)), the camera pose being updated through the exponential mapping of the motion vector.
9. The camera motion according to claim 8, characterized in that first a depth map is initialized using the input sub-aperture images of the light field; the initial depth is obtained by assuming the camera does not move and minimizing equation (4), which then reduces to a simple multi-view stereo matching problem; the camera motion is initialized from local linear blur kernels and the initial scene depth.
10. The depth initialization according to claim 9, characterized in that local linear blur kernels of B(x, c) are first estimated; then the pixel coordinates shifted by the linear kernels are fitted to the coordinates re-projected by the warping function, as in equation (8), where x_i is a sampled pixel position and l(x_i) is the point to which the endpoint of the linear kernel moves x_i; the camera motion is obtained by fitting the re-projected positions to l(·); since the scene depth is fixed at the initial depth map, the camera motion is the only unknown; random sample consensus (RANSAC) is used to find the camera motion that best describes the pixels' linear kernels; the number of random samples N is always 4.
CN201810189606.2A 2018-03-08 2018-03-08 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable Withdrawn CN108389171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810189606.2A CN108389171A (en) 2018-03-08 2018-03-08 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810189606.2A CN108389171A (en) 2018-03-08 2018-03-08 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable

Publications (1)

Publication Number Publication Date
CN108389171A true CN108389171A (en) 2018-08-10

Family

ID=63067024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810189606.2A Withdrawn CN108389171A (en) 2018-03-08 2018-03-08 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable

Country Status (1)

Country Link
CN (1) CN108389171A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706346A (en) * 2019-09-17 2020-01-17 北京优科核动科技发展有限公司 Space-time joint optimization reconstruction method and system
CN111179333A (en) * 2019-12-09 2020-05-19 天津大学 Defocus fuzzy kernel estimation method based on binocular stereo vision
CN111191618A (en) * 2020-01-02 2020-05-22 武汉大学 KNN scene classification method and system based on matrix group
CN112150526A (en) * 2020-07-27 2020-12-29 浙江大学 Light field image depth estimation method based on depth learning
CN112184731A (en) * 2020-09-28 2021-01-05 北京工业大学 Multi-view stereo depth estimation method based on antagonism training

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105704371A (en) * 2016-01-25 2016-06-22 深圳市未来媒体技术研究院 Light field refocusing method
CN106803892A (en) * 2017-03-13 2017-06-06 中国科学院光电技术研究所 Light field high-definition imaging method based on light field measurement

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN105704371A (en) * 2016-01-25 2016-06-22 深圳市未来媒体技术研究院 Light field refocusing method
CN106803892A (en) * 2017-03-13 2017-06-06 中国科学院光电技术研究所 Light field high-definition imaging method based on light field measurement

Non-Patent Citations (1)

Title
DONGWOO LEE: "Joint Blind Motion Deblurring and Depth Estimation of Light Field", arXiv:1711.10918v1 *

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN110706346A (en) * 2019-09-17 2020-01-17 北京优科核动科技发展有限公司 Space-time joint optimization reconstruction method and system
CN110706346B (en) * 2019-09-17 2022-11-15 浙江荷湖科技有限公司 Space-time joint optimization reconstruction method and system
CN111179333A (en) * 2019-12-09 2020-05-19 天津大学 Defocus fuzzy kernel estimation method based on binocular stereo vision
CN111179333B (en) * 2019-12-09 2024-04-26 天津大学 Defocus blur kernel estimation method based on binocular stereo vision
CN111191618A (en) * 2020-01-02 2020-05-22 武汉大学 KNN scene classification method and system based on matrix group
CN112150526A (en) * 2020-07-27 2020-12-29 浙江大学 Light field image depth estimation method based on depth learning
CN112184731A (en) * 2020-09-28 2021-01-05 北京工业大学 Multi-view stereo depth estimation method based on antagonism training
CN112184731B (en) * 2020-09-28 2024-05-28 北京工业大学 Multi-view stereoscopic depth estimation method based on contrast training

Similar Documents

Publication Publication Date Title
CN108389171A (en) A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable
Yue et al. Image super-resolution: The techniques, applications, and future
Zhu et al. Removing atmospheric turbulence via space-invariant deconvolution
Su et al. Rolling shutter motion deblurring
CN103649998B (en) The method of the parameter set being defined as determining the attitude of photographing unit and/or design for determining the three dimensional structure of at least one real object
CN110945565A (en) Dense visual SLAM using probabilistic bin maps
JP2018206371A (en) Computer implemented image reconstruction system and image reconstruction method
CN113330486A (en) Depth estimation
CN112261315B (en) High-resolution calculation imaging system and method based on camera array aperture synthesis
CN113450396B (en) Three-dimensional/two-dimensional image registration method and device based on bone characteristics
CN111899282A (en) Pedestrian trajectory tracking method and device based on binocular camera calibration
CN114140510A (en) Incremental three-dimensional reconstruction method and device and computer equipment
CN110517211B (en) Image fusion method based on gradient domain mapping
Ma et al. An operational superresolution approach for multi-temporal and multi-angle remotely sensed imagery
CN112767467A (en) Double-image depth estimation method based on self-supervision deep learning
WO2019045722A1 (en) Methods, devices and computer program products for 3d mapping and pose estimation of 3d images
Gao et al. Variable exponent regularization approach for blur kernel estimation of remote sensing image blind restoration
Panin Mutual information for multi-modal, discontinuity-preserving image registration
KR20170037804A (en) Robust visual odometry system and method to irregular illumination changes
Molini et al. Deep learning for super-resolution of unregistered multi-temporal satellite images
Ghosh et al. Super-resolution mosaicing of unmanned aircraft system (UAS) surveillance video frames
Palaniappan et al. Non-rigid motion estimation using the robust tensor method
Sahay et al. Shape extraction of low‐textured objects in video microscopy
Camargo et al. GPU-CPU implementation for super-resolution mosaicking of unmanned aircraft system (UAS) surveillance video
WO2020110738A1 (en) Motion vector generation device, projection image generation device, motion vector generation method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180810