CN108389171A - Light field deblurring and depth estimation method based on joint estimation of blur variables - Google Patents

Light field deblurring and depth estimation method based on joint estimation of blur variables

Info

Publication number
CN108389171A
Authority
CN
China
Prior art keywords
image
sub
light field
fuzzy
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810189606.2A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201810189606.2A
Publication of CN108389171A
Withdrawn legal status (current)

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes a light field deblurring and depth estimation method based on joint estimation of blur variables. Its main components are the sub-aperture images, the light field blur model, the latent image update, and the camera motion and depth map update. The process is as follows: the center view of the sub-aperture images and the median of the exposure time are first selected as the reference angular position and timestamp of the latent image; the depth map is initialized from the input sub-aperture images of the light field, and the camera motion is initialized from local linear blur kernels and the initial scene depth; the latent image, depth map, and camera motion are then jointly optimized; finally, the sharp image, light field, depth map, and camera motion are obtained as output. The proposed joint estimation simultaneously achieves high-quality light field deblurring and depth estimation under arbitrary 6-degree-of-freedom camera motion and unconstrained scene depth, substantially improving image sharpness and performing well under general camera motion and varying scene depth.

Description

Light field deblurring and depth estimation method based on joint estimation of blur variables
Technical field
The present invention relates to the field of image restoration, and more particularly to a light field deblurring and depth estimation method based on joint estimation of blur variables.
Background technology
In the digital age, image deblurring is an important branch of image restoration and remains a highly challenging problem with great research value and social impact. Because of shooting conditions, camera operation, and other environmental factors, captured images often appear blurred or distorted, and image information may also be lost or degraded during generation, transmission, recording, and storage due to imperfections in the imaging system, the transmission medium, or the recording equipment. Image deblurring is therefore important in many fields. In security, for example, when a target person or object moves quickly in footage recorded by surveillance equipment, the captured frames are usually blurred; deblurring can produce sharper images and help security staff or police extract useful information. In medical imaging, the complex and changing internal environment of human organs makes captured images hard to read clearly; deblurring can restore a degree of clarity and assist diagnosis and treatment. In astronomical observation, space exploration, and remote sensing, the observed environment is unknown, complex, and variable, so deblurring is needed all the more to help researchers recover clear images and determine environmental information. However, existing image deblurring techniques have difficulty estimating scene depth from a single observation, and still cannot fully handle the non-uniform blur caused by scene depth variation.
The present invention proposes a light field deblurring and depth estimation method based on joint estimation of blur variables. The center view of the sub-aperture images and the median of the exposure time are first selected as the reference angular position and timestamp of the latent image. The depth map is initialized from the input sub-aperture images of the light field, and the camera motion is initialized from local linear blur kernels and the initial scene depth. The latent image, depth map, and camera motion are then jointly optimized, and finally the sharp image, light field, depth map, and camera motion are obtained as output. The proposed joint estimation simultaneously achieves high-quality light field deblurring and depth estimation under arbitrary 6-degree-of-freedom camera motion and unconstrained scene depth, substantially improving image sharpness and performing well under general camera motion and varying scene depth.
Summary of the invention
In view of problems such as the difficulty of estimating scene depth from only a single observation, the purpose of the present invention is to provide a light field deblurring and depth estimation method based on joint estimation of blur variables. The center view of the sub-aperture images and the median of the exposure time are first selected as the reference angular position and timestamp of the latent image; the depth map is initialized from the input sub-aperture images of the light field, and the camera motion is initialized from local linear blur kernels and the initial scene depth; the latent image, depth map, and camera motion are then jointly optimized; finally, the sharp image, light field, depth map, and camera motion are obtained as output.
To solve the above problems, the present invention provides a light field deblurring and depth estimation method based on joint estimation of blur variables, whose main components include:
(1) sub-aperture images;
(2) light field blur model;
(3) latent image update;
(4) camera motion and depth map update.
Regarding the sub-aperture images: a pixel in the four-dimensional light field has four coordinates, (x, y) for the spatial position and (u, v) for the angular position. The light field can be regarded as a set of u × v multi-view images with a narrow baseline, commonly called the sub-aperture images I(x, u), where x = (x, y) and u = (u, v). For each sub-aperture image, the blurred image B(x, u) is the average of the sharp images I_t(x, u) over the exposure interval [t_0, t_1] while the shutter is open. All blurred sub-aperture images are approximated by projecting a single latent image undergoing 3D rigid motion.
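To make the sub-aperture representation concrete, the following minimal sketch (the array layout, function names, and sample sizes are illustrative assumptions, not part of the patent) extracts a sub-aperture image I(x, u) from a 4D light field stored as a NumPy array and forms a blurred view as the temporal average of sharp frames, mirroring the blur model described above.

```python
import numpy as np

def sub_aperture_image(light_field, u, v):
    """Extract the sub-aperture image I(x, u) at angular coordinate (u, v).

    light_field: array of shape (U, V, H, W), i.e. angular coordinates first,
                 then spatial coordinates (an assumed layout).
    """
    return light_field[u, v]

def blurred_sub_aperture(sharp_frames):
    """Blur model: B(x, u) is the temporal average of the sharp images
    I_t(x, u) over the exposure interval [t0, t1]."""
    # sharp_frames: array of shape (M, H, W), M sampled timestamps in [t0, t1]
    return sharp_frames.mean(axis=0)

# Example usage with a synthetic 5x5 light field of 64x64 views.
L = np.random.rand(5, 5, 64, 64)
center = sub_aperture_image(L, 2, 2)   # center view c
print(center.shape)                    # (64, 64)
```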
For this approximation, the center view (c) of the sub-aperture images and the median (t_r) of the exposure time are selected as the reference angular position and timestamp of the latent image. The pixel correspondence from each sub-aperture image to the latent image is then expressed as follows (formula (1)):
w_t(x, u) computes the pixel position warped from u to c and from t to t_r; the pose matrices denote the 6-degree-of-freedom camera poses at the respective angular positions and timestamps; D_t(x, u) is the depth map at timestamp t.
In the proposed model, the blur operator Ψ(·) is defined by approximating the integral form of B(x, u) with the following finite sum (formula (2)):
In formula (2), t_m is the m-th uniformly sampled timestamp in the interval [t_0, t_1].
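As a concrete reading of the finite-sum blur operator Ψ(·), the sketch below averages M warped copies of the latent center-view image. It is an illustrative assumption only: the full depth- and pose-dependent warp w_{t_m}(x, u) of formula (1) is replaced here by a simple per-timestamp 2D translation.

```python
import numpy as np
from scipy.ndimage import shift as warp_shift  # stand-in for the full warp w_t

def blur_operator(latent, flows):
    """Psi(latent): B(x) ~= (1/M) * sum_m latent(w_{t_m}(x)).

    latent: (H, W) latent center-view image.
    flows:  list of M (dy, dx) offsets, a simplified stand-in for the
            depth- and pose-dependent warp w_{t_m}(x, u).
    """
    warped = [warp_shift(latent, f, order=1, mode='nearest') for f in flows]
    return np.mean(warped, axis=0)

# Example: a horizontal motion blur built from 8 uniformly sampled timestamps.
latent = np.zeros((64, 64)); latent[:, 32] = 1.0
flows = [(0.0, dx) for dx in np.linspace(-3, 3, 8)]
blurred = blur_operator(latent, flows)
```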
With this finite sum, only the center-view variables need to be determined; the pose and depth terms that depend on u in the warping function (2) are therefore parameterized by the center-view variables. Since the relative pose P_{c→u} varies over time (formula (3)):
Here exp and log denote the exponential and logarithmic mappings between the Lie group SE(3) and the Lie algebra se(3). To minimize the viewpoint offset of the latent image, the pose at t_m = t_r is assumed to be the identity matrix, and the blurred image is likewise expressed through forward warping and interpolation.
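The exponential and logarithmic maps between SE(3) and se(3) can be sketched generically as follows (function names and the interpolation factor are assumptions). Scaling the se(3) vector of the reference pose and mapping it back to SE(3) is one way a time-varying pose can be parameterized by a single center-view variable, which is the role these maps play above.

```python
import numpy as np
from scipy.linalg import expm, logm

def hat(xi):
    """Map a 6-vector xi = (w, v) in se(3) to its 4x4 matrix form."""
    w, v = xi[:3], xi[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    X = np.zeros((4, 4))
    X[:3, :3] = W
    X[:3, 3] = v
    return X

def se3_exp(xi):
    """exp: se(3) -> SE(3)."""
    return expm(hat(xi))

def se3_log(P):
    """log: SE(3) -> se(3), returned as a 6-vector (w, v)."""
    X = np.real(logm(P))
    return np.array([X[2, 1], X[0, 2], X[1, 0], X[0, 3], X[1, 3], X[2, 3]])

# Pose at an intermediate timestamp t_m, interpolated from the pose at t_r:
xi_ref = np.array([0.0, 0.0, 0.05, 0.1, 0.0, 0.0])   # small rotation + translation
P_ref = se3_exp(xi_ref)
alpha = 0.5                                           # (t_m - t_r) scaling, assumed
P_tm = se3_exp(alpha * se3_log(P_ref))                # identity when alpha == 0
```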
Regarding the light field blur model: to estimate all blur variables in the proposed light field blur model, the latent variables, i.e., the latent image, the depth map, and the camera motion, must be recovered. The energy function is modeled as follows (formula (4)):
The data term enforces brightness consistency between the input blurred light field and the recovered light field; the last two terms are the total variation regularization of the latent image and of the depth map, respectively.
Regarding the energy function modeling: in this energy model, the depth map and camera motion are implicitly contained in the warping function (2). The three latent variables are optimized in an alternating fashion: one variable is minimized while the other variables are held fixed, and formula (4) is optimized for each of the three variables in turn. The L1 terms are optimized approximately using iteratively reweighted least squares (IRLS). The optimization converges within a small number of iterations (fewer than 10).
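The alternating minimization with IRLS can be illustrated by the generic sketch below. It solves a small least-absolute-deviation problem rather than the patent's full energy (4); the matrix A, vector b, and weighting scheme are assumptions chosen only to show how IRLS turns an L1 cost into a sequence of weighted least-squares solves.

```python
import numpy as np

def irls_l1(A, b, n_iter=10, eps=1e-6):
    """Approximately minimize ||A x - b||_1 by iteratively reweighted
    least squares: each iteration solves a weighted L2 problem whose
    weights are 1 / |residual|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)      # IRLS weights
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
        # in the joint scheme, the other latent variables stay fixed here
    return x

# Toy example: robust line fit under an outlier.
A = np.stack([np.arange(20.0), np.ones(20)], axis=1)
b = 2.0 * A[:, 0] + 1.0
b[3] += 50.0                                      # outlier
print(irls_l1(A, b))                              # close to [2, 1]
```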
Regarding the latent image update: the algorithm first updates the latent image appearing in the data term. If the depth map and camera motion remain unchanged, the blur operator (2) reduces to a linear matrix multiplication, and updating the latent image is equivalent to minimizing (4) as follows (formula (5)):
Here the images are written as n-dimensional vectors and the blur operator takes the form of a square matrix, where n is the number of pixels in the center-view sub-aperture image. The total variation regularization acts as a prior that favors a latent image with sharp boundaries and suppresses artifacts.
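With depth and camera motion fixed, the latent image update is a regularized linear inverse problem. The sketch below is an assumption-heavy illustration of that structure: a box filter stands in for the linearized blur matrix, and the TV term is smoothed so plain gradient descent applies; it is not the patent's solver.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tv_grad(img, eps=1e-3):
    """Gradient of a smoothed total-variation prior sum(sqrt(|grad I|^2 + eps))."""
    gx = np.roll(img, -1, axis=1) - img
    gy = np.roll(img, -1, axis=0) - img
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div = (gx / mag - np.roll(gx / mag, 1, axis=1)
           + gy / mag - np.roll(gy / mag, 1, axis=0))
    return -div

def update_latent(blurred, n_iter=100, lam=0.01, step=0.2):
    """Minimize ||K i - b||^2 + lam * TV(i) by gradient descent, where K is
    a fixed blur (a 5x5 box filter stands in for the linearized operator)."""
    K = lambda x: uniform_filter(x, size=5)   # blur; approximately self-adjoint
    i = blurred.copy()
    for _ in range(n_iter):
        grad = 2.0 * K(K(i) - blurred) + lam * tv_grad(i)
        i -= step * grad
    return i

sharp = update_latent(np.random.rand(64, 64))
```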
Regarding the camera motion and depth map update: since formula (2) is a nonlinear function of the depth map and camera motion, it must be approximated in linear form to be computed efficiently. The blur operation (2) is approximated by its first-order expansion. Letting D_0(x, c) and the initial pose denote the initialization variables, formula (2) is approximated as follows (formula (6)):
F is the motion flow generated by the warping function, and the pose increment is a six-dimensional vector on se(3).
With this approximation, formula (4) can be optimized using IRLS, which yields the increments of the current depth map and camera motion. They are updated as follows (formula (7)):
The camera pose is updated through the exponential mapping of the motion vector.
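The incremental update of the depth map and camera motion follows a standard linearize-and-solve (Gauss-Newton) pattern: linearize the residual around the current estimate, solve for an increment, then apply it additively for depth and through the exponential map for the pose. The sketch below uses a hypothetical stacked residual and Jacobian purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

def gauss_newton_step(residual, jacobian, weights):
    """Solve the weighted linearized problem J^T W J delta = -J^T W r."""
    JW = jacobian.T * weights          # multiply each column of J^T by its weight
    H = JW @ jacobian
    g = JW @ residual
    return -np.linalg.solve(H, g)

def hat(xi):
    """se(3) 6-vector to 4x4 matrix form."""
    w, v = xi[:3], xi[3:]
    W = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    X = np.zeros((4, 4)); X[:3, :3] = W; X[:3, 3] = v
    return X

# One joint update of pose (6 dof) and a depth increment for a toy residual.
r = np.random.randn(100)               # stacked brightness residuals (assumed)
J = np.random.randn(100, 7)            # d r / d [xi, depth] (assumed Jacobian)
w = 1.0 / (np.abs(r) + 1e-6)           # IRLS weights for the L1 data term
delta = gauss_newton_step(r, J, w)
P = np.eye(4)
P = expm(hat(delta[:6])) @ P           # pose updated through the exponential map
depth_increment = delta[6]             # depth updated additively
```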
Regarding the camera motion: the depth map is first initialized from the input sub-aperture images of the light field. Assuming the camera does not move, minimizing formula (4) yields the initial depth map; in this case, minimizing formula (4) becomes a simple multi-view stereo matching problem. The camera motion is then initialized from local linear blur kernels and the initial scene depth.
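Minimizing (4) with the camera assumed static amounts to multi-view stereo matching across the sub-aperture images. The sketch below shows the winner-takes-all structure of such a depth initialization under strong simplifying assumptions: views are related by pure translational shears proportional to a candidate disparity, which stands in for inverse depth.

```python
import numpy as np
from scipy.ndimage import shift

def init_depth(light_field, disparities):
    """Winner-takes-all disparity per pixel from a plane sweep.

    light_field: (U, V, H, W) sub-aperture images.
    disparities: 1D array of candidate disparities (proxies for inverse depth).
    """
    U, V, H, W = light_field.shape
    uc, vc = U // 2, V // 2
    center = light_field[uc, vc]
    cost = np.zeros((len(disparities), H, W))
    for k, d in enumerate(disparities):
        for u in range(U):
            for v in range(V):
                # shear each view toward the center view by d * angular offset
                shifted = shift(light_field[u, v], (d * (u - uc), d * (v - vc)),
                                order=1, mode='nearest')
                cost[k] += np.abs(shifted - center)
    return disparities[np.argmin(cost, axis=0)]   # (H, W) initial disparity map

D0 = init_depth(np.random.rand(5, 5, 32, 32), np.linspace(-2, 2, 9))
```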
Regarding this initialization: the local linear blur kernels of B(x, c) are estimated first. Then the pixel coordinates moved by the linear kernels are fitted to the coordinates re-projected through the warping function, as follows (formula (8)):
Here x_i is a sampled pixel location and l(x_i) is the point to which x_i is moved by the endpoint of its linear kernel. The camera motion is obtained by fitting the re-projected positions of x_i to the positions given by l(·). Since the scene depth is fixed at the initial depth map, the camera motion is the only variable. Random sample consensus (RANSAC) is used to find the camera motion that best describes the pixels' linear kernels; n, the number of random samples, is always 4.
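The RANSAC search for the camera motion that best explains the local linear blur kernels can be outlined as follows. Only the 4-sample hypothesize-and-verify loop is concrete; the fit_motion and project callables are hypothetical placeholders for fitting a 6-degree-of-freedom motion to four correspondences and re-projecting pixels at the fixed initial depth.

```python
import numpy as np

def ransac_camera_motion(points, kernel_endpoints, fit_motion, project,
                         n_iter=500, thresh=1.0, n_sample=4):
    """Hypothesize a camera motion from n_sample = 4 pixels and keep the
    hypothesis with the most inliers.

    points:           (N, 2) sampled pixel positions x_i.
    kernel_endpoints: (N, 2) endpoints l(x_i) of the local linear blur kernels.
    fit_motion:       callable fitting a 6-dof motion to 4 correspondences (assumed).
    project:          callable re-projecting pixels under a motion at the
                      fixed initial depth (assumed).
    """
    rng = np.random.default_rng(0)
    best_motion, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=n_sample, replace=False)
        motion = fit_motion(points[idx], kernel_endpoints[idx])
        err = np.linalg.norm(project(points, motion) - kernel_endpoints, axis=1)
        inliers = int(np.sum(err < thresh))
        if inliers > best_inliers:
            best_motion, best_inliers = motion, inliers
    return best_motion
```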
Brief description of the drawings
Fig. 1 is the system flowchart of the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention.
Fig. 2 illustrates the joint estimation of the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention.
Fig. 3 is an example of iterative joint estimation for the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention.
Specific embodiments
It should be noted that, where no conflict arises, the embodiments of the present application and the features in those embodiments may be combined with one another. The invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is the system flowchart of the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention. It mainly comprises the sub-aperture images, the light field blur model, the latent image update, and the camera motion and depth map update.
The main process is as follows: the center view of the sub-aperture images and the median of the exposure time are first selected as the reference angular position and timestamp of the latent image; the depth map is initialized from the input sub-aperture images of the light field, and the camera motion is initialized from local linear blur kernels and the initial scene depth; the latent image, depth map, and camera motion are then jointly optimized; finally, the sharp image, light field, depth map, and camera motion are obtained as output. A structural sketch of this pipeline is given below.
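The overall flow of Fig. 1 can be summarized structurally as below. Every callable named here is a placeholder for the corresponding step described in this document; the sketch only fixes the order of operations (initialize depth, initialize motion, then alternate the updates) and is not an implementation of the patent.

```python
def joint_estimation(blurred_lf, init_depth, init_motion, update_latent,
                     update_depth_motion, n_outer=10):
    """Overall pipeline of Fig. 1: initialization followed by alternating
    updates of the latent image, depth map, and camera motion.

    All callables are placeholders for the steps described in this document.
    """
    depth = init_depth(blurred_lf)                  # multi-view stereo initialization
    motion = init_motion(blurred_lf, depth)         # RANSAC on local linear blur kernels
    latent = None
    for _ in range(n_outer):                        # converges in a small number of iterations
        latent = update_latent(blurred_lf, depth, motion)           # formula (5)
        depth, motion = update_depth_motion(blurred_lf, latent,     # formulas (6) and (7)
                                            depth, motion)
    return latent, depth, motion
```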
A pixel in the four-dimensional light field has four coordinates, (x, y) for the spatial position and (u, v) for the angular position. The light field can be regarded as a set of u × v multi-view images with a narrow baseline, commonly called the sub-aperture images I(x, u), where x = (x, y) and u = (u, v). For each sub-aperture image, the blurred image B(x, u) is the average of the sharp images I_t(x, u) over the exposure interval [t_0, t_1] while the shutter is open. All blurred sub-aperture images are approximated by projecting a single latent image undergoing 3D rigid motion.
The center view (c) of the sub-aperture images and the median (t_r) of the exposure time are selected as the reference angular position and timestamp of the latent image. The pixel correspondence from each sub-aperture image to the latent image is then expressed as follows (formula (1)):
w_t(x, u) computes the pixel position warped from u to c and from t to t_r; the pose matrices denote the 6-degree-of-freedom camera poses at the respective angular positions and timestamps; D_t(x, u) is the depth map at timestamp t.
In the proposed model, the blur operator Ψ(·) is defined by approximating the integral form of B(x, u) with the following finite sum (formula (2)):
In formula (2), t_m is the m-th uniformly sampled timestamp in the interval [t_0, t_1].
With this finite sum, only the center-view variables need to be determined; the pose and depth terms that depend on u in the warping function (2) are parameterized by the center-view variables. Since the relative pose P_{c→u} varies over time (formula (3)):
Here exp and log denote the exponential and logarithmic mappings between the Lie group SE(3) and the Lie algebra se(3). To minimize the viewpoint offset of the latent image, the pose at t_m = t_r is assumed to be the identity matrix, and the blurred image is likewise expressed through forward warping and interpolation.
To estimate all blur variables in the proposed light field blur model, the latent variables, i.e., the latent image, the depth map, and the camera motion, must be recovered. The energy function is modeled as follows (formula (4)):
The data term enforces brightness consistency between the input blurred light field and the recovered light field; the last two terms are the total variation regularization of the latent image and of the depth map, respectively.
In this energy model, the depth map and camera motion are implicitly contained in the warping function (2). The three latent variables are optimized in an alternating fashion: one variable is minimized while the other variables are held fixed, and formula (4) is optimized for each of the three variables in turn. The L1 terms are optimized approximately using iteratively reweighted least squares (IRLS). The optimization converges within a small number of iterations (fewer than 10).
Fig. 2 illustrates the joint estimation of the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention. The algorithm jointly estimates the latent image, depth map, and camera motion from a single light field. Panel (a) is the center view of the blurred light field sub-aperture images; panel (b) is the deblurred result of (a); panel (c) is the estimated depth map; panel (d) is the camera motion path and orientation (6 degrees of freedom).
The algorithm first updates the latent image appearing in the data term. If the depth map and camera motion remain unchanged, the blur operator (2) reduces to a linear matrix multiplication, and updating the latent image is equivalent to minimizing (4) as follows (formula (5)):
Here the images are written as n-dimensional vectors and the blur operator takes the form of a square matrix, where n is the number of pixels in the center-view sub-aperture image. The total variation regularization acts as a prior that favors a latent image with sharp boundaries and suppresses artifacts.
Since formula (2) is a nonlinear function of the depth map and camera motion, it must be approximated in linear form to be computed efficiently. The blur operation (2) is approximated by its first-order expansion. Letting D_0(x, c) and the initial pose denote the initialization variables, formula (2) is approximated as follows (formula (6)):
F is the motion flow generated by the warping function, and the pose increment is a six-dimensional vector on se(3).
With this approximation, formula (4) can be optimized using IRLS, which yields the increments of the current depth map and camera motion. They are updated as follows (formula (7)):
The camera pose is updated through the exponential mapping of the motion vector.
First, the depth map is initialized from the input sub-aperture images of the light field. Assuming the camera does not move, minimizing formula (4) yields the initial depth map; in this case, minimizing formula (4) becomes a simple multi-view stereo matching problem. The camera motion is then initialized from local linear blur kernels and the initial scene depth.
The local linear blur kernels of B(x, c) are estimated first. Then the pixel coordinates moved by the linear kernels are fitted to the coordinates re-projected through the warping function, as follows (formula (8)):
Here x_i is a sampled pixel location and l(x_i) is the point to which x_i is moved by the endpoint of its linear kernel. The camera motion is obtained by fitting the re-projected positions of x_i to the positions given by l(·). Since the scene depth is fixed at the initial depth map, the camera motion is the only variable. Random sample consensus (RANSAC) is used to find the camera motion that best describes the pixels' linear kernels; n, the number of random samples, is always 4.
Fig. 3 is an example of iterative joint estimation for the light field deblurring and depth estimation method based on joint estimation of blur variables according to the present invention. The proposed method converges within a small number of iterations. Panels (a) and (b) show the input blurred image and the deblurring result obtained through the iterations. Panels (c) and (d) show the initial blurred depth map and the depth estimation result obtained through the iterations. The proposed joint estimation simultaneously achieves high-quality light field deblurring and depth estimation under arbitrary 6-degree-of-freedom camera motion and unconstrained scene depth, substantially improving image sharpness and performing well under general camera motion and varying scene depth.
It will be apparent to those skilled in the art that the present invention is not limited to the details of the above embodiments and may be realized in other specific forms without departing from the spirit or scope of the invention. Moreover, various improvements and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the invention. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and variations falling within the scope of the invention.

Claims (10)

1. A light field deblurring and depth estimation method based on joint estimation of blur variables, characterized by mainly comprising: sub-aperture images (1); a light field blur model (2); a latent image update (3); and a camera motion and depth map update (4).
2. The sub-aperture images (1) according to claim 1, characterized in that a pixel in the four-dimensional light field has four coordinates, (x, y) for the spatial position and (u, v) for the angular position; the light field can be regarded as a set of u × v multi-view images with a narrow baseline, commonly called the sub-aperture images I(x, u), where x = (x, y) and u = (u, v); for each sub-aperture image, the blurred image B(x, u) is the average of the sharp images I_t(x, u) over the exposure interval [t_0, t_1] while the shutter is open; and all blurred sub-aperture images are approximated by projecting a single latent image undergoing 3D rigid motion.
3. The approximation according to claim 2, characterized in that the center view (c) of the sub-aperture images and the median (t_r) of the exposure time are selected as the reference angular position and timestamp of the latent image; the pixel correspondence from each sub-aperture image to the latent image is then expressed as follows:
w_t(x, u) computes the pixel position warped from u to c and from t to t_r; the pose matrices denote the 6-degree-of-freedom camera poses at the respective angular positions and timestamps; D_t(x, u) is the depth map at timestamp t;
in the proposed model, the blur operator Ψ(·) is defined by approximating the integral form of B(x, u) with the following finite sum:
in formula (2), t_m is the m-th uniformly sampled timestamp in the interval [t_0, t_1].
4. The finite sum according to claim 3, characterized in that only the center-view variables need to be determined, that is, the pose and depth terms that depend on u in the warping function (2) are parameterized by the center-view variables; since the relative pose P_{c→u} varies over time,
where exp and log denote the exponential and logarithmic mappings between the Lie group SE(3) and the Lie algebra se(3); to minimize the viewpoint offset of the latent image, the pose at t_m = t_r is assumed to be the identity matrix; and the blurred image is likewise expressed through forward warping and interpolation.
5. The light field blur model (2) according to claim 1, characterized in that, in order to estimate all blur variables in the proposed light field blur model, the latent variables, i.e., the latent image, the depth map, and the camera motion, must be recovered; the energy function is modeled as follows:
the data term enforces brightness consistency between the input blurred light field and the recovered light field; the last two terms are the total variation regularization of the latent image and of the depth map, respectively.
6. The energy function modeling according to claim 5, characterized in that, in the energy model, the depth map and camera motion are implicitly contained in the warping function (2); the three latent variables are optimized in an alternating fashion, minimizing one variable while the other variables are held fixed, and formula (4) is optimized for each of the three variables in turn; the L1 terms are optimized approximately using iteratively reweighted least squares (IRLS); and the optimization converges within a small number of iterations (fewer than 10).
7. The latent image update (3) according to claim 1, characterized in that the algorithm first updates the latent image appearing in the data term; if the depth map and camera motion remain unchanged, the blur operator (2) reduces to a linear matrix multiplication, and updating the latent image is equivalent to minimizing (4) as follows:
the images are written as n-dimensional vectors and the blur operator takes the form of a square matrix, where n is the number of pixels in the center-view sub-aperture image; the total variation regularization acts as a prior that favors a latent image with sharp boundaries and suppresses artifacts.
8. The camera motion and depth map update (4) according to claim 1, characterized in that, since formula (2) is a nonlinear function of the depth map and camera motion, it must be approximated in linear form to be computed efficiently; the blur operation (2) is approximated by its first-order expansion; letting D_0(x, c) and the initial pose denote the initialization variables, formula (2) is approximated as follows:
F is the motion flow generated by the warping function, and the pose increment is a six-dimensional vector on se(3);
with this approximation, formula (4) can be optimized using IRLS, which yields the increments of the current depth map and camera motion; they are updated as follows:
the camera pose is updated through the exponential mapping of the motion vector.
9. The camera motion according to claim 8, characterized in that, first, the depth map is initialized from the input sub-aperture images of the light field; assuming the camera does not move, minimizing formula (4) yields the initial depth map, and minimizing formula (4) becomes a simple multi-view stereo matching problem; the camera motion is then initialized from local linear blur kernels and the initial scene depth.
10. The initialization according to claim 9, characterized in that the local linear blur kernels of B(x, c) are estimated first; then the pixel coordinates moved by the linear kernels are fitted to the coordinates re-projected through the warping function, as follows:
x_i is a sampled pixel location and l(x_i) is the point to which x_i is moved by the endpoint of its linear kernel; the camera motion is obtained by fitting the re-projected positions of x_i to the positions given by l(·); since the scene depth is fixed at the initial depth map, the camera motion is the only variable; random sample consensus (RANSAC) is used to find the camera motion that best describes the pixels' linear kernels; and n, the number of random samples, is always 4.
CN201810189606.2A 2018-03-08 2018-03-08 Light field deblurring and depth estimation method based on joint estimation of blur variables Withdrawn CN108389171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810189606.2A CN108389171A (en) 2018-03-08 2018-03-08 Light field deblurring and depth estimation method based on joint estimation of blur variables

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810189606.2A CN108389171A (en) 2018-03-08 2018-03-08 Light field deblurring and depth estimation method based on joint estimation of blur variables

Publications (1)

Publication Number Publication Date
CN108389171A true CN108389171A (en) 2018-08-10

Family

ID=63067024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810189606.2A Withdrawn CN108389171A (en) 2018-03-08 2018-03-08 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable

Country Status (1)

Country Link
CN (1) CN108389171A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706346A (en) * 2019-09-17 2020-01-17 北京优科核动科技发展有限公司 Space-time joint optimization reconstruction method and system
CN111179333A (en) * 2019-12-09 2020-05-19 天津大学 Defocus blur kernel estimation method based on binocular stereo vision
CN111191618A (en) * 2020-01-02 2020-05-22 武汉大学 KNN scene classification method and system based on matrix group
CN112150526A (en) * 2020-07-27 2020-12-29 浙江大学 Light field image depth estimation method based on deep learning
CN112184731A (en) * 2020-09-28 2021-01-05 北京工业大学 Multi-view stereo depth estimation method based on adversarial training

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105704371A (en) * 2016-01-25 2016-06-22 深圳市未来媒体技术研究院 Light field refocusing method
CN106803892A (en) * 2017-03-13 2017-06-06 中国科学院光电技术研究所 Light field high-resolution imaging method based on optical field measurement

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105704371A (en) * 2016-01-25 2016-06-22 深圳市未来媒体技术研究院 Light field refocusing method
CN106803892A (en) * 2017-03-13 2017-06-06 中国科学院光电技术研究所 Light field high-resolution imaging method based on optical field measurement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONGWOO LEE: "Joint Blind Motion Deblurring and Depth Estimation of Light Field", arXiv:1711.10918v1 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706346A (en) * 2019-09-17 2020-01-17 北京优科核动科技发展有限公司 Space-time joint optimization reconstruction method and system
CN110706346B (en) * 2019-09-17 2022-11-15 浙江荷湖科技有限公司 Space-time joint optimization reconstruction method and system
CN111179333A (en) * 2019-12-09 2020-05-19 天津大学 Defocus blur kernel estimation method based on binocular stereo vision
CN111179333B (en) * 2019-12-09 2024-04-26 天津大学 Defocus blur kernel estimation method based on binocular stereo vision
CN111191618A (en) * 2020-01-02 2020-05-22 武汉大学 KNN scene classification method and system based on matrix group
CN112150526A (en) * 2020-07-27 2020-12-29 浙江大学 Light field image depth estimation method based on deep learning
CN112184731A (en) * 2020-09-28 2021-01-05 北京工业大学 Multi-view stereo depth estimation method based on adversarial training

Similar Documents

Publication Publication Date Title
CN108389171A (en) Light field deblurring and depth estimation method based on joint estimation of blur variables
Reinbacher et al. Real-time panoramic tracking for event cameras
Gallego et al. Accurate angular velocity estimation with an event camera
JP6563609B2 (en) Efficient canvas view generation from intermediate views
Im et al. All-around depth from small motion with a spherical panoramic camera
US10334168B2 (en) Threshold determination in a RANSAC algorithm
Chin et al. Star tracking using an event camera
CN110945565A (en) Dense visual SLAM using probabilistic bin maps
WO2012083982A1 (en) Method for determining a parameter set designed for determining the pose of a camera and/or for determining a three-dimensional structure of the at least one real object
CN105069753B (en) A kind of shake Restoration method of blurred image of facing moving terminal
US11887256B2 (en) Deferred neural rendering for view extrapolation
US11049313B2 (en) Rendering an object
Kim et al. Real-time panorama canvas of natural images
Dellaert et al. Super-resolved texture tracking of planar surface patches
CN111192308B (en) Image processing method and device, electronic equipment and computer storage medium
Xian et al. Neural Lens Modeling
Peng et al. PDRF: progressively deblurring radiance field for fast scene reconstruction from blurry images
JP6341540B2 (en) Information terminal device, method and program
Gilbert et al. Inpainting of wide-baseline multiple viewpoint video
Cao et al. Make object connect: A pose estimation network for UAV images of the outdoor scene
Lin et al. Reinforcement learning-based image exposure reconstruction for homography estimation
KR102298098B1 (en) Method and Apparatus for Generating 3D Model through Tracking of RGB-D Camera
Lee et al. High dynamic range imaging via truncated nuclear norm minimization of low-rank matrix
Guo et al. Single Image based Fog Information Estimation for Virtual Objects in A Foggy Scene
Toklu et al. 2-D mesh-based synthetic transfiguration of an object with occlusion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180810

WW01 Invention patent application withdrawn after publication