CN108492239B - Structured observation and sparse representation collaborative optimization method for light field camera - Google Patents


Info

Publication number: CN108492239B (application CN201810222647.7A)
Authority: CN (China)
Prior art keywords: light field, observation, dictionary, structured, matrix
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN108492239A
Inventors: 尹宝才, 宿建卓, 施云惠, 丁文鹏
Current Assignee: Beijing University of Technology
Original Assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Priority to CN201810222647.7A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/513Sparse representations

Abstract

The invention discloses a collaborative optimization method of structured observation and sparse representation for light field cameras, used to compress four-dimensional light field data. Working within the compressed sensing framework, the method comprehensively analyzes the distribution characteristics of light field signals both in the space-angle coordinate system and within the two-dimensional image, and proposes a compressed sensing model for light field images.

Description

Structured observation and sparse representation collaborative optimization method for light field camera
Technical Field
The invention relates to a structured observation and sparse representation collaborative optimization method for a light field camera, which can be applied to a four-dimensional light field signal compression reconstruction scene.
Background
Light field photography, which records all the light rays passing through a camera in a three-dimensional scene, has produced a number of important research results over the last two decades. In recent years, new applications of light fields have been proposed, including the synthesis of new viewpoints [1], three-dimensional depth mapping and shape estimation [2], and biomedical microscopy that uses light field techniques to improve depth of focus [3].
Commercial light field cameras are ubiquitous today. Light field imaging initially relied on cumbersome optical devices, such as moving robotic arms and camera arrays, to obtain densely sampled angular views [4, 5]; however, these devices either cannot operate in real time or must process large amounts of data. In addition, the plenoptic camera designed by Adelson et al. in 1992 acquires multi-view images using a main lens, a microlens array, and the sensor plane, but a problem remains: a light field camera based on a microlens array records the angular and spatial information of light rays on a two-dimensional sensor through spatial multiplexing, so the limited resolution of the sensor forces a compromise between spatial and angular resolution, and higher angular resolution is obtained at the cost of spatial resolution.
To reduce the dimensionality problem in light field sampling, we consider compressed sensing theory, which shows that when a signal is sparse in some domain it can be decoded from fewer observations than the Nyquist sampling theorem requires. Processing light field data with compressed sensing differs from the traditional "what you see is what you get" imaging of a camera: the light field must be obtained through a corresponding digital processing algorithm, so light field imaging is a computational imaging technique comprising two processes, acquisition of the light field and processing of the light field data. Several researchers have done meaningful work on acquiring light field images within the compressed sensing framework. Ashok et al. were the first to exploit the spatial and angular cross-correlation of the light field, showing that, compared with a non-compressed scheme, the number of measurements needed to capture the light field is reduced by a factor of four and the capture time by a factor of three [6]. In 2012, Babacan et al. proposed a new camera sampling model based on compressed sensing theory, placing a non-refractive, encodable mask in front of the camera aperture and reconstructing the light field image with a Bayesian method [7].
In 2013, Marwah et al. [8] proposed a light field camera framework based on compressed sensing: by introducing compressed sensing theory, the light field is compressively acquired and the information of the whole light field is recorded on a single image, greatly reducing the amount of data to be stored and transmitted; an over-complete light field dictionary is learned from light field training samples, and compressed sensing theory is used to optimize the acquired coded image and restore the original light field at high quality and high resolution. However, the light field dictionary has high dimensionality and a large number of atoms, so dictionary training and sparse coding are inefficient, and a structured dictionary therefore needs to be constructed according to the characteristics of light field data. On the basis of the performance analysis of the traditional imaging mode, the invention comprehensively analyzes the performance of compressive imaging, taking into account the influence of compressive sampling on image reconstruction quality, and optimizes compressed light field imaging based on the camera model of Babacan [7].
Disclosure of Invention
Based on compressed sensing theory, the invention provides a collaborative optimization method of structured observation and sparse representation for light field cameras, used to compress four-dimensional light field data.
In compressed sensing theory, the acquisition and reconstruction of signals are governed by the observation matrix and the dictionary, and optimizing both can maximize the amount of information captured. Traditional methods for reconstructing four-dimensional light field signals with compressed sensing mostly fix the observation matrix and then optimize the dictionary, or fix the dictionary and then optimize the observation matrix; the collaborative optimization of the two is not considered. In theory, building an incoherence model that couples the observation matrix and the dictionary can greatly improve the reconstruction accuracy of the four-dimensional light field signal and maximize the acquisition and reconstruction capability for the light field. The invention adopts the compressed sensing framework, comprehensively analyzes the distribution characteristics of light field signals in the space-angle coordinate system and within the two-dimensional image, and proposes a compressed sensing model for light field images; this model not only fully exploits the similarity relations in light field image data but also improves reconstruction accuracy through the collaborative optimization of the observation matrix and the dictionary, thereby improving light field acquisition and reconstruction capability.
Drawings
Fig. 1 is a schematic diagram of the simulated light field structured sampling process.
Detailed Description
The invention provides a light field camera-oriented collaborative optimization method for structured observation and sparse representation, which is used for compressing four-dimensional light field data.
For ease of understanding, a conventional observation model of the four-dimensional light field is presented first. Let the four-dimensional light field image be denoted I, where I_j is the light field image of the jth view. In view of the difficulty and running time of the simulation experiments, the invention experiments on light field image blocks rather than on whole light field pictures. As shown in Fig. 1,

X = {x^(1), x^(2), ..., x^(N)}

is the set of light field image blocks of the N views; the invention samples and observes these image blocks to obtain Y, and each x^(j) is written in vectorized form as a vector of length s·s, where s × s is the size of the sampled light field image block. Likewise, the set

Y = {y_1, y_2, ..., y_M}

is the light field sample observation set; then y_i can be expressed as:

y_i = Σ_{j=1}^{N} a_ij x^(j)   (1)
where 0 ≤ a_ij ≤ 1, a_ij is the light radiance parameter with which the jth light field view contributes to the ith observation, and M is the number of observations. The goal of light field reconstruction is to recover the light field x from the sampled observation image blocks y, where y = Px, P is the observation matrix, and y_i = P_i x. Considering the similarity between light field views and the inherent similarity within each view, x can be written as

x = [x^(1)T, x^(2)T, ..., x^(N)T]^T,

where x^(j) is the vectorized block of the jth view. After the M observations,

y = [y_1^T, y_2^T, ..., y_M^T]^T.

The ith observation P_i can be expressed as

P_i = [a_{i,1}E, a_{i,2}E, ..., a_{i,N}E],

where E is the (s·s) × (s·s) identity matrix, i.e. the vector q_i = [a_{i,1}, ..., a_{i,k}, ..., a_{i,N}] holds the per-view weights of the ith observation, each repeated over the s × s pixels of a block. Since y = Px with y = [y_1, y_2, ..., y_M]^T, P can be represented as P = [P_1^T, ..., P_M^T]^T, i = 1, ..., M.
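A minimal numerical sketch of this structured observation model (the sizes and weights below are illustrative assumptions, not values from the patent): each row block of P carries one weight a_ij per view, so P is the Kronecker product of the weight matrix A = (a_ij) with the identity, and y = Px reduces to weighted sums of the view blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 9   # number of views (a 3 x 3 light field)
s = 6   # spatial block size, so each view block has s*s pixels
M = 4   # number of coded observations

# One vectorized s*s block per view, stacked into x (length N*s*s).
x_views = [rng.random(s * s) for _ in range(N)]
x = np.concatenate(x_views)

# Structured observation: P_i = [a_i1*E, ..., a_iN*E] with E the identity,
# i.e. P = kron(A, E), where 0 <= a_ij <= 1 weights view j in observation i.
A = rng.uniform(0.0, 1.0, size=(M, N))
P = np.kron(A, np.eye(s * s))          # shape (M*s*s, N*s*s)

y = P @ x                              # the M coded observations, stacked

# Sanity check: observation 0 is the a_0j-weighted sum of the view blocks.
y0_direct = sum(A[0, j] * x_views[j] for j in range(N))
print(np.allclose(y[: s * s], y0_direct))  # True
```

The Kronecker structure means only the M × N weight matrix A needs to be optimized, not a full dense observation matrix.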
For each light field sample, its sparse representation is x = Dα. Since our method is similar to [9], the dictionary and the sparse coefficients can be expressed as D = diag{Φ, ..., Φ} and α = [α_1, ..., α_N], where D is composed of N copies of Φ.
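The block structure D = diag{Φ, ..., Φ} can be sketched as follows (the sizes and the test signal are illustrative assumptions): every view shares the same per-view dictionary Φ, so D is the Kronecker product of the N × N identity with Φ, and x = Dα decomposes view by view.

```python
import numpy as np

rng = np.random.default_rng(1)

N, s, K = 4, 4, 32                    # views, block side, atoms per view
Phi = rng.standard_normal((s * s, K))
Phi /= np.linalg.norm(Phi, axis=0)    # unit-norm atoms

# Structured dictionary D = diag{Phi, ..., Phi}: N copies of Phi on
# the block diagonal, one per view.
D = np.kron(np.eye(N), Phi)           # shape (N*s*s, N*K)

# A toy light field sample that is exactly sparse in D.
alpha = np.zeros(N * K)
alpha[[3, 40, 70, 100]] = [1.0, -0.5, 2.0, 0.7]
x = D @ alpha

# View j of x is Phi applied to the j-th coefficient slice alpha_j.
x_view0 = Phi @ alpha[:K]
print(np.allclose(x[: s * s], x_view0))  # True
```

Sharing one Φ across views keeps the number of learned parameters small while the block-diagonal layout preserves the per-view decomposition x^(j) = Φ α_j.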
Given a set of training samples X = (X_1, ..., X_L), X ∈ R^{O×L}, where O = N × s × s and L is the number of samples, the sparse coefficients of all samples form a matrix B with X = DB. Since each image block can be expressed through a dictionary and an observation matrix, and considering the incoherence principle for jointly optimizing the observation P and the dictionary D in compressed sensing theory, an algorithm that co-optimizes the light field structured observation and the structured dictionary is proposed. The reconstruction model can be expressed as:

min_{P,D,B} ||X − DB||_F^2 + λ_1||Y − PDB||_F^2 + μ{PD}   s.t. ||b_k||_0 ≤ T_0   (2)

where X is the light field sample set, P is the light-field-camera-based observation matrix, Y is the sample observation data set, μ{PD} is the correlation between the observation matrix P and the dictionary D, λ_1 is the parameter controlling the error term ||Y − PDB||_F^2, b_k is the kth column of the sparse matrix B, and T_0 is the sparsity.
The solution of model (2) can be split into two parts, dictionary learning and observation learning, which are solved alternately and iteratively until the stopping condition is met. The detailed solving steps are as follows:
1. dictionary learning
The structured observation matrix P is fixed during dictionary learning, and the subproblem for the structured dictionary D and the sparse coefficients B is:

min_{D,B} ||X − DB||_F^2 + λ_1||Y − PDB||_F^2   s.t. ||b_k||_0 ≤ T_0   (3)

Transforming the first term of the above equation, the model becomes

min_{D,B} || [X; √λ_1 Y] − [E; √λ_1 P] DB ||_F^2   s.t. ||b_k||_0 ≤ T_0   (4)

where E is the identity matrix. Using the alternating direction method (ADMM), an auxiliary variable W is introduced, and Equation (4) can be rewritten as three subproblems: a sparse-coding subproblem (5) for B, a least-squares subproblem (6) for W, and a dictionary-update subproblem (7) for D.
Equation (5) can be solved with the OMP algorithm to obtain B, Equation (6) can be solved by taking derivatives to obtain W, and substituting the solved B and W into Equation (7) yields the structured dictionary D.
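The OMP step used for Equation (5) can be sketched generically as below. This is a plain OMP on a toy orthonormal dictionary, chosen so that exact recovery is guaranteed; the dictionary, sizes, and signal are illustrative assumptions, not the patent's stacked dictionary.

```python
import numpy as np

def omp(A, y, T0):
    """Orthogonal Matching Pursuit: greedily select up to T0 atoms of A
    and re-fit their coefficients by least squares at every step."""
    residual = y.copy()
    support = []
    coef = np.zeros(A.shape[1])
    for _ in range(T0):
        k = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - A @ coef
    return coef

rng = np.random.default_rng(2)
# Orthonormal toy dictionary (QR of a Gaussian matrix): on such a
# dictionary OMP recovers any T0-sparse code exactly.
A, _ = np.linalg.qr(rng.standard_normal((64, 64)))
true = np.zeros(64)
true[[5, 20, 50]] = [1.5, -2.0, 1.0]
b = omp(A, A @ true, T0=3)
print(np.allclose(b, true))  # True
```

In the patent's setting the same greedy loop would be run column by column with the stacked dictionary from Equation (4) in place of A.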
2. Observation learning
In the observation learning process, fixing a structured dictionary D, solving a model of a structured observation matrix P, and solving:
Figure BDA0001600400240000042
in view of the structural properties, the first term of the above equation can be converted into:
Figure BDA0001600400240000043
the first term of equation (8) can be expressed as
Figure BDA0001600400240000044
For the ith observation, the second term of formula (8) can be expressed as
II=μ{PiD}(11)
Since C is PD, can be
Figure BDA0001600400240000045
Wherein, CiFrom N to N
Figure BDA0001600400240000049
Composition of CiIs shown as
Figure BDA0001600400240000046
Matrix CiCan be expressed as: .
Figure BDA0001600400240000047
Since there is very high similarity between light-field images, we can assume that u ═ v ═ 1.., s × s, then equation (18)
Figure BDA0001600400240000048
By differentiating Equations (10) and (11), q can be obtained, and hence the structured observation matrix P.
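A small sketch of the gradient step behind this observation update (the incoherence surrogate, step size, and sizes are assumptions for illustration): one common surrogate for μ{PD} is the off-diagonal energy of the Gram matrix of the effective dictionary C = PD, whose gradient with respect to P is 4 P D (C^T C − E) D^T; a backtracking step keeps the descent monotone.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m, K = 20, 8, 30                 # signal dim, observations, atoms
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
P = rng.standard_normal((m, n))     # observation matrix to optimize

def incoherence_energy(P, D):
    """Surrogate for mu{PD}: how far the Gram matrix of C = P D is
    from the identity (small value => near-incoherent P and D)."""
    G = (P @ D).T @ (P @ D)
    return float(np.linalg.norm(G - np.eye(K)) ** 2)

f = f0 = incoherence_energy(P, D)
for _ in range(100):
    G = (P @ D).T @ (P @ D)
    grad = 4.0 * P @ D @ (G - np.eye(K)) @ D.T  # d/dP ||G - E||_F^2
    step = 1e-3
    # Backtracking: halve the step until the energy actually decreases.
    while incoherence_energy(P - step * grad, D) >= f and step > 1e-12:
        step *= 0.5
    P = P - step * grad
    f = incoherence_energy(P, D)

print(f < f0)  # True: the PD incoherence surrogate has decreased
```

The patent additionally keeps the data-fit term λ_1||Y − PDB||^2 in the objective; the same gradient-descent loop applies with that term's gradient added.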
To verify the effectiveness of the proposed scheme, the structured optimization model is compared with the existing random model on databases provided by the Stanford University computational graphics laboratory, using the humvee, stone, truck and dkc data sets. The parameters are chosen as follows: the light field size is 3 × 3 and the sampling block size is 6 × 6, and PSNR is used to evaluate the light field reconstruction quality.
GD denotes a general light field dictionary, SD a structured dictionary, OSD an optimized light field structured dictionary, COSPD the collaborative optimization model proposed by the invention, and RP ordinary random observation.
Table 1: PSNR (dB) reconstructed by different models for 3X 3 light field
[Table 1 is provided as an image in the original document.]
In the experiments, GD is a common light field dictionary, SD is a structured dictionary, and OSD is an optimized structured dictionary; the proposed collaborative optimization algorithm is denoted COSPD. Table 1 gives the PSNR results of the four methods for reconstructing the 3 × 3 light field. Under the stone data set, the reconstruction result of COSPD is superior to that of OSD, with an improvement of 5.53 dB.
Table 2: PSNR (dB) reconstructed by COSPD model with 2 × 2 × 4 × 4 and 2 × 2 × 6 × 6 light fields
[Table 2 is provided as an image in the original document.]
As can be seen from Table 2, for the 2 × 2 light fields the COSPD optimization algorithm attains a higher PSNR with a block size of 4 × 4, and the improvement is more pronounced than with a block size of 6 × 6.
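The PSNR figure of merit used in these tables can be computed as below (the peak value and the toy images are illustrative; the patent does not state its exact PSNR convention):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    a reconstruction; higher means better reconstruction quality."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(rec, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((6, 6))               # a 6 x 6 block, as in the experiments
rec = ref + 1.0                      # uniform error of one gray level
print(round(psnr(ref, rec), 2))      # 48.13
```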
Example 1
The invention provides a light field camera-oriented structured observation and sparse representation collaborative optimization method, which comprises the following steps:
1. Input the N × N (N = 2) light field signals {I_1, ..., I_{N×N}}, arranged in pixel-point-set order as {x^(1), ..., x^(N×N)}.
2. The analysis shows that the image similarity between viewpoints is extremely high, so the view blocks {x^(j)} can be arranged in view order into a single vector x, i.e.

x = [x^(1)T, ..., x^(N×N)T]^T.
3. Through the above structural analysis, the light field observation and the dictionary can be optimized collaboratively. In the algorithm for collaboratively optimizing the light field structured observation and dictionary, a random observation matrix and a common dictionary are given initially; the observation matrix P and the optimized structured dictionary D are then solved step by step by iterating the proposed reconstruction model, and when solving P the result can be optimized with the gradient descent method.
The algorithm for collaboratively optimizing the light field structured observation and dictionary is as follows:

[Algorithm listing provided as an image in the original document.]
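The alternating structure of the algorithm can be sketched as below. This is a simplified stand-in, not the patent's exact procedure: sparse coding is done by a least-squares fit followed by hard thresholding and re-fitting instead of OMP, the dictionary update is a regularized least squares, and the observation update is one gradient step on the data-fit term; all sizes and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n, m, K, L, T0 = 16, 8, 24, 60, 4   # signal dim, obs., atoms, samples, sparsity
X = rng.standard_normal((n, L))     # toy training light field blocks
D = rng.standard_normal((n, K))
D /= np.linalg.norm(D, axis=0)
P = 0.1 * rng.standard_normal((m, n))
Y = P @ X                           # coded observations of the samples

def sparse_code(D, X, T0):
    """Hard-threshold + re-fit sparse coding (a cheap stand-in for OMP)."""
    K, L = D.shape[1], X.shape[1]
    B = np.linalg.lstsq(D, X, rcond=None)[0]
    for k in range(L):
        support = np.argsort(np.abs(B[:, k]))[-T0:]
        b = np.zeros(K)
        b[support] = np.linalg.lstsq(D[:, support], X[:, k], rcond=None)[0]
        B[:, k] = b
    return B

for _ in range(10):
    B = sparse_code(D, X, T0)                                  # (a) coding
    D = X @ B.T @ np.linalg.inv(B @ B.T + 1e-6 * np.eye(K))    # (b) dictionary
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)          #     re-normalize
    R = Y - P @ (D @ B)                                        # (c) observation:
    P += 1e-3 * R @ (D @ B).T                                  #     gradient step

B = sparse_code(D, X, T0)                  # final code for the final dictionary
fit = np.linalg.norm(X - D @ B) ** 2
print(fit < np.linalg.norm(X) ** 2)  # True: the sparse model explains part of X
```

A faithful implementation would replace (a) with OMP on the stacked dictionary of Equation (4) and add the incoherence term μ{PD} to the gradient in (c).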
It should be noted that the above embodiments are intended only to illustrate the technical solutions of the invention, not to limit them. Although the invention is described in detail with reference to the preferred embodiments, those of ordinary skill in the art will understand that modifications and equivalents may be made to the invention without departing from its spirit and scope.
Reference to the literature
[1] A. Levin and F. Durand, "Linear view synthesis using a dimensionality gap light field prior," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 1831–1838.
[2] M. W. Tao, P. P. Srinivasan, S. Hadap, S. Rusinkiewicz, J. Malik, and R. Ramamoorthi, "Shape estimation from shading, defocus, and correspondence using light-field angular coherence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 3, pp. 546–560, 2017.
[3] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, "Light field microscopy," ACM Transactions on Graphics (TOG), vol. 25, pp. 924–934, 2006.
[4] B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, "High performance imaging using large camera arrays," ACM Transactions on Graphics (TOG), vol. 24, pp. 765–776, 2005.
[5] K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, "PiCam: An ultra-thin high performance monolithic camera array," ACM Transactions on Graphics (TOG), vol. 32, no. 6, p. 166, 2013.
[6] A. Ashok and M. A. Neifeld, "Compressive light field imaging," in Proc. SPIE, 2010.
[7] S. D. Babacan, R. Ansorge, M. Luessi, P. Ruiz Matarán, R. Molina, and A. K. Katsaggelos, "Compressive light field sensing," IEEE Transactions on Image Processing, vol. 21, no. 12, 2012.
[8] K. Marwah, G. Wetzstein, Y. Bando, et al., "Compressive light field photography using overcomplete dictionaries and optimized projections," ACM Transactions on Graphics, vol. 32, no. 4, p. 96, 2013.
[9] X. Zang, Y. Shi, J. Wang, W. Ding, and B. Yin, "Optimizing collaborative sparse dictionary for compressive light field photography," in Visual Communications and Image Processing (VCIP), IEEE, 2016, pp. 1–4.

Claims (1)

1. A structured observation and sparse representation collaborative optimization method for a light field camera, characterized by comprising the following steps:
Step 1: input the N × N (N = 2) light field signals {I_1, ..., I_{N×N}}, arranged in pixel-point-set order as {x^(1), ..., x^(N×N)};

Step 2: since the analysis shows that the image similarity between viewpoints is extremely high, the view blocks {x^(j)} are arranged in view order into a single vector x, i.e.

x = [x^(1)T, ..., x^(N×N)T]^T;
Step 3: in the algorithm for collaboratively optimizing the light field structured observation and dictionary, a random observation matrix and a common dictionary are given initially, and the observation matrix P and the optimized structured dictionary D are solved step by step iteratively; when solving P, the optimized observation matrix P and structured dictionary D are obtained with the gradient descent method;
specifically, for compressing four-dimensional light field data, let the four-dimensional light field image be denoted I, where I_j is the light field image of the jth view;

X = {x^(1), ..., x^(N)}

is the set of light field image blocks of the N views; the image blocks of the light field are sampled and observed to obtain Y, and each x^(j) is written in vectorized form, s × s being the size of the sampled light field image block; likewise, the set

Y = {y_1, ..., y_M}

is the light field sample observation set, and y_i can be expressed as:

y_i = Σ_{j=1}^{N} a_ij x^(j)   (1)

wherein 0 ≤ a_ij ≤ 1, a_ij represents the quantity of light radiation from the jth light field view to the ith observation, and M represents the number of observations; the goal of light field reconstruction is to recover the light field x from the sampled observation image blocks y, where y = Px, P is the observation matrix, and y_i = P_i x; x can be represented as

x = [x^(1)T, ..., x^(N)T]^T;

after the M observations,

y = [y_1^T, ..., y_M^T]^T;

the ith observation P_i can be expressed as

P_i = [a_{i,1}E, ..., a_{i,N}E],

wherein E is the (s·s) × (s·s) identity matrix, the entries of the vector q_i = [a_{i,1}, ..., a_{i,k}, ..., a_{i,N}] being repeated over the s × s pixels of each block; since y = Px with y = [y_1, ..., y_M]^T, P can be represented as P = [P_1^T, ..., P_M^T]^T, i = 1, ..., M;
for each light field sample, its sparse representation is x = Dα, and the dictionary and sparse coefficients can be expressed as D = diag{Φ, ..., Φ}, α = [α_1, ..., α_N], where D is composed of N copies of Φ;
given a set of training samples X ═ X (X)1,...,XL),X∈RO×LThe method includes the steps that O is Nxsxs, L is the sampling number of samples, each image block can be represented by a dictionary and an observation matrix, sparse coefficients formed by all samples can be represented as B, X is DB, meanwhile, the incoherence principle of optimizing observation P and a dictionary D in a compressed sensing theory is considered, an algorithm for cooperatively optimizing light field structured observation and a structured dictionary is provided, and a reconstruction model can be represented as:
Figure FDA0003555881610000021
where X is a light field sample set, P is a light field camera based observation matrix, Y is a sample observation dataset, μ { PD } is the correlation of the observation matrix P and the dictionary D, λ1Is that
Figure FDA00035558816100000211
Controlling the error term parameter, bkIs a sparse matrix BkK-th column of (1), T0In order to be sparse in degree,
the solution of the formula (2) can be divided into two parts of dictionary learning and observation learning, the two parts are alternately and iteratively solved until the condition is met, and the detailed solution steps are as follows:
step (1): dictionary learning;

with the structured observation matrix P fixed, the subproblem for the structured dictionary D and the sparse coefficients B is:

min_{D,B} ||X − DB||_F^2 + λ_1||Y − PDB||_F^2   s.t. ||b_k||_0 ≤ T_0   (3)

transforming the first term of the above equation, the model becomes

min_{D,B} || [X; √λ_1 Y] − [E; √λ_1 P] DB ||_F^2   s.t. ||b_k||_0 ≤ T_0   (4)

where E is the identity matrix; using the alternating direction method (ADMM), an auxiliary variable W is introduced, and Equation (4) is rewritten as three subproblems: a sparse-coding subproblem (5) for B, a least-squares subproblem (6) for W, and a dictionary-update subproblem (7) for D; Equation (5) is solved with the OMP algorithm to obtain B, Equation (6) is solved by taking derivatives to obtain W, and substituting the solved B and W into Equation (7) yields the structured dictionary D;
step (2): observation learning;

with the structured dictionary D fixed, the subproblem for the structured observation matrix P is:

min_P λ_1||Y − PDB||_F^2 + μ{PD}   (8)

in view of the structural properties, the first term of the above equation can be converted accordingly (Equation (9)), and is denoted I (Equation (10)); for the ith observation, the second term of Equation (8) is

II = μ{P_i D}   (11)

letting C = PD, C can be written blockwise as C = [C_1^T, ..., C_M^T]^T, wherein each C_i is composed of the N blocks a_{i,j}Φ (Equations (12) and (13)); since there is very high similarity between the light field images, it can be assumed that u = v = 1, ..., s × s, so that, with Φ = [d_1, d_2, ..., d_n], Equation (13) simplifies to Equation (14); by differentiating Equations (10) and (11), q is obtained, and the structured observation matrix P follows.
CN201810222647.7A (priority date 2018-03-19, filing date 2018-03-19): Structured observation and sparse representation collaborative optimization method for light field camera; granted as CN108492239B (Active).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810222647.7A CN108492239B (en) 2018-03-19 2018-03-19 Structured observation and sparse representation collaborative optimization method for light field camera


Publications (2)

Publication Number Publication Date
CN108492239A CN108492239A (en) 2018-09-04
CN108492239B true CN108492239B (en) 2022-05-03

Family

ID=63339742


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965758B (en) * 2021-10-21 2024-02-27 上海师范大学 Light field image coding method, device and storage medium based on block low rank approximation

Citations (10)

Publication number Priority date Publication date Assignee Title
WO2008146190A2 (en) * 2007-05-30 2008-12-04 Nxp B.V. Method of determining an image distribution for a light field data structure
CN104036489A (en) * 2014-05-09 2014-09-10 北京工业大学 Light field acquisition method
WO2015054797A1 (en) * 2013-10-20 2015-04-23 Mtt Innovation Incorporated Light field projectors and methods
CN104966314A (en) * 2015-05-15 2015-10-07 北京工业大学 Light field camera film optimizing method and dictionary training method based on compressed sensing
CN105634498A (en) * 2015-12-25 2016-06-01 北京工业大学 Observation matrix optimization method
CN105654119A (en) * 2015-12-25 2016-06-08 北京工业大学 Dictionary optimization method
CN106651778A (en) * 2016-05-25 2017-05-10 西安电子科技大学昆山创新研究院 Spectral imaging method based on self-adaptive coupling observation and non-linear compressed learning
CN107064005A (en) * 2017-06-16 2017-08-18 中国科学技术大学 The fast illuminated imaging system and algorithm for reconstructing of a kind of EO-1 hyperion light field
CN107203968A (en) * 2017-05-25 2017-09-26 四川大学 Single image super resolution ratio reconstruction method based on improved subspace tracing algorithm
CN107622515A (en) * 2017-09-06 2018-01-23 郑州大学 The physical re-organization method of squeezed light field

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US9152881B2 (en) * 2012-09-13 2015-10-06 Los Alamos National Security, Llc Image fusion using sparse overcomplete feature dictionaries
US9380221B2 (en) * 2013-02-27 2016-06-28 Massachusetts Institute Of Technology Methods and apparatus for light field photography
JP2018533066A (en) * 2015-10-09 2018-11-08 ヴィスバイ カメラ コーポレイション Holographic light field imaging device and method of use thereof


Non-Patent Citations (3)

Title
A distributed stream computing architecture for dynamic light-field acquisition and rendering system; Zhou W et al.; Transactions on Edutainment XIII, Berlin: Springer; 2017; pp. 123–132 *
A switchable light field camera architecture with angle sensitive pixels and dictionary-based sparse coding; Hirsch M et al.; Computational Photography (ICCP); 2014; pp. 1–10 *
Light field imaging technology and its applications in computer vision; Zhang Chi et al.; Journal of Image and Graphics; 2016, no. 03; pp. 5–23 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant