GB2501950A - Synthesizing an image from a light-field image using pixel mapping - Google Patents

Synthesizing an image from a light-field image using pixel mapping

Info

Publication number
GB2501950A
Authority
GB
United Kingdom
Prior art keywords
micro
lens
image
lenses
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1216108.9A
Other versions
GB2501950B (en)
GB201216108D0 (en)
Inventor
Benoit Vandame
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Publication of GB201216108D0 publication Critical patent/GB201216108D0/en
Publication of GB2501950A publication Critical patent/GB2501950A/en
Application granted granted Critical
Publication of GB2501950B publication Critical patent/GB2501950B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0025Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 for optical correction, e.g. distorsion, aberration
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0075Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/0006Arrays
    • G02B3/0037Arrays characterized by the distribution or form of lenses
    • G02B3/0043Inhomogeneous or irregular arrays, e.g. varying shape, size, height
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/0006Arrays
    • G02B3/0037Arrays characterized by the distribution or form of lenses
    • G02B3/0056Arrays characterized by the distribution or form of lenses arranged along two different directions in a plane, e.g. honeycomb arrangement of lenses
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/18Focusing aids
    • G03B13/24Focusing screens
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/232Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A method of synthesizing an image from a light-field image of a scene. The light-field image comprises a plurality of micro-images of the scene, obtained through an array of micro-lenses. A projection mapping of micro-image pixels is defined, and associated projection parameters are selected which constrain the resulting image coordinates to be integers. In this way interpolation is not required, and the resultant image is output at reduced calculation cost. The invention also relates to an imaging device for obtaining a synthesized image from a light field image. The method might also comprise displacing the micro-lenses relative to a regular lattice.

Description

TITLE OF THE INVENTION
EFFICIENT IMAGE SYNTHESIZING FOR LIGHT FIELD CAMERA
The invention relates to light-field cameras and to light-field image processing.
Light-field cameras record 4D (four-dimensional) light-field data which can be transformed into various reconstructed images, such as re-focused images with a freely selected focal distance, that is, the depth of the image plane which is in focus. A re-focused image is built by projecting the various 4D light-field pixels into a 2D (two-dimensional) image. Unfortunately the resolution of a re-focused image varies with the focal distance.
The publication "Super-resolution with plenoptic camera 2.0", Todor Georgiev and Andrew Lumsdaine, Adobe Technical Report, April 2009, describes the re-focused image for certain focalization distances. The authors observed that the resolution is better for particular focal distances.
Particular aspects of light-field cameras will first be reviewed.
Light-Field cameras design.
We consider light-field cameras which record a 4D light-field on a single sensor, such as a 2D regular array of pixels. Such light-field cameras can be for instance: 1) a plenoptic camera comprising a main lens, an array of lenses and a sensor 12; or 2) a multi-camera array comprising an array of lenses and a sensor, but without a main lens.
The array of lenses is often a micro-device, which is commonly named a micro-lens array.
Figure 1 illustrates a plenoptic camera 1 with three major elements: the main lens 10, the micro-lens array 11 and the sensor 12. Figure 2 illustrates a multi-camera array 2 with two major elements: the micro-lens array 11 and the single sensor 12.
Optionally, spacer material may be located between the micro-lens array, around each lens, and the sensor to prevent light from one lens overlapping with the light of other lenses at the sensor side.
It is worth noting that the multi-camera array can be considered as a particular case of plenoptic camera in which the main lens has an infinite focal length. Indeed, a lens with an infinite focal length has no impact on the rays of light. The present invention is applicable to plenoptic cameras as well as to multi-camera arrays.
4D Light-Field data
Figure 4 illustrates the image which is recorded at the sensor. The sensor of a light-field camera records an image of the scene which is made of a collection of 2D micro-images, also called small images, arranged within a 2D image. Each small image is produced by a lens from the array of lenses. Each small image is represented by a circle, the shape of that small image being a function of the shape of the micro-lens. A pixel of the sensor is located by its coordinates (x,y). p is the distance in pixels between the centres of two contiguous micro-lens images. The micro-lenses are chosen such that p is larger than a pixel width. A micro-lens image is referenced by its coordinates (i,j). Some pixels might not receive any light from any micro-lens; those pixels are discarded. Indeed, the space between the micro-lenses can be masked to prevent photons falling outside of a lens (if the micro-lenses are square or another close-packed shape, no masking is needed). However most of the pixels receive the light from one micro-lens. The pixels are associated with four coordinates (x,y) and (i,j). The centre of the micro-lens image (i,j) on the sensor is labelled (x_{i,j}, y_{i,j}). Figure 4 illustrates the first micro-lens image (0,0) centred on (x_{0,0}, y_{0,0}). The pixels of the sensor 12 are arranged in a regular rectangular lattice.
The micro-lenses are arranged in a regular rectangular lattice. The pixel lattice and the micro-lens lattice can be relatively rotated by an angle θ; in the illustration θ = 0. The coordinates (x_{i,j}, y_{i,j}) can be written as a function of the 4 parameters p, θ and (x_{0,0}, y_{0,0}):

x_{i,j} = p·cosθ·i − p·sinθ·j + x_{0,0}    (1)
y_{i,j} = p·sinθ·i + p·cosθ·j + y_{0,0}

Figure 4 also illustrates how an object, represented by the black squares 3, in the scene is simultaneously visible on numerous micro-lens images. The distance w between two consecutive imaging points of the same object 3 on the sensor is known as the disparity. The disparity depends on the physical distance between the camera and the object; w converges to p as the object becomes closer to the camera.
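The lattice relation of equation (1) can be sketched as a small helper (an illustrative function of ours, not part of the patent text):

```python
import math

def microlens_center(i, j, p, theta, x00, y00):
    # Equation (1): centre of micro-lens image (i, j), i.e. the lens lattice
    # rotated by theta, scaled by the pitch p and offset by the first centre.
    x = p * math.cos(theta) * i - p * math.sin(theta) * j + x00
    y = p * math.sin(theta) * i + p * math.cos(theta) * j + y00
    return x, y
```

For θ = 0 the centres simply form a rectangular grid of pitch p.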
Depending on the light-field camera design, w is either larger or smaller than p, as d is respectively larger or smaller than f (see Figure 3 and the next sub-section about the geometrical property of the light-field camera). Figure 4 illustrates a case where w is smaller than p. An important characteristic is the replication number r of consecutive lenses through which an object is imaged. r is expressed in units of lenses. It is estimated by considering the cumulated disparity over r consecutive lenses: r(p − w) < p. One obtains the following characteristic for the number of replications r:

r = ⌈ p / (p − w) ⌉    (2)

Where ⌈a⌉ denotes the ceiling value of a. This equation is an estimation which assumes that the micro-lens images are square with no left-over space (i.e. close-packed or abutting). r is given for one dimension; an object is therefore visible in r² micro-lens images considering the 2D grid of micro-lenses. Without the ceiling function, r would be a non-integer value; r is in fact an average approximation. In practice, an object can be seen r or r + 1 times depending on rounding effects. For example, in Figure 4, the black square object 3 is replicated five times in the x direction and three or four times in the y direction.
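As a sketch, the replication number of equation (2) can be computed directly (function name ours, assuming the w < p case shown in Figure 4):

```python
import math

def replication_number(p, w):
    # Equation (2): r = ceil(p / (p - w)), valid for w < p;
    # an object then appears in about r**2 micro-images in 2D.
    return math.ceil(p / (p - w))
```

For instance, p = 10 and w = 8 give r = 5.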
Geometrical property of the light-field camera
The previous section introduced w, the disparity of a given observed object, and p, the distance between two consecutive micro-lens images. Both distances are defined in pixel units. They are converted into physical distances (metres) W and P by multiplying w and p respectively by the pixel size δ of the sensor: W = δw and P = δp.
The distances W and P can be computed knowing the characteristics of the plenoptic camera. Figure 3 gives a schematic view of the plenoptic camera with the following elements: * The main lens 10 is an ideal thin lens with a focal distance F. * The micro-lens array 11 is made of micro-lenses having a focal distance f.
The pitch of the micro-lenses is φ. The micro-lens array is located at the fixed distance D from the main lens. The micro-lenses might have any shape, such as circular or square. The diameter of the shape is smaller than or equal to φ. One can consider the particular case where the micro-lenses are pinholes; in this case the following equations remain valid with f = d.
* The sensor 12 is made of a square lattice of pixels, each having a physical size of δ; δ is in units of metres per pixel. The sensor is located at the fixed distance d from the micro-lens array.
* The object (not visible in Figure 3) is located at the distance z from the main lens. This object is focused by the main lens at a distance z' from the main lens. The disparity of the object between two consecutive lenses is equal to W.
The distance between two micro-lens image centres is P. Following the mathematics of thin lenses we have:

1/z + 1/z' = 1/F    (3)

From the Thales law we can derive:

(D − z')/φ = (D − z' + d)/W    (4)

Mixing the two previous equations, the following equation is easily demonstrated:

W = φ ( 1 + d / (D − zF/(z − F)) )    (5)

This equation gives the relation between the physical object located at distance z from the main lens and the disparity W of the corresponding views of that object.
This relation is built using geometrical considerations and does not assume that the object is in focus at the sensor side. The focal length f of the micro-lenses and other properties such as the lens apertures determine whether the micro-lens images observed on the sensor are in focus. In practice, one typically tunes the distances D and d once for all using the relation:

1/(D − z') + 1/d = 1/f    (6)

The micro-lens images of an object located at distance z from the main lens appear in focus on the sensor so long as the circle of confusion is smaller than the pixel size δ. In practice the range of distances z which allows observing in-focus micro-images is large and can be optimized depending on the focal length f, the apertures of the main lens and the micro-lenses, and the distances D and d: for instance one can tune the micro-lens camera to have a range of z from 1 metre to infinity [1, ∞].
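Equations (3)-(5) can be combined numerically; the sketch below (function name and sample values ours) returns the physical disparity W for an object at distance z:

```python
def disparity_W(z, F, D, d, phi):
    # Equation (3): the main lens focuses the object plane z at z'.
    z_prime = z * F / (z - F)
    # Equation (4): Thales between the micro-lens plane (pitch phi, at
    # distance D from the main lens) and the sensor (at distance D + d).
    return phi * (D - z_prime + d) / (D - z_prime)
```

For example F = 50 mm, D = 60 mm, d = 1 mm, φ = 0.1 mm and z = 2 m give W ≈ 1.115·φ, slightly larger than the pitch as expected.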
Also from the Thales law one derives P:

e = (D + d)/D ;  P = φ·e    (7)

The ratio e defines the enlargement between the micro-lens pitch and the pitch of the micro-lens images projected at the sensor side.
Variation of the disparity
The light-field camera being designed, the values D, d, f and F are tuned and fixed. The disparity W varies with z, the object distance. One can note particular values of W: * W_focus is the disparity for an object at distance z_focus such that the micro-lens images are exactly in focus; it corresponds to equation (6). Mixing equations (4) and (6) one obtains:

W_focus = φ·d/f    (8)

* W_{aF} is the disparity for an object located at distance z = aF from the main lens. According to equation (5) one obtains:

W_{aF} = φ ( 1 + d(a − 1) / ((a − 1)D − aF) )    (9)
The variation of disparity is an important property of the light-field camera. The ratio W_{aF}/W_focus is a good indicator of the variation of disparity. Indeed the micro-lens images of objects located at z_focus are sharp, and the light-field camera is designed to observe objects around z_focus which are also in focus. The ratio is computed with equations (8) and (9):

W_{aF}/W_focus = (f/d) ( 1 + d(a − 1) / ((a − 1)D − aF) )    (10)

The ratio is very close to one. In practice the variation of disparity is typically within a few percent around W_focus.
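A quick numerical check of equations (8)-(10) (a sketch of ours, with sample values): when the micro-lens focal length f is tuned via equation (6) for an object at z = aF, the ratio of equation (10) evaluates to exactly one:

```python
def w_focus(phi, d, f):
    # Equation (8): disparity of an object whose micro-images are in focus.
    return phi * d / f

def disparity_ratio(phi, d, f, D, F, a):
    # Equation (9) divided by equation (8), i.e. the ratio of equation (10).
    w_aF = phi * (1 + d * (a - 1) / ((a - 1) * D - a * F))
    return w_aF / w_focus(phi, d, f)
```

With F = 50 mm, D = 60 mm, d = 1 mm and a = 40 (z = 2 m), choosing f from equation (6) yields a ratio of 1, and nearby distances stay within a few percent.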
Image re-focusing computation
A synthesized or re-focused image is computed by projecting the 4D light-field pixels (x,y,i,j) into the projected coordinates (X,Y) of the re-focused image. Most of the projected coordinates are non-integer and interpolations are required to accumulate the 4D light-field values into the re-focused image. While 4D light-field pixels are projected into the re-focused image, a weight-map records the number of accumulated projected pixels. The weight-map also records the interpolation performed on the non-integer coordinates. Once all 4D light-field pixels have been projected into the re-focused image and the weight-map, the re-focused image is divided by the weight-map so that each re-focused pixel receives the same average contribution.
Limitation of current re-focus algorithms
An object of certain aspects of the invention is to provide synthesized images with limited mathematical operations. A synthesized image corresponds to an image computed from image data such as those collected from a photo-sensor in a light-field (4D) imaging device. In embodiments, generating a synthesized image is based on a re-focusing algorithm including for example interpolation and/or weight-map processing.
In accordance with one aspect of the present invention there is provided a method of synthesizing an image from a light-field image of a scene comprising a plurality of micro-images of the scene, the micro-images being obtained on a photo-sensor having an array of pixels through an array of micro-lenses, each micro-lens producing an image of the scene on an associated region of the photo-sensor forming a micro-image, each pixel having pixel coordinates (x,y) relative to the photo-sensor, and each micro-lens having micro-lens coordinates (i,j) relative to the micro-lens array, wherein the method comprises: defining projected pixel coordinates (X,Y) of the synthesized image, and defining a projection mapping according to the equations:

X = u·x − v·i + K(i,j)
Y = u·y − v·j + L(i,j)

where u is an enlarging value and v is a shift value for enlarging and shifting the micro-images, K(i,j) and L(i,j) being integer values dependent upon the micro-lens coordinates and the micro-lens arrangement, selecting an enlarging value u and a shift value v such that the projected pixel coordinates (X,Y) are integer values, and projecting pixel values from the photo-sensor into projected pixel coordinates (X,Y) according to said projection mapping to generate said synthesized image.
Using this simple equation while maintaining the projected pixel coordinates (X,Y) as integer values allows synthesized images to be generated with low calculation requirements and thus with short computation time.
According to another particular embodiment, the enlarging value u and the shift value v are selected as integers. These features further reduce the computation load and speed up the computation process.
Advantageously, the enlarging is realized by pixel up-sampling with zero insertion, thereby simplifying the synthesized image construction.
According to a further embodiment, the method comprises, before the projecting step, a step of identifying a set of possible synthesized images which can be generated. Therefore possible synthesized images can rapidly be obtained.
More precisely, the identifying step consists in determining a set of disparities w_{n,N} such that w_{n,N} = c + n/N, where c is a given integer, for any integer values n and N such that 0 < n < N ≤ r, r being the number of replications of an imaged object, and such that v = u·w_{n,N}. This particular feature allows all the disparities for which high definition synthesized images can be generated to be defined simply.
Advantageously, the projecting step is implemented for a plurality of the identified synthesized images. Therefore, a plurality of high resolution synthesized images can rapidly be generated.
Advantageously, the method comprises a step of determining the total number of synthesized images which can be generated.
In a particular embodiment, the array of micro-lenses has a regular arrangement such that K(i,j) and L(i,j) take a zero value. Zeroing these parameters simplifies the equations defining the projected pixel coordinates, which in turn simplifies the computation of the projected coordinates.
According to a further embodiment, the enlarging value u is selected such that a maximum of projected pixel coordinates (X,Y) receive at least one pixel projection.
Therefore, synthesized images can be obtained with low computation requirement.
Such images will readily be generated without time-consuming computation processes such as interpolation.
More precisely, the enlarging value u is selected such that u = N / gcd(n, N) where u is an integer value, for any integer values n and N such that 0 < n < N ≤ r, r being the number of replications of an imaged object. This particular feature allows a regular sampling of projected pixels to be defined in a simple way to generate a high definition synthesized image.
In a further particular embodiment the micro-lenses are arranged on the array with displacements relative to a regular lattice, the displacements being defined by the integer values K(i,j) and L(i,j). This feature allows the sampling of the projected coordinates to be substantially improved in comparison with regularly arranged micro-lenses. Synthesized images having high resolution can be obtained with low computation cost. The computation of synthesized images can further be performed without interpolation. Advantageously, the micro-lens array comprises a plurality of micro-lens sub-sets, each sub-set comprising a two-dimensional array of N by N micro-lenses, the enlarging value u being selected equal to N. This feature further simplifies the computation of synthesized images.
The present invention also extends to an imaging device for obtaining a synthesized image by implementing the above-mentioned method. According to a particular embodiment, the micro-lenses of each sub-set are displaced relative to the regular lattice according to a common pattern, the common pattern defining different displacements for each micro-lens of the sub-sets.
The common pattern defines a displacement model whereby each micro-lens of a sub-set is located according to the common pattern but differently located with respect to the regular lattice. The pattern, in certain embodiments, defines a number of possible displacements, and hence positions, for each micro-lens. Although it is simpler for the actual displacements of micro-lenses in each sub-set to be the same, embodiments of the invention allow different sub-sets of the plurality to have different displacements, while still adhering to the same, common pattern or model of displacements. However, the pattern or model is such that even with a certain degree of flexibility provided for each lens displacement, the relative displacements between micro-lenses of a sub-set adhere to a controlled relationship, and such relationship is observed similarly across sub-sets of the plurality.
Thanks to these characteristics, the resolution of an image reconstructed from the micro-images is improved over conventional light-field cameras. In particular a good diversity of sampling is obtained while variations of resolution are avoided. Thus a more regular resolution is obtained for any focalization distance. Typically, all subsets of the micro-lens array share a common pattern. However, the plurality of subsets need not encompass the whole micro-lens array. It could be envisaged for example that a first common pattern could apply to a first plurality of subsets, and a second common pattern could apply to a second plurality.
According to another particular embodiment the micro-lens array comprises a plurality of micro-lens sub-sets, each sub-set comprising an array of N by N micro-lenses, each micro-lens of the sub-set having a focal distance f, wherein micro-lenses of each sub-set are displaced relative to the regular lattice according to a displacement pattern, said displacement pattern defining the displacement of each micro-lens as integer multiples of unit vectors, said unit vectors having a magnitude τ, wherein the magnitude τ is a function of f/N.
These features allow the superposition or clustering of pixels to be reduced when reconstructing an image from the micro-images. Advantageously, the magnitude τ has a fixed value representing characteristics of the imaging device.
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which: Figure 1 is a schematic view of a light-field camera; Figure 2 is a schematic view of a particular light-field camera; Figure 3 is a detailed view of a light-field camera made of perfect lenses; Figure 4 is a schematic view of the 4D light-field data recorded by the 2D image sensor of a light-field camera; Figure 5 is an illustration of the coordinates of the projected 4D light-field pixels into the 2D projected image in a normal case; Figure 6 is an illustration of the coordinates of the projected 4D light-field pixels into the 2D projected image in a case associated with a particular disparity; Figure 7 illustrates the projection of 4D light-field data into a re-focused image for a given disparity according to an embodiment of the invention; Figure 8 is a schematic view of a micro-lens array according to an embodiment of the invention with displaced micro-lenses for a super-resolution factor N=2; Figure 9 is a schematic view of a micro-lens array according to an embodiment of the invention with displaced micro-lenses for a super-resolution factor N=3; Figure 10 illustrates a normalized maximum sampling step of the image projected/reconstructed from 4D light-field data obtained with the micro-lens array of Figure 8 with displaced micro-lenses for a super-resolution factor of N=2; Figure 11 illustrates a normalized maximum sampling step of the image projected/reconstructed from 4D light-field data obtained with the micro-lens array of Figure 9 with displaced micro-lenses for a super-resolution factor of N=3.
Image re-focusing method
A major interest of light-field cameras is the ability to compute 2D images where the focal distance is freely adjustable. To compute a 2D image, also called a synthesized image, out of the 4D light-field, the small images observed on the sensor are zoomed, shifted and summed. A given pixel (x,y) of the sensor associated with the micro-lens (i,j) is projected into a 2D image according to the following equation:

X = s·g·x + s(1 − g)·x_{i,j}    (11)
Y = s·g·y + s(1 − g)·y_{i,j}

Where (X,Y) are the coordinates of the projected pixel on the 2D re-focused image.
The coordinates (X,Y) are not necessarily integer. The pixel value at location (x,y) is projected into the 2D re-focused image using a common image interpolation technique.
The parameter s controls the size of the 2D re-focused image, and g controls the plane which is in focus (the plane perpendicular to the optical axis, for which the 2D image is in focus) as well as the zoom performed on the small images. The output image is s² times the sensor image size. In this formulation the size of the re-focused image is independent of the parameter g, and the small images are zoomed by sg.
The previous equation can be reformulated owing to the regularity of the positions of the centres of the micro-lens images.
X = s·g·x + s·p(1 − g)(cosθ·i − sinθ·j) + s(1 − g)·x_{0,0}    (12)
Y = s·g·y + s·p(1 − g)(sinθ·i + cosθ·j) + s(1 − g)·y_{0,0}

The parameter g can be expressed as a function of p and w. It is computed by simple geometry. It corresponds to the zoom that must be performed on the micro-lens images, using their centres as reference, such that the various zoomed views of a same object become superposed. One deduces the following relation:

g = p / (p − w)    (13)

This relation is used to select the distance z of the objects in focus in the projected image. The value of g can be negative depending on the light-field camera design. A negative value means that the micro-lens images need to be inverted before being summed. One notices that r = ⌈|g|⌉.
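Equation (13) is easy to evaluate; the sketch below (our illustrative code) reproduces the Figure 5 settings, where g = p/(p − w) ≈ 7.677:

```python
import math

def zoom_g(p, w):
    # Equation (13): zoom applied to the micro-lens images.  A negative g
    # (w > p) means the micro-images must be inverted before summation.
    return p / (p - w)

g = zoom_g(173.67, 151.05)   # ~7.677, so about ceil(|g|) = 8 replications
```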
Including this last relation in equation (12), one rewrites the projection equation:

X = s·g·x − s·g·w(cosθ·i − sinθ·j) − (s·g·w/p)·x_{0,0}    (14)
Y = s·g·y − s·g·w(sinθ·i + cosθ·j) − (s·g·w/p)·y_{0,0}

This last formulation has the great advantage of simplifying the computation of the projected coordinates by splitting the pixel coordinates (x,y) and the lens coordinates (i,j) of the 4D light-field.
Sampling property of the re-focused image
The different pixels of the light-field image are projected into the re-focused image according to the above-described method and define a set of projected pixel coordinates (X,Y) in the grid of the re-focused image. It has been recognized by the present inventors that the distribution of the set of projected coordinates is an important property which can be used to characterise the resolution level of the re-focused image, and in particular the regularity or homogeneity of the distribution.
As will be explained later the present invention addresses this homogeneity.
It is not trivial to characterise the homogeneity of the 4D light-field pixels projected into the 2D re-focused image. To study this property one considers a simple projection equation assuming that the rotation angle θ is zero and that the coordinates of the first micro-lens centre (x_{0,0}, y_{0,0}) are equal to (0,0). This assumption does not impact the proposed study. One obtains the following simplified projection equation with u = sg:

X = u·x − v·i = u(x − w·i)    (15)
Y = u·y − v·j = u(y − w·j)

Where v = u·w.
This set of equations shows a simple relation between the 4 dimensions x, y, i, j and the projected pixel coordinates (X,Y). The value u = sg is a constant independent of w if s = k/g where k is any constant. In this condition, the size of the re-focused image is a function of w and is equal to k/g times the size of the original image.
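The simplified mapping of equation (15) amounts to two multiply-adds per pixel; a minimal sketch (function name ours):

```python
def project_pixel(x, y, i, j, u, w):
    # Equation (15) with theta = 0 and (x00, y00) = (0, 0):
    # (X, Y) = (u*(x - w*i), u*(y - w*j)), where v = u*w is the shift.
    v = u * w
    return u * x - v * i, u * y - v * j
```

When u and v = u·w are integers, (X,Y) is integer for every sensor pixel and no interpolation is needed.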
Aspects of the present invention derive, at least in part, from the construction of the elegant projection mapping shown in equation (15) above. It is further proposed to exploit this mapping to generate synthesized images with low calculation requirements and thus short computation time. This is realized in embodiments by defining projected pixel coordinates (X,Y) having integer values.
In the simplified equation (15), u corresponds to an enlarging value whereby each micro-image is enlarged by multiplying each pixel coordinate (x,y) by u, while v is a shift value whereby each micro-image is shifted by multiplying each lens coordinate (i,j) by −v. The enlarging value and the shift value are selected such that the projected pixel coordinates (X,Y) are integer values. A synthesized image is then generated by summing the enlarged and shifted micro-images.
One can notice that the 4 coordinates x, y, i, j of relation (15) are integer values. Therefore, selecting the shift value v and the enlarging value u as integer values will further help to reduce the calculation requirements since the projected coordinates (X,Y) will directly take integer values. The number of images that can be synthesized will however be limited.
Figure 5 illustrates the 1D projected coordinate X for particular settings: s = 0.5, g = 7.677, w = 151.05, u = 3.83 and p = 173.67. The x-axis shows the projected coordinates X; the y-axis indicates the micro-lens coordinates i of the projected pixels. One notices that 8 micro-lenses contribute to the observed projected coordinates X, which in this case is equal to r + 1. The distribution of the projected points X is not homogeneous, since the values h and H, representing respectively the minimum and the maximum sampling steps between two consecutive projected coordinates X, are substantially different from each other. In this example, the projected coordinates are nearly superposed, clustered in groups of eight.
Figure 6 illustrates the same view with the same settings except that w = 151.25. The distribution of the projected points X is homogeneous, the projected points being distributed with equal spacing along the axis X. One notices this ideal case where h = H = u·{w}, where {a} denotes the fractional part of a. {w} plays a major role in the maximum sampling step between the projected coordinates.
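The contrast between Figures 5 and 6 can be reproduced by examining the fractional offsets (−w·i mod 1) at which each lens's pixels land in the projected grid; the sketch below (our code, using the figure settings) measures h and H in units of u:

```python
def sampling_steps(w, r):
    # Distinct fractional positions (-w*i mod 1) of lenses i = 0..r-1,
    # i.e. the sub-pixel phases, in units of u, of the projected coordinates.
    offsets = sorted({(-w * i) % 1.0 for i in range(r)})
    gaps = [b - a for a, b in zip(offsets, offsets[1:])]
    gaps.append(offsets[0] + 1.0 - offsets[-1])   # wrap-around gap
    return min(gaps), max(gaps)                   # (h, H) in units of u

h1, H1 = sampling_steps(151.05, 8)   # Figure 5: h ~ 0.05, H ~ 0.65 (clustered)
h2, H2 = sampling_steps(151.25, 8)   # Figure 6: h = H = 0.25 (homogeneous)
```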
Several cases of h and H occur depending on {w}: 1. With {w} = 0: h = 0 and H = u. On average, r projected coordinates X overlap. The distance between two non-overlapped consecutive X is constant and equal to H. The projected coordinates define a perfect sampling with a constant sampling step equal to H = u.
2. With {w} = n/N where n and N are positive integers such that 0 < n < N ≤ r. In this case the number of overlapped projected coordinates X is equal, on average, to r·gcd(n, N)/N, where gcd(n, N) refers to the greatest common divisor of n and N. The projected coordinates define a perfect sampling with a constant sampling step equal to H = u·gcd(n, N)/N. The sampling step is smaller if N is a prime number. Indeed, if N is not a prime number, the number of overlapped coordinates increases as well as the sampling step. The perfect sampling of the projected coordinates X with the smallest sampling step is obtained for {w} = n/r and gcd(n, r) = 1.
3. With {w} = n/N where n and N are positive integers such that 0 < n < N and N > r. In this case there are no overlapping projected coordinates X, but the sampling defined by the projected coordinates is not perfect: h = u·gcd(n,N)/N and H = u − h. In other words some projected pixels are sampled with a small sampling step equal to h, while other pixels are sampled with a larger sampling step H: the projected coordinates appear clustered. In such a case a synthesized image will suffer from superposition of pixels and/or pixel gaps, leading to a low-resolution image.
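The three cases above can be cross-checked numerically. The sketch below (the helper name `sampling_steps` is ours, not from the patent) computes the minimum and maximum gaps h and H between the 1D projected lens offsets, working modulo u with exact rational arithmetic to avoid floating-point artefacts:

```python
from fractions import Fraction

def sampling_steps(u, frac_w, r):
    """Min and max sampling steps (h, H) of the 1D projected coordinates.

    Micro-lens i contributes projected coordinates congruent to
    (-u * frac_w * i) mod u; duplicated offsets give a zero gap (overlap),
    and the wrap-around gap closes the interval [0, u[.
    """
    offs = sorted((-u * frac_w * i) % u for i in range(r))
    gaps = [b - a for a, b in zip(offs, offs[1:])]
    gaps.append(u - offs[-1] + offs[0])  # wrap-around gap
    return min(gaps), max(gaps)
```

For instance {w} = 0 yields (h, H) = (0, u) as in case 1, while {w} = 1/4 with u = 4 and r = 4 replications yields the ideal case h = H.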
The first and second cases correspond to a regular sampling of the projected coordinates (X,Y). The sampling step of the projected coordinates is equal to H = u·gcd(n,N)/N with {w} = n/N, where n and N are positive integers such that 0 < n < N ≤ r. The projected coordinates are equal to integer values if H is an integer. The projected coordinates cover all integer values (from 0 to the size of the re-focused image) if H = 1. With those conditions, one deduces that:
u = N / gcd(n, N)   (16)

where u is an integer value whatever n and N such that 0 < n < N ≤ r.
Consequently, knowing that u = sg, the parameter s is equal to:
s = (N / gcd(n, N)) · (p − w) / p   (17)

The previous equations define the parameters u and s such that the projected coordinates (X,Y) are integer values and define a regular sampling of projected pixels. As explained above, obtaining a regular sampling is important to reduce pixel clustering. A regular sampling is obtained when each projected pixel coordinate (X,Y) receives at least one projected pixel. It must be noted that certain projected pixel coordinates do not always receive a projected pixel, as will be explained in relation to Figure 7.
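Equations (16) and (17) translate directly into code; a minimal sketch (the function name is ours):

```python
from math import gcd

def enlarging_and_scale(n, N, p, w):
    """u from eq. (16) and s from eq. (17) for a disparity with {w} = n/N."""
    u = N // gcd(n, N)       # always an integer for 0 < n < N <= r
    s = u * (p - w) / p      # equivalently s = u / g with g = p / (p - w)
    return u, s
```

For example n=4, N=6 gives u = 6/gcd(4,6) = 3, a smaller enlargement than the u = N obtained when n is prime to N.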
By using the above mentioned particular properties, synthesized images having high resolution can be obtained with low computation requirement. Such images will therefore be readily obtained without time consuming computation process such as interpolation process. This might be of particular interest when looking to rapidly obtain a plurality of high resolution images of a scene or for video applications but also for coding or transmission of light field data.
These properties are valid for any disparity w belonging to the set of disparities W_{c,n,N} defined such that {w_{c,n,N}} = n/N with 0 ≤ n < N ≤ r and ⌊w⌋ = c, where c is a given integer. In other words:

w_{c,n,N} = c + n/N   (18)

By determining the set of disparities W_{c,n,N} it is possible to identify all the high resolution images which can potentially be generated, before generating them. These images are identified by determining their disparity. The synthesis of each or a plurality of the identified high resolution images can then be implemented using the method described above.
Number of re-focused images computed with no interpolation

It is important to evaluate the total number of distinct disparities which belong to the set W_{c,n,N}. For a given c, the total number of disparities belonging to W_{c,n,N} is equal to r(r+1)/2. w_{c,n,N} being a rational number, some values are identical, for instance w_{c,4,6} = c + 4/6 = c + 2/3 = w_{c,2,3}. To count the distinct values of w_{c,n,N} one needs to select n and N such that gcd(n,N) = 1, or equivalently n must be prime to N. For a given N the number of n such that gcd(n,N) = 1 is given by Euler's totient function φ(N). For a given c, the number q of distinct w_{c,n,N} is equal to:

q = Σ_{N=1}^{r} φ(N)   (19)

q is the number of distinct re-focused images that can be computed without interpolation for any disparity belonging to the set w_{c,n,N} = c + n/N for a given c. The following table gives the value of q for various numbers of object replications r.
r  q     r   q     r   q     r   q     r   q
1  1     6   12    11  42    16  80    21  140
2  2     7   18    12  46    17  96    22  150
3  4     8   22    13  58    18  102   23  172
4  6     9   28    14  64    19  120   24  180
5  10    10  32    15  72    20  128   25  200

The re-focused images are computed for a range of disparities [w_min, w_max] which often corresponds to objects at distances [z_min, z_max] which are observed in focus on the micro-lens images. The number Q of re-focused images which are computed without interpolation is obtained by summing the various q for c varying from ⌊w_min⌋ to ⌈w_max⌉:

Q = Σ_{c=⌊w_min⌋}^{⌈w_max⌉} q   (20)

It is worth noting that r, given by equation (2), might vary slightly depending on [w_min, w_max].
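The totient sum of equation (19) and the table values can be reproduced with a few lines of Python (a brute-force totient is used here for clarity; the helper names are ours):

```python
from math import gcd

def euler_phi(N):
    """Euler's totient: count of 1 <= n <= N with gcd(n, N) == 1."""
    return sum(1 for n in range(1, N + 1) if gcd(n, N) == 1)

def distinct_disparities(r):
    """q = sum of phi(N) for N = 1..r (eq. 19): distinct w = c + n/N."""
    return sum(euler_phi(N) for N in range(1, r + 1))
```

Running `distinct_disparities` for r = 1..25 regenerates the q column of the table, e.g. q = 22 for r = 8 and q = 200 for r = 25.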
By computing the number Q it is therefore possible to determine the number of high resolution images of a scene which can potentially be generated.
Computation of the re-focused images

This section describes the computation of a re-focused image for a disparity w = c + n/N, with c, n and N being integers such that 0 ≤ n < N ≤ r. The re-focused image computed for the given disparity w corresponds to a focus point located at distance z from the camera; by inverting equation (5), one deduces z from w (equation (21)).

The input image recording the 4D light-field has a size of N_x by N_y pixels. The re-focused image size depends on the selected disparity and has a size of N_X by N_Y pixels:

N_X = sN_x = uN_x − uwN_x/p
N_Y = sN_y = uN_y − uwN_y/p   (22)

The 4D coordinates (x,y,i,j) vary between (0,0,0,0) and (N_x − 1, N_y − 1, N_x/p, N_y/p). If the light-field camera is mounted such that d < f, then the disparity w is larger than p. In this case the projected coordinates (X,Y) given by equation (15) are negative; they are multiplied by −1 to be projected into the re-focused image with positive coordinates.
Figure 7 illustrates the projection of the 4D light-field for a given disparity w = 4 + 1/2 = 4.5 into a re-focused image; equation (16) gives u = 2. The 4D light-field pixels filled in white are not projected because they do not belong entirely to a micro-lens. Projected pixels are projected and displayed according to the defined mapping.
u = 2 acts as a zoom factor for a given micro-lens image, while −v = −uw = −9 acts as a translation between 2 consecutive projected micro-lens images. For example pixel P(1,0) belonging to lens (0,0) has coordinates (x,y,i,j) = (1,0,0,0) in the 4D light-field and is projected into coordinates (X,Y) = (2,0) in the re-focused image by applying equation (15). The black pixel P(12,1) from the third horizontal lens (2,0), having coordinates (12,1,2,0) in the 4D light-field, projects into the pixel having coordinates (6,2) in the re-focused image, which coordinates also receive pixels from other locations of the 4D light-field, such as for example the pixel having coordinates (21,1,4,0). The projected pixels are summed into the re-focused image. The summing consists in summing the values collected by the photo-sensor in each pixel of the array. The values can be for example colour components (luminance, chrominance). The number of projected pixels is recorded into a weight-map having the size of the re-focused image. A pixel (X,Y) of the weight-map is incremented by one when a 4D light-field pixel is projected into the re-focused image at location (X,Y). When all pixels are projected, the re-focused image is divided by the weight-map. Some pixels might receive no contribution and have a null weight, which can occur: 1) at the boundary of the re-focused image; and 2) if some 4D light-field pixels are not fully covered by a micro-lens. This event occurs especially when N converges to r.
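The projection-and-normalization procedure described above can be sketched as follows for one colour plane. This is an illustrative implementation, not the patent's code; it assumes an integer micro-lens pitch p in pixels and the mapping X = ux − vi, Y = uy − vj of equation (15):

```python
import numpy as np

def refocus(lf, p, u, w):
    """Sum 4D light-field pixel values into a re-focused image with a weight-map.

    lf : 2D array of sensor values (one colour plane); micro-lens (i, j)
         covers the p x p pixel block starting at (i*p, j*p)
    p  : micro-lens pitch in pixels (assumed integer here for simplicity)
    u  : integer enlarging value; v = u * w is the shift between micro-images
    """
    v = u * w
    rows, cols = lf.shape
    out_r = int(u * rows - v * rows / p) + 1   # size from eq. (22)
    out_c = int(u * cols - v * cols / p) + 1
    image = np.zeros((out_r, out_c))
    weight = np.zeros((out_r, out_c))
    for y in range(rows):
        for x in range(cols):
            i, j = x // p, y // p              # micro-lens hosting pixel (x, y)
            X = round(u * x - v * i)           # projection mapping, eq. (15)
            Y = round(u * y - v * j)
            if 0 <= X < out_c and 0 <= Y < out_r:
                image[Y, X] += lf[y, x]        # sum the contributions
                weight[Y, X] += 1              # count them in the weight-map
    covered = weight > 0
    image[covered] /= weight[covered]          # null-weight pixels stay at 0
    return image, weight
```

With a constant input light-field, every covered re-focused pixel comes out equal to that constant, whatever its weight, which is a convenient sanity check of the normalization.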
Pixels of the re-focused image receive on average the contribution of (r/N)² 4D light-field pixels. The size of the re-focused image is optimal for any disparity belonging to W_{c,n,N} because: 1) all projected pixels have integer coordinates; and 2) the sampling step of the re-focused image is the smallest possible which ensures that all re-focused pixels receive at least the contribution from one 4D light-field pixel, assuming all 4D light-field pixels are valid (thus fully covered by a micro-lens).
The enlarging of the micro-image obtained by multiplying each pixel coordinate (x,y) by u is advantageously completed by pixel up-sampling with zero insertion. This step consists in inserting a zero value in the pixels generated between pixels receiving a projection from the 4D light-field. Such pixels are present in Figure 7 between pixels P(0,0) and P(1,0) or between pixels P(1,0) and P(2,0) of the re-focused image.
Constraint for designing the camera

The optimal projection with no interpolation described above assumes that the squared pixel lattice and the squared micro-lens lattice have no relative rotation (θ = 0) and that both lattices are distant by d. The constraint θ = 0 can be relaxed to θ < tan⁻¹(1/(t·r·p)), with for instance t = 4, such that the misplacement of consecutive projected pixels does not exceed 1/t. This constraint gives an upper limit to the rotation angle between the 2 lattices.
In the following, a particular embodiment of the present invention will be described which further improves the synthesizing of a light-field image. This embodiment consists in displacing the micro-lenses on the micro-lens array. By using displaced micro-lenses a more regular resolution is obtained for any focalization distance, while the computation of the synthesized images is made simpler and faster, as will be described below.
Micro-lens array with displaced micro-lenses

The pixels of the 4D light-field image are projected into a re-focused image. As described above, the maximum sampling step of the projected coordinates depends on {w}, the fractional part of the disparity. The variations of the sampling step are due to the superposition or clustering of the projected coordinates for certain values of {w}, as illustrated in figure 5.
To decrease the superposition or the clustering of the projected coordinates (X,Y), the micro-lens images are displaced as compared to a regular array, so as to reduce or prevent overlapping or clustering of projected pixels. In other words, in embodiments of the invention the centre of a given micro-lens (i,j) is shifted by a given shift (Δ_x(i,j), Δ_y(i,j)) so that the modified projected coordinates (X',Y') of this new light-field camera become:

X' = ux − uw(i + Δ_x(i,j)) = X − uwΔ_x(i,j)
Y' = uy − uw(j + Δ_y(i,j)) = Y − uwΔ_y(i,j)   (23)

(Δ_x(i,j), Δ_y(i,j)) are shifts given in units of the distance between the micro-lens centres, or the micro-lens image centres. Equation (23) can also be written with v = uw:

X' = ux − vi + K_w(i,j)
Y' = uy − vj + L_w(i,j)

where K_w(i,j) and L_w(i,j) are integer values characterizing the displacement of a micro-lens having coordinates (i,j).
The motivation for moving the micro-lenses is to have a perfect and constant sampling of the projected coordinates (X',Y') for any w = ⌊w⌋ + n/N, where N is a selected positive integer smaller than or equal to r, and n is any integer such that n ∈ [0,N[. In a conventional light-field camera, the sampling step is a function of n for a given N. If the sampling step is made independent of n, then N acts as a super-resolution factor.
Equation (23) becomes:

X'' = (N/u)X' = N(x − ⌊w⌋i) − ni − Δ_x(i,j)(N⌊w⌋ + n)
Y'' = (N/u)Y' = N(y − ⌊w⌋j) − nj − Δ_y(i,j)(N⌊w⌋ + n)   (24)

(X'',Y'') are normalized projected coordinates such that (X'',Y'') are integers for a perfect sampling of the projected coordinates (X',Y'). For a perfect sampling, Δ_x(i,j)(N⌊w⌋ + n) and Δ_y(i,j)(N⌊w⌋ + n) must also be integers, respectively equal to k(i,j) and l(i,j). These constraints give the following values:

Δ_x(i,j) = k(i,j)/(N⌊w⌋ + n)    Δ_y(i,j) = l(i,j)/(N⌊w⌋ + n)   (25)

The displacement of the micro-lens images depends on w. In other words, for a given micro-lens displacement (δ_x(i,j), δ_y(i,j)), the shift of the corresponding micro-lens image depends on the disparity w. The previous equations can be approximated by taking two considerations into account: 1) w >> n/N; and 2) the variations of w are small, so that w can be considered constant and equal to w_focus. Indeed, it has been shown (cf. equation (10)) that the ratio w/w_focus is typically very close to 1. In this condition, equation (25) can be approximated by:

Δ_x(i,j) ≈ k(i,j)/(N·w_focus)      Δ_y(i,j) ≈ l(i,j)/(N·w_focus)
Δ_x(i,j) ≈ δ·k(i,j)/(N·W_focus)    Δ_y(i,j) ≈ δ·l(i,j)/(N·W_focus)   (26)

The second line of the previous equation is given knowing that w_focus = W_focus/δ, where δ is the physical size of a pixel. The approximation of (Δ_x(i,j), Δ_y(i,j)) does not depend on the disparity w. Thus, by using this approximation, it is possible to build a micro-lens array with an irregular grid of micro-lenses such that the projected coordinates (X',Y') do not overlap or cluster as happens for the projected coordinates (X,Y) of conventional light-field imaging devices.
Condition for an optimal micro-lens displacement for optimal homogeneity

A remaining question is how to define the 2 functions k(i,j) and l(i,j) to have optimum micro-lens displacements such that the projected coordinates (X'',Y'') have a minimum clustering, and a perfect sampling when w = ⌊w⌋ + n/N.
Equation (24) can be simplified considering equation (26) and w >> N:

X'' = (N/u)X' = N(x − ⌊w⌋i) − ni − k(i,j)
Y'' = (N/u)Y' = N(y − ⌊w⌋j) − nj − l(i,j)   (27)

To obtain a perfect sampling, the set of projected coordinates (X'',Y'') defined by the various lens coordinates (i,j) must take all possible integer values whatever n, and the number of contiguous lenses needed to obtain the perfect sampling must be minimum and equal to N (considering one dimension). This constraint can be reformulated by taking into consideration modular arithmetic modulo N:

X'' ≡ N(x − ⌊w⌋i) − ni − k(i,j) ≡ −ni − k(i,j)   (mod N)
Y'' ≡ N(y − ⌊w⌋j) − nj − l(i,j) ≡ −nj − l(i,j)   (mod N)   (28)

k(i,j) and l(i,j) are 2 periodic functions, one period being defined with (i,j) ∈ [0,N[², into [0,N[. One searches k(i,j) and l(i,j) such that, for any given n ∈ [0,N[, the set of projected coordinates (X'' mod N, Y'' mod N) defined by (i mod N, j mod N) ∈ [0,N[² is equal to ⋃_{a=0}^{N−1} ⋃_{b=0}^{N−1} δ_{a,b}, where δ_{a,b} is the Dirac function located at (a,b), (a,b) being integer numbers.
Theoretical solution

To solve equation (28) the following linear solutions are considered:

k(i,j) ≡ Ai + Bj + K   (mod N)
l(i,j) ≡ Ci + Ej + L   (mod N)   (29)

Equation (28) becomes:

X'' ≡ −ni − Ai − Bj − K   (mod N)
Y'' ≡ −nj − Ci − Ej − L   (mod N)   (30)

From the second member of the previous equation, one derives the value of Ci, which also appears in the first member after multiplying it by C:

C·X'' ≡ −Cni − CAi − CBj − CK   (mod N)
Ci ≡ −Y'' − nj − Ej − L   (mod N)   (31)

Replacing the second member into the first member, (31) becomes:

C·X'' ≡ j(n² + n(A + E) + AE − BC) + (Y'' + L)(n + A) − CK   (mod N)   (32)

The set of projected coordinates located at (X'' mod N, Y'' mod N) must cover all coordinates ⋃_{a=0}^{N−1} ⋃_{b=0}^{N−1} δ_{a,b} for (i mod N, j mod N) ∈ [0,N[², whatever n ∈ [0,N[. With equation (32) one deduces that X'' (mod N) must take any possible value whatever Y'' (mod N). Thus the second-order polynomial function m(n) = n² + n(A + E) + AE − BC must verify:

gcd(m(n) mod N, N) = 1   ∀ n ∈ [0,N[
gcd(n² + n(A + E) + AE − BC (mod N), N) = 1   ∀ n ∈ [0,N[   (33)

The N×N micro-lenses define a sub-set of micro-lenses. For a given sub-set the values A,B,C,E,K,L are freely selected according to the previous equation. The parameters A,B,C,E,K,L may take different values in the different sub-sets.
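The gcd condition of equation (33) and the full-coverage requirement of equation (28) can be cross-checked against each other by brute force. The following sketch (helper names ours) tests a candidate set of parameters (A,B,C,E,K,L) for a given N:

```python
from math import gcd

def verifies_33(A, B, C, E, N):
    """Check gcd(m(n) mod N, N) == 1 for all n in [0, N[  (eq. 33)."""
    return all(gcd((n * n + n * (A + E) + A * E - B * C) % N, N) == 1
               for n in range(N))

def full_coverage(A, B, C, E, K, L, N):
    """Brute-force check of eq. (28): for every n, the N x N lens sub-set
    must reach every residue pair (X'' mod N, Y'' mod N) exactly once."""
    for n in range(N):
        seen = {((-n * i - (A * i + B * j + K)) % N,
                 (-n * j - (C * i + E * j + L)) % N)
                for i in range(N) for j in range(N)}
        if len(seen) != N * N:
            return False
    return True
```

A candidate satisfying (33) also passes the coverage test, since the determinant of the linear map (i,j) → (X'',Y'') is exactly m(n), which is invertible modulo N when coprime to N.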
Parameters K, L define which micro-lens, if any, from a given subset is not displaced with respect to the regular lattice.
Experimental solution

Many values A,B,C,E verify equation (33). The special case A = 0, B = T, C = 1, E = 1, K = 0 and L = 0 is detailed in this section. The proposed solution has the following form:

k(i,j) ≡ Tj   (mod N)
l(i,j) ≡ i + j   (mod N)   (34)

T is a free parameter which has been experimentally determined for various values of N. The experimentation consists in testing various values of T ∈ [0,N[ such that the constraint gcd(m(n) mod N, N) = 1 is respected for any n ∈ [0,N[. The following table indicates the smallest value of T according to that constraint:

N  T    N   T    N   T    N   T    N   T
1  0    6   1    11  3    16  4    21  1
2  1    7   1    12  3    17  1    22  3
3  1    8   1    13  1    18  1    23  1
4  1    9   1    14  1    19  3    24  1
5  3    10  3    15  1    20  3    25  3

It follows that the periodic functions k(i,j) and l(i,j) are fully characterized, and thus the shifts (Δ_x(i,j), Δ_y(i,j)) of the micro-lens images versus the regular grid are also fully characterized. The shifts are given in units of the distance between micro-lens image centres. To convert the shifts into physical units at the micro-lens side, the physical shifts (δ_x(i,j), δ_y(i,j)) are computed easily by combining equations (8) and (26):

δ_x(i,j) = (fδ/(dN))·k(i,j)    δ_y(i,j) = (fδ/(dN))·l(i,j)   (35)

The physical shifts can be decomposed into the increment τ = fδ/(dN), which is multiplied by the integer values given by k(i,j) and l(i,j) to obtain the physical shifts. The increment τ is independent of the characteristics of the main lens.
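The search for T can be reproduced as below (a hypothetical helper; with m(n) = n² + n − T, the special case A=0, B=T, C=1, E=1 of equation (33)). Note that for some composite N several T satisfy the constraint, and the experimentally retained value may then differ from the smallest admissible one:

```python
from math import gcd

def smallest_T(N):
    """Smallest T in [0, N[ such that gcd((n*n + n - T) mod N, N) == 1
    for all n in [0, N[ -- the special case A=0, B=T, C=1, E=1 of eq. (33)."""
    for T in range(N):
        if all(gcd((n * n + n - T) % N, N) == 1 for n in range(N)):
            return T
    return None  # no admissible T found
```

For N prime this amounts to requiring that n² + n − T has no root modulo N, which is why T = 1 works for small primes but N = 5, 11 and 19 need T = 3.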
Thus the main lens can be replaced by any optical system which delivers focused images located perpendicularly to the main optical axis at location z' (as illustrated in Figure 3). This invention applies to many light-field cameras such as: an array of cameras (as illustrated in Figure 2); a plenoptic camera (as illustrated in Figure 1); a plenoptic camera where the main lens is a zoom which delivers zoomed images in focus at location z'.
The design of the micro-lens array is therefore defined by: * The focal distance f of the micro-lenses.
* The average pitch φ between consecutive lenses.
* The distance d between the micro-lens array and the sensor.
* The pixel size δ of the sensor.
* The super-resolution factor N which is freely selected in [1, r].
* The micro-lens centres (μ_i, μ_j) are located following the equation:

μ_i = iφ + (fδ/(dN))·k(i,j)    μ_j = jφ + (fδ/(dN))·l(i,j)   (36)

It should be recalled that the functions k(i,j) and l(i,j) are defined modulo N: thus the centres (μ_i, μ_j) are valid, as are (μ_i + aNτ, μ_j + bNτ) whatever the integers a and b. Consequently the displacements can be negative.
The micro-lens array is designed according to the previous settings. If the size of the micro-lenses is equal to the pitch, then the micro-lenses might have very small overlaps due to the displacement of the micro-lenses versus the squared lattice. This issue is solved by designing micro-lenses such that the micro-lens size is smaller than φ − fδ(N−1)/(dN). The shape of the micro-lenses can be circular, rectangular or any shape without any modification of the previous equations. The number of micro-lenses (I,J) to be designed in the micro-lens array is defined such that (Iφ, Jφ) is equal to the physical size of the sensor. The micro-lens array being designed, it is located at distance d from the sensor. It is interesting to note that the above demonstration remains valid whatever the coordinates of the first lens are and whatever the angular position between the micro-lens array and the lattice of pixels is.
Micro-lens array design

An imaging device including the above proposed arrangement as well as a micro-lens array will now be described. The following values are chosen for the different parameters:

Symbol    Value           Comment
F         70mm            Main lens focal distance
f         2mm             Micro-lens focal distance
d         2.3mm           Distance between the micro-lens array and the sensor
φ         1mm             Micro-lens pitch
δ         0.004mm/pixel   Physical size of a pixel of the sensor
z_o       5000mm          Object located at 5 meters from the main lens
z'        70.994mm        Distance between the main lens and the focus plane of object z_o
D         86.327mm        Distance between the main lens and the micro-lens array such that images on the sensor are in focus
D − z'    15.33mm         Distance between the focus plane of the object z_o observed through the main lens and the micro-lens array
e         1.0266          Enlargement
P         1.0266mm        Pitch in physical units of the micro-lens images projected on the sensor
p         256.65pixel     Pitch in pixel units of the micro-lens images projected on the sensor
W_{z_o}   1.15mm          Disparity in physical units observed on the sensor of the object located at distance z_o from the main lens
w_{z_o}   287.5pixel      Disparity in pixel units observed on the sensor of the object located at distance z_o from the main lens
r         8               Average number of replications for an object located at distance z_o from the main lens
Figure 8 illustrates a case where the super-resolution factor is chosen to be N=2, which also corresponds to the size of the N by N sub-set. In this case the increment τ = fδ/(dN) is equal to τ = 1.74µm. Figure 8 shows the displacements of the micro-lenses versus a regular lattice. The regular lattice is defined by the equidistant dashed lines 0,1,2,3 extending in both directions i and j. The directions i and j are preferably perpendicular. In a conventional micro-lens array the centres of the micro-lenses are located at the intersections of the lines defining the regular lattice to form a regular grid of equidistant micro-lenses. According to an embodiment of the present invention the micro-lenses are arranged in the following way on the array 11, in accordance with formula (29). Blocks of N by N micro-lenses (N being the super-resolution factor, here having the value 2), indicated by the bold dashed squares, form subsets of micro-lenses 200, 220, 240, 202, 222, 242... The blocks or micro-lens sub-sets are replicated in the i and j directions such that the micro-lens subsets are adjacently disposed in the two directions: subsets 200,220,240... and subsets 202,222,242... are adjacently disposed in direction i, while subsets 200,202... and subsets 220,222... are adjacently disposed in direction j. The micro-lens array is therefore formed of a plurality of micro-lens sub-sets disposed in a tiling form. In the present example the micro-lenses of each sub-set are all identically displaced, in view of simplifying calculation of a reconstructed image. However, different arrangements can be given to the micro-lens sub-sets over the micro-lens array. This can be done for example by choosing different values for A,B,C,E in different micro-lens sub-sets.
The bold arrows indicate the displacement as a shift vector of the micro-lens centres.
The amount of displacement is given by a multiple of a fixed increment in accordance with formula (29) and the table given below. The arrows displayed in the figure have been artificially enlarged for illustration purposes.
A plurality of micro-lenses are shifted with respect to the regular lattice: in the illustration micro-lenses C10, C30, C12, C32... are shifted by τ in the direction j. Micro-lenses C11, C31... are shifted by τ in the direction i. Micro-lenses C01, C21... are shifted by τ in the directions i and j. It follows that the micro-lenses are set out of regular alignment in a particular way that reduces the superposition or the clustering of pixels in a reconstructed image. Preferably each micro-lens in each micro-lens sub-set is displaced by a different shift vector to increase the resolution of a reconstructed image. Each shift vector has a shift magnitude and a direction.
Optionally at least one micro-lens in each subset is not displaced with respect to the regular lattice: in the illustration the centres of micro-lenses C00, C20, C02, C22..., belonging respectively to subsets 200,220,202,222..., are located on the intersections of the regular lattice, here at the top left corner of each subset (K = L = 0). In practice, the micro-lens which is not displaced can be any of the lenses of a sub-set. The position of a micro-lens which is not displaced can vary from one sub-set to another. Preferably one micro-lens is not displaced in each of the sub-sets and the other micro-lenses of the sub-set have relative displacements with respect to the un-displaced micro-lens. The displaced micro-lenses in each sub-set have displacements which are defined relative to an un-displaced micro-lens.
As described above, the displacements are determined such that the superposition or the clustering of pixels in a reconstructed image is decreased (see also figures 10 and 11).
The values k(i,j), l(i,j) and (μ_i, μ_j) for the first sub-set of 2x2 micro-lenses illustrated in figure 8 are given in the following table:

i  j  k(i,j)  l(i,j)  μ_i    μ_j
0  0  0       0       0      0
1  0  0       1       φ      τ
0  1  1       1       τ      φ+τ
1  1  1       0       φ+τ    φ

The micro-lens array illustrated in Figures 8 and 9 is made unitarily of glass or synthetic glass. Possible processes for forming the micro-lenses on a glass plate include lithography and/or etching and/or melting techniques.
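The table above can be regenerated from equation (36) together with the design values of the previous section. The sketch below (helper name ours) also reproduces the increments τ = 1.74µm for N=2 and τ = 1.16µm for N=3:

```python
def lens_centres(N, T, pitch, f, d, delta):
    """Centres (mu_i, mu_j) of one N x N sub-set, eq. (36), using the
    experimental solution k(i,j) = T*j mod N, l(i,j) = (i + j) mod N."""
    tau = f * delta / (d * N)            # physical increment, eq. (35)
    table = {}
    for i in range(N):
        for j in range(N):
            k = (T * j) % N
            l = (i + j) % N
            table[(i, j)] = (i * pitch + tau * k, j * pitch + tau * l)
    return tau, table
```

Calling `lens_centres(2, 1, 1.0, 2.0, 2.3, 0.004)` with the design values f = 2mm, d = 2.3mm, δ = 0.004mm and φ = 1mm yields the four centres (0,0), (φ,τ), (τ,φ+τ) and (φ+τ,φ) of the table.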
Figure 9 is similar to figure 8 but illustrates a case with a super-resolution factor of N=3. In this case the increment is τ = 1.16µm. The micro-lenses are arranged in a similar way as in figure 8 except that the subsets of micro-lenses are composed of 3 by 3 micro-lenses. The subsets 300,330,303,333 of micro-lenses are adjacently disposed in the two directions i and j. The displacement of the micro-lenses versus the regular lattice is similar to figure 8.
A plurality of micro-lenses are shifted with respect to the regular lattice: micro-lenses C10... are shifted by τ in the direction j. Micro-lenses C01, C31... are shifted by τ in the directions i and j. Micro-lenses C20... are shifted by 2τ in the direction j. Micro-lenses C12... are shifted by 2τ in the direction i. Micro-lenses C02, C32... are shifted by 2τ in the directions i and j. Micro-lenses C11... are shifted by τ in direction i and by 2τ in direction j. Micro-lenses C22... are shifted by τ in direction j and by 2τ in direction i. Optionally at least one micro-lens in a sub-set is not displaced with respect to the regular lattice: in the illustration the centres of micro-lenses C00, C30..., belonging respectively to subsets 300,330..., are located on the intersections of the regular lattice, preferably at the top left corner of each subset (K = L = 0). In practice, the micro-lens which is not displaced can be any of the lenses of a sub-set. The position of the micro-lens which is not displaced can vary from one sub-set to another. Preferably one micro-lens is not displaced in each of the sub-sets and the other micro-lenses of the sub-set have relative displacements with respect to the un-displaced micro-lens. The displaced micro-lenses in each sub-set have displacements which are defined relative to the un-displaced micro-lens.
As described above, the displacements are determined such that the superposition or the clustering of pixels in a reconstructed image is reduced (see also figure 11).
The values k(i,j), l(i,j) and (μ_i, μ_j) for the first sub-set of 3x3 micro-lenses illustrated in figure 9 are given in the following table:

i  j  k(i,j)  l(i,j)  μ_i      μ_j
0  0  0       0       0        0
1  0  0       1       φ        τ
2  0  0       2       2φ       2τ
0  1  1       1       τ        φ+τ
1  1  1       2       φ+τ      φ+2τ
2  1  1       0       2φ+τ     φ
0  2  2       2       2τ       2φ+2τ
1  2  2       0       φ+2τ     2φ
2  2  2       1       2φ+2τ    2φ+τ

It is important to note that the relative positions of the micro-lenses in each N by N sub-set (N being the number of micro-lenses in each direction i,j) are defined modulo N. Thus all displacements modulo N will also be solutions. This means that if a given displacement (μ_i, μ_j) is a solution, then the displacement (μ_i + aNτ, μ_j + bNτ), with a and b integers, is also a solution.
Resolution of the projected image The resolution of the projected image can be estimated by computing its maximum sampling step H as for the conventional light-field camera made of a lens array arranged following a square lattice as presented in figure 7.
The projected coordinates (X',Y'), obtained with the proposed micro-lens array with a super-resolution factor N, define a set of points in the 2D projected/reconstructed image. This set of points is characterized by the maximum sampling step H'. The values of H' have a simple expression for projected coordinates obtained with a disparity having the form {w} = n/(NM), with n and M being positive integers such that 0 ≤ n < M ≤ ⌊r/N⌋. The maximum sampling step is equal to H' = u·gcd(n,M)/(NM). The largest value of H' is u/N; one recalls that the largest value obtained for the conventional square-lattice micro-lens array is equal to u.
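The H' expression can be written as a one-line sketch (function name ours); note that n = 0 gives gcd(0,M) = M and hence the largest value u/N:

```python
from math import gcd

def max_sampling_step(u, n, M, N):
    """H' = u * gcd(n, M) / (N * M) for {w} = n/(N*M) with the shifted array."""
    return u * gcd(n, M) / (N * M)
```

Sweeping n for a fixed M shows the flattened curve of Figures 10 and 11: H' stays between u·gcd(n,M)/(NM) and u/N instead of swinging up to u.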
Figure 10 illustrates the normalized H'/u values with the super-resolution factor N=2 as a function of the fractional part of the disparity {w}. The corresponding characteristic parameters of the light-field camera are the ones given above. The dashed line recalls the normalized H/u values obtained with a conventional light-field camera equipped with a regular square-lattice micro-lens array.
Similarly, Figure 11 illustrates the normalized H'/u values with the super-resolution factor N=3.
One can observe in figures 10 and 11 that the resolution of the reconstructed image varies less than that of a conventional camera with a regular square lattice (dashed lines). Therefore, a more regular resolution is obtained with the proposed micro-lens array. The regularity of the resolution increases with the value of the super-resolution factor N. It may be noted in these figures that the sampling step between the projected pixel coordinates is nearly constant with respect to the disparity value w. The sampling-step curve versus disparity is flattened when compared to the curve obtained with a conventional light-field camera (dashed lines). Therefore the worst cases, corresponding to {w} values of 0, 0.5 and 1, have been eliminated and the sampling step becomes nearly constant for any fractional part of the disparity. The sampling step is substantially constant and maximum for disparities having fractional parts 0, 1/3, 2/3 and 1 in Figure 11.
The present invention also applies to light-field cameras made of an array of lenses and one sensor, as illustrated in figure 2. The array of lenses is designed with equation (36).
It is interesting to note that the maximum sampling step H' is independent of the rotation angle θ between the pixel lattice and the micro-lens array. Therefore, the rotation angle θ has no impact on the resolution of the re-focused image.
Image re-focusing with the displaced micro-lenses

The computation of a re-focused image from 4D light-field data obtained with the shifted micro-lenses is given by equation (23). This re-focus equation is simplified considering the approximation given by equation (26) and becomes:

X = ux − vi − K(i,j)
Y = uy − vj − L(i,j)   (37)

where K(i,j) = u·k(i,j)/N and L(i,j) = u·l(i,j)/N.

One recalls that N is a given integer value which controls the shifted micro-lens design; N is chosen by the user such that N ≤ r. The array with displaced micro-lenses being designed, the parameter N becomes a fixed constant.
In this particular embodiment we consider the re-focused images computed for disparities equal to w_{c,n} = c + n/N, where c is an integer and n is an integer such that 0 ≤ n < N. For this given set of disparities, the re-focused images which are computed without interpolation and with a complete sampling are obtained with u = N. Equation (37) simplifies into:

X = Nx − vi − k(i,j)
Y = Ny − vj − l(i,j)

The value of u is independent of n, compared to equation (16), thanks to the shifted micro-lenses. Equivalently the parameter s, which relates to the size of the re-focused image, is equal to s = N(p − w)/p. The size of the re-focused image is given by equation (22). Thanks to the shifted micro-lenses, the size of a re-focused image regularly decreases while the disparity of the re-focused image increases. On the contrary, with a regular lattice of micro-lenses, the size of the re-focused image evolves erratically (according to gcd(n,N)) while the disparity of the re-focused image increases. In other words, the resolution of the re-focused images computed without interpolation is almost constant with shifted micro-lenses.
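With u = N the simplified mapping of equation (37) can be sketched as follows, using the experimental displacement functions k(i,j) = T·j mod N and l(i,j) = (i+j) mod N (the helper is ours; v = uw is an integer for the disparities considered here):

```python
def project_shifted(x, y, i, j, N, v, T=1):
    """Integer projected coordinates for u = N with shifted micro-lenses."""
    k = (T * j) % N
    l = (i + j) % N
    return N * x - v * i - k, N * y - v * j - l
```

For N=2 and w=4.5 (v=9), pixel (1,0,0,0) projects to (2,0) and pixel (12,1,2,0) to (6,2), in line with the Figure 7 example: lenses (0,0) and (2,0) happen to be un-displaced here (k = l = 0), so the mapping matches the regular-lattice case.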
The advantage of this particular embodiment is to deliver a set of re-focused images with an almost constant resolution (u being constant and equal to N) for any disparity equal to w_{c,n} and computed without interpolation, whereas with a regular lattice, the set of re-focused images computed with no interpolation undergoes strong variations of resolution (since u is not constant, as defined in equation (16)).
It will be understood that the present invention has been described above purely by way of example, and modification of detail can be made within the scope of the invention. Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.

Claims (27)

CLAIMS

1. A method of synthesizing an image from a light field image of a scene comprising a plurality of micro-images of the scene, the micro-images being obtained on a photo-sensor having an array of pixels through an array of micro-lenses, each micro-lens producing an image of the scene on an associated region of the photo-sensor forming a micro-image, each pixel having pixel coordinates (x,y) relative to the photo-sensor, and each micro-lens having micro-lens coordinates (i,j) relative to the micro-lens array, wherein the method comprises: defining projected pixel coordinates (X,Y) of the synthesized image, and defining a projection mapping according to the equation:

X = ux − vi + K(i,j)
Y = uy − vj + L(i,j)

where u is an enlarging value and −v is a shift value for enlarging and shifting the micro-images, K(i,j) and L(i,j) being integer values dependent upon the micro-lens coordinates and the micro-lens arrangement, the method further comprising selecting an enlarging value u and a shift value v such that the projected pixel coordinates (X,Y) are integer values, and projecting pixel values from the photo-sensor into projected pixel coordinates (X,Y) according to said projection mapping to generate said synthesized image.
  2. The method of claim 1, wherein the enlarging value u and the shift value v are selected as integers.
  3. The method of claim 1, wherein the enlarging is realized by pixel up-sampling with zero insertion.
  4. The method of claim 1, comprising, before the projecting step, a step of identifying a set of possible synthesized images which can be generated.
  5. The method of claim 4, wherein the identifying step consists in determining a set of disparities w_{c,N} such that w_{c,N} = c + n/N, where c is a given integer, for any integer values n and N such that 0 < n < N ≤ r, r being the number of replications of an imaged object, and such that v = u·w_{c,N}.
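The disparity set of claim 5 can be enumerated exactly with rational arithmetic; pairing each disparity with the enlarging value u = N/gcd(n,N) of claim 10 makes the shift v = u·w come out integral. The tuple layout below is our own illustrative choice:

```python
from fractions import Fraction
from math import gcd

def disparity_set(c, r):
    """Enumerate (w, u, v) for w = c + n/N with 0 < n < N <= r."""
    out = []
    for N in range(2, r + 1):
        for n in range(1, N):
            w = c + Fraction(n, N)     # exact disparity
            u = N // gcd(n, N)         # enlarging value (claim 10)
            v = u * w                  # shift value; always an integer here
            out.append((w, u, int(v)))
    return out
```

Integrality follows because gcd(n, N) divides both n and N, hence divides u·w·gcd(n,N) = cN + n.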
  6. The method of claim 4 or 5, wherein the projecting step is implemented for a plurality of the synthesized images of the identified set.
  7. The method of claim 1, comprising a step of determining the total number of synthesized images which can be generated.
  8. The method of any of the preceding claims, wherein the array of micro-lenses has a regular arrangement such that K(i,j) and L(i,j) take a zero value.
  9. The method of claim 8, wherein the enlarging value u is selected such that a maximum of projected pixel coordinates (X,Y) receive at least one pixel projection.
  10. The method of claim 8 or 9, wherein the enlarging value u is selected such that u = N/gcd(n,N), where u is an integer value, for any integer values n and N such that 0 < n < N ≤ r, r being the number of replications of an imaged object.
  11. The method of any of the preceding claims, wherein the micro-lenses are arranged with displacements relative to a regular lattice, the displacements being defined by the integer values K(i,j) and L(i,j).
  12. The method of claim 11, wherein the micro-lens array comprises a plurality of micro-lens sub-sets, each sub-set comprising a two-dimensional array of N by N micro-lenses, the method further consisting in selecting the enlarging value u to be equal to N.
  13. An imaging device for obtaining a synthesized image from a light field image of a scene, the imaging device comprising an array of micro-lenses and a photo-sensor having an array of pixels, wherein each micro-lens produces an image of the scene onto a corresponding region of the photo-sensor forming a micro-image, each pixel being located on the photo-sensor by pixel coordinates (x,y), each micro-lens being located relative to the array of micro-lenses by micro-lens coordinates (i,j), the imaging device further comprising a processor coupled to the photo-sensor, the processor being configured to: define projected pixel coordinates (X,Y) of the synthesized image, define a projection mapping according to the equations: X = ux - vi + K(i,j) and Y = uy - vj + L(i,j), where u is an enlarging value and v is a shift value for enlarging and shifting the micro-images, K(i,j) and L(i,j) being integer values dependent upon the micro-lens coordinates and the micro-lens arrangement, select the enlarging value u and the shift value v such that the projected pixel coordinates (X,Y) are integer values, and project pixel values obtained from the photo-sensor into projected pixel coordinates (X,Y) according to said projection mapping to generate said synthesized image.
  14. The imaging device of claim 13, wherein the processor is further configured to select the enlarging value u and the shift value v as integers.
  15. The imaging device of claim 13, wherein the processor is further configured to realize the enlarging by pixel up-sampling with zero insertion.
  16. The imaging device of claim 13, wherein the processor is further configured to identify, before the projecting step, a set of synthesized images which can be generated.
  17. The imaging device of claim 16, wherein the processor is further configured to determine, during the identifying step, a set of disparities w_{c,N} such that w_{c,N} = c + n/N, where c is a given integer, for any integer values n and N such that 0 < n < N ≤ r, r being the number of replications of an imaged object.
  18. The imaging device of claim 16 or 17, wherein the processor is further configured to implement the projecting step for a plurality of synthesized images of the identified set.
  19. The imaging device of claim 14, wherein the processor is further configured to determine the total number of synthesized images which can be generated.
  20. The imaging device of any of the preceding claims, wherein the array of micro-lenses has a regular arrangement such that K(i,j) and L(i,j) take the zero value.
  21. The imaging device of claim 20, wherein the processor is further configured to select the enlarging value u such that a maximum of projected pixel coordinates (X,Y) receive at least one pixel projection.
  22. The imaging device of claim 20 or 21, wherein the processor is further configured to select the enlarging value u such that u = N/gcd(n,N), where u is an integer value, for any integer values n and N such that 0 < n < N ≤ r, r being the number of replications of an imaged object.
  23. The imaging device of any one of claims 13 to 22, wherein the micro-lenses are arranged with displacements relative to a regular lattice, the displacements being defined by the integer values K(i,j) and L(i,j).
  24. The imaging device of claim 23, wherein the micro-lens array comprises a plurality of micro-lens sub-sets, each sub-set comprising a two-dimensional array of N by N micro-lenses.
  25. The imaging device of claim 24, wherein the enlarging value u is equal to N.
  26. The imaging device of claim 24 or 25, wherein the micro-lenses of each sub-set are displaced relative to the regular lattice according to a common pattern, the common pattern defining different displacements for each micro-lens of the sub-sets.
  27. The imaging device of claim 23, wherein: the micro-lens array comprises a plurality of micro-lens sub-sets, each sub-set comprising an array of N by N micro-lenses, each micro-lens of the sub-set having a focal distance f, wherein the micro-lenses of each sub-set are displaced relative to the regular lattice according to a displacement pattern, said displacement pattern defining the displacement of each micro-lens as integer multiples of unit vectors, said unit vectors having a magnitude r, wherein the magnitude r is a function of f/N.
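A claim-27-style layout can be sketched by generating micro-lens centres. The specific pattern function and the choice r = f/N below are illustrative assumptions of ours; the patent only requires integer multiples of a unit vector whose magnitude is some function of f/N:

```python
def lens_centres(rows, cols, pitch, N, f):
    """Centres of a micro-lens array whose N-by-N sub-sets are displaced
    from the regular lattice by integer multiples of a unit vector."""
    r = f / N                          # assumed magnitude: a function of f/N
    centres = []
    for j in range(rows):
        for i in range(cols):
            k = (i % N + j % N) % N    # hypothetical integer multiple per lens
            centres.append((i * pitch + k * r, j * pitch + k * r))
    return centres
```

Any pattern that assigns each of the N-by-N positions a distinct integer multiple would serve the same purpose: making u = N sufficient for interpolation-free re-focusing at every disparity in the set w_{c,N}.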
GB201216108A 2012-05-11 2012-09-10 Efficient image synthesizing for light field camera Active GB2501950B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1208317.6A GB2501936B (en) 2012-05-11 2012-05-11 Micro lens array and imaging apparatus

Publications (3)

Publication Number Publication Date
GB201216108D0 GB201216108D0 (en) 2012-10-24
GB2501950A true GB2501950A (en) 2013-11-13
GB2501950B GB2501950B (en) 2015-04-29

Family

ID=46458694

Family Applications (2)

Application Number Title Priority Date Filing Date
GB1208317.6A Active GB2501936B (en) 2012-05-11 2012-05-11 Micro lens array and imaging apparatus
GB201216108A Active GB2501950B (en) 2012-05-11 2012-09-10 Efficient image synthesizing for light field camera

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB1208317.6A Active GB2501936B (en) 2012-05-11 2012-05-11 Micro lens array and imaging apparatus

Country Status (2)

Country Link
GB (2) GB2501936B (en)
WO (1) WO2013167758A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10055856B2 (en) 2016-03-14 2018-08-21 Thomson Licensing Method and device for processing lightfield data
US10362291B2 (en) 2015-06-08 2019-07-23 Interdigital Ce Patent Holdings Light field imaging device

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015169804A (en) * 2014-03-07 2015-09-28 株式会社リコー Lens array, image display device, and moving body
KR102228456B1 (en) * 2014-03-13 2021-03-16 삼성전자주식회사 Image pickup apparatus and image pickup method of generating image having depth information
WO2015137635A1 (en) * 2014-03-13 2015-09-17 Samsung Electronics Co., Ltd. Image pickup apparatus and method for generating image having depth information
EP3023826A1 (en) * 2014-11-20 2016-05-25 Thomson Licensing Light field imaging device
JP6747769B2 (en) * 2014-12-15 2020-08-26 デクセリアルズ株式会社 Optical element, display device, master, and method for manufacturing optical element
EP3094076A1 (en) 2015-05-13 2016-11-16 Thomson Licensing Method for obtaining a refocused image from a 4D raw light field data using a shift correction parameter
EP3098778A1 (en) 2015-05-29 2016-11-30 Thomson Licensing Method for obtaining a refocused image from 4d raw light field data
FR3040798B1 (en) 2015-09-08 2018-02-09 Safran PLENOPTIC CAMERA
EP3166073A1 (en) 2015-11-06 2017-05-10 Thomson Licensing Method for obtaining a refocused image from 4d raw light field data
WO2018024490A1 (en) * 2016-08-05 2018-02-08 Thomson Licensing A method for obtaining at least one sub-aperture image being associated with one view
WO2018103819A1 (en) * 2016-12-05 2018-06-14 Photonic Sensors & Algorithms, S.L. Microlens array
JP6814832B2 (en) * 2019-02-26 2021-01-20 デクセリアルズ株式会社 Manufacturing method of optical element, display device, master, and optical element
USD936892S1 (en) 2020-11-13 2021-11-23 Hgci, Inc. Lens cover for light fixture for indoor grow application
CN115086550B (en) * 2022-05-30 2023-04-28 元潼(北京)技术有限公司 Meta imaging system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6023523A (en) * 1996-02-16 2000-02-08 Microsoft Corporation Method and system for digital plenoptic imaging
WO2011065738A2 (en) * 2009-11-27 2011-06-03 Samsung Electronics Co., Ltd. Image processing apparatus and method
GB2488905A (en) * 2011-03-10 2012-09-12 Canon Kk Image pickup apparatus, such as plenoptic camera, utilizing lens array

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5662401A (en) * 1995-12-13 1997-09-02 Philips Electronics North America Corporation Integrating lens array and image forming method for improved optical efficiency
AU2002258402A1 (en) * 2001-02-07 2002-10-08 Corning Incorporated High-contrast screen with random microlens array
JP4845290B2 (en) * 2001-06-20 2011-12-28 キヤノン株式会社 Micro lens array, optical equipment and optical viewfinder
JP2004361750A (en) * 2003-06-05 2004-12-24 Seiko Epson Corp Colored layer forming method, microlens substrate, transparent screen, and rear-type projector
JP3731593B2 (en) * 2003-09-08 2006-01-05 セイコーエプソン株式会社 Method for manufacturing transmissive screen member, transmissive screen member, transmissive screen, and rear projector
US7476562B2 (en) * 2003-10-09 2009-01-13 Aptina Imaging Corporation Gapless microlens array and method of fabrication
JP2006227244A (en) * 2005-02-17 2006-08-31 Seiko Epson Corp Display device and projector using the same
US7092166B1 (en) * 2005-04-25 2006-08-15 Bright View Technologies, Inc. Microlens sheets having multiple interspersed anamorphic microlens arrays
JP4638815B2 (en) * 2005-12-22 2011-02-23 嶋田プレシジョン株式会社 Light guide plate having light lens array, light irradiation device, and liquid crystal display device
US20100265385A1 (en) * 2009-04-18 2010-10-21 Knight Timothy J Light Field Camera Image, File and Configuration Data, and Methods of Using, Storing and Communicating Same
US8680451B2 (en) * 2007-10-02 2014-03-25 Nikon Corporation Light receiving device, focus detection device and imaging device
JP5463718B2 (en) 2009-04-16 2014-04-09 ソニー株式会社 Imaging device
JP5515396B2 (en) * 2009-05-08 2014-06-11 ソニー株式会社 Imaging device
WO2012008209A1 (en) * 2010-07-12 2012-01-19 富士フイルム株式会社 Solid-state image pickup device

Also Published As

Publication number Publication date
GB2501950B (en) 2015-04-29
GB2501936B (en) 2016-11-30
WO2013167758A1 (en) 2013-11-14
GB2501936A (en) 2013-11-13
GB201208317D0 (en) 2012-06-27
GB201216108D0 (en) 2012-10-24

Similar Documents

Publication Publication Date Title
GB2501950A (en) Synthesizing an image from a light-field image using pixel mapping
JP4171786B2 (en) Image input device
US8749620B1 (en) 3D light field cameras, images and files, and methods of using, operating, processing and viewing same
US9204067B2 (en) Image sensor and image capturing apparatus
US8619177B2 (en) Digital imaging system, plenoptic optical device and image data processing method
US8416284B2 (en) Stereoscopic image capturing apparatus and stereoscopic image capturing system
US9581787B2 (en) Method of using a light-field camera to generate a three-dimensional image, and light field camera implementing the method
CN103945115A (en) Photographing device and photographing method for taking picture by using a plurality of microlenses
CN103533227A (en) Image pickup apparatus and lens apparatus
JP5218611B2 (en) Image composition method and imaging apparatus
US8537256B2 (en) Image pickup device and solid-state image pickup element
CN102111544A (en) Camera module, image processing apparatus, and image recording method
EP3035285A1 (en) Method and apparatus for generating an adapted slice image from a focal stack
US20130193106A1 (en) Diffraction-type 3d display element and method for fabricating the same
CN111201782B (en) Imaging device, image processing apparatus, image processing method, and recording medium
GB2521429A (en) Visual Servoing
GB2505954A (en) Micro lens array with displaced micro-lenses suitable for a light-field colour camera
CN107564068B (en) Calibration method for aperture coding super-resolution optical transfer function
JP5747679B2 (en) Presentation method of 3D image
CN115086550B (en) Meta imaging system
EP3700187B1 (en) Signal processing device and imaging device
JP4377656B2 (en) Integral photography apparatus and integral photography display apparatus
CN212460211U (en) 3D display device based on compound rectangle many pinholes array
GB2540922B (en) Full resolution plenoptic imaging
US20080204547A1 (en) Process and system used to discover and exploit the illusory depth properties inherent in an autostereoscopic image