AUTOSTEREOSCOPIC DISPLAY
This invention relates to imaging arrangements.
The invention is particularly concerned with three-dimensional images, or, rather, data representing three-dimensional images, and with manipulating such data to yield viewable images.
Imaging usually involves optical reflection or refraction arrangements. Three-dimensional imaging involves multiple imaging: in multi-view imaging, two or more images are made from different viewpoints; in integral imaging, information is encoded using an encoding screen for viewing through a decoding screen. As compared to multi-view imaging, integral imaging can be used to create a true orthoscopic three-dimensional image, which gives real perspective when viewed from a continuum of viewing positions within a viewing zone, without any zones of confusion or dead zones.
Optical arrangements suffer from manufacturing imperfections and aberrations, which tend to limit their usefulness; above all, they are costly to manufacture and relatively inflexible in use.
The present invention provides techniques by which limitations of optical equipment can be avoided.
The invention comprises a method for transforming data representing a three-dimensional image into an image viewable within a viewing zone, without requiring a lenticular decoding array or other optical decoding arrangement, comprising simulating, by means of a data processing arrangement, the direction-selective properties of a decoding lenticular array by sampling a lenslet-encoded spatial distribution (LeSD) at defined locations.
The LeSD may be extracted from a unidirectional integral image or from an omnidirectional integral image, and the extraction process may be carried out, respectively, according to Algorithm 1 or Algorithm 2 (see below).
The method is, of course, best carried out in data processing equipment, and may include displaying an image produced thereby.
The image displayed may be a cyclopean view, that is to say, a two-dimensional image as seen without the depth perception of binocular vision.
However, on account of the continuum of information afforded by integral imaging, as compared to multi-view imaging, images are displayable from a continuum of positions within the viewing zone which correspond to cyclopean views of the scene represented by the data from corresponding positions relative to the scene.
Two such images may then be processed to yield a volumetric optical model, from which a three-dimensional image may be generated.
The data may, of course, be derived in the first instance from actual optical imaging. In integral imaging, the data can be recorded at a capture or recording plane, on photographic film or on a CCD array, to form a pixel image; the information in that plane contains, by virtue of the integral imaging, depth information, which is representable in a three-dimensional array of intensity values.
However, such a three-dimensional array of intensity values can be created without any optical input - a synthetic image.
The use of data processing enables any such three-dimensional array of data to be manipulated, and even for two or more such arrays to be combined.
The processing can be carried out, moreover, not only to produce still images, but also to produce moving images, and the volumetric object scene can be reconstructed for three-dimensional image processing, video mixing, movement of viewer position, object depth sensing in machine vision application, and other processing tasks.
Methods for transforming data according to the invention will now be described with reference to the accompanying drawings, in which:
Figure 1 is a small section of an LeSD representing a recorded unidirectional integral image;
Figure 2 is a view of an area of a pixel array within a LeSD;
Figure 3 is a diagram showing the geometry of the displacement of a lenslet encoded spatial component;
Figure 4 is a diagram showing the geometry of a linear interpolation technique used in sampling;
Figure 5 is a diagram showing sampling of an LeSD representing an omnidirectional integral image;
Figure 6 is a diagram showing the geometry of a bilinear interpolation technique used in sampling; and
Figure 7 is a set of twenty views extracted from a unidirectional LeSD data set.
The figures illustrate methods for transforming data representing a three-dimensional image into an image viewable within a viewing zone, without requiring a lenticular decoding array or other optical decoding arrangement, comprising simulating, by means of a data processing arrangement, the direction-selective properties of a decoding lenticular array by sampling a lenslet-encoded spatial distribution (LeSD) at defined locations.
Techniques disclosed herein allow the data representing a three-dimensional integral image to be transformed into a continuum of two-dimensional planar images within the viewing zone. The planar images represent perspective views of the volumetric object space from any angle or angles within the viewing zone. Figure 7 shows a series of twenty such images; the numbers are values of a displacement s from a central (s = 0) position, and it will be seen that with increasing values of s, the viewpoint changes from right to left of the scene.
The following acronyms will be used hereafter:
II - Integral Image
OII - Omnidirectional Integral Image: an integral image with omnidirectional parallax, recorded by means of a square- or hex-based microlens array
UII - Unidirectional Integral Image: an integral image with unidirectional parallax, recorded by means of a lenticular array
VOM - Volumetric Optical Model
LeSD - Lenslet-encoded Spatial Distribution: the encoded representation of a volumetric optical model produced by a microlens or lenticular array
LeSC - Lenslet-encoded Spatial Component: a portion of an LeSD generated by an individual lenslet
Replay of an integral image makes use of a microlens or lenticular array to decode the intensity data in the LeSD of a UII or an OII respectively. The continuity of the intensity information stored within each component of the LeSD, combined with the directional selectivity of the decoding array, enables intersecting pencil beams to reconstruct the original VOM with continuous parallax within the viewing zone in the direction(s) perpendicular to the lenslets of the decoding array.
The 3D image experienced by a human viewer is the result of stereo vision making use of two horizontally displaced viewpoints to perceive the reconstructed VOM with a natural depth sensation. If a single viewpoint is considered at a certain angle from the normal to the decoding array, the scene may be termed a cyclopean view, a planar image which is a perspective projection of the VOM at that angle.
A method has been developed by the authors for extracting a cyclopean view from any angle in the defined field of view directly from the LeSD, without requiring a microlens or lenticular array. The process may be carried out efficiently in software, requiring few operations to produce a view. Broadly, it operates by simulating the direction-selective properties of a decoding microlens or lenticular array by sampling the LeSD at precisely defined locations.
Consider a section of an LeSD representing a recorded UII (Figure 1). This comprises 8 pixels per LeSC, giving, for a square section three LeSCs across, a total of (8 × 3)² = 576 pixels.
Definition 1. Let N represent the number of LeSCs recorded (and hence also the number of lenslets in the encoding lenticular array). Let I represent the number of pixels recorded per LeSC, and let us define a displacement from the geometric centre of the LeSC, s, expressed as a proportion of the LeSC travelled from the centre (Figure 2), such that s ∈ ℝ and is defined on the interval [−0.5, 0.5).
Definition 2 A sampling of a LeSC at a displacement s is the value of the intensity of the LeSC at that point. A sampling of a LeSD at a displacement s is the ensemble of intensity
values found at the displacements s within each individual LeSC comprising the LeSD. Such a LeSD sampling will contain N intensity values.
The ensemble of intensity values resulting from a LeSD sampling, when arranged in order identical to that of the LeSCs from which they were drawn, represents a perspective projection image of the VOM represented by the LeSD, which we term a cyclopean view. The angle from which the view is taken is directly related to the sampling displacement, s, as follows:
α = arcsin(η sin(arctan(s p / t)))        (1)

where η is the refractive index of the decoding lenslet array, p is the lenslet pitch in millimetres, t is the thickness of the decoding lenslet array at the central axis of a lenslet in millimetres, and s is the sampling displacement, so that the product s p is the displacement in millimetres. Conversely,

s = (t / p) tan(arcsin(sin α / η))        (2)
The geometry of the relationship is shown in Figure 3.
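By way of illustration, the mapping between sampling displacement and viewing angle can be sketched in software. The sketch below assumes the standard lenslet refraction geometry (the ray crosses the lenslet thickness inside the array, then refracts at the exit face according to Snell's law); the function and parameter names are illustrative only.

```python
import math

def displacement_to_angle(s, pitch, thickness, eta):
    """Viewing angle (radians from the normal) for a sampling displacement s.

    Inside the lenslet material, the ray from the sampled point crosses the
    thickness t at angle beta, with tan(beta) = s * pitch / thickness; on
    exit it refracts so that sin(alpha) = eta * sin(beta) (Snell's law).
    """
    beta = math.atan2(s * pitch, thickness)   # angle inside the lenslet array
    return math.asin(eta * math.sin(beta))    # refraction at the exit face

def angle_to_displacement(alpha, pitch, thickness, eta):
    """Inverse mapping: sampling displacement for a desired viewing angle."""
    beta = math.asin(math.sin(alpha) / eta)   # angle inside the array
    return thickness * math.tan(beta) / pitch
```

The two functions are mutual inverses, so a desired viewpoint angle can be converted to a displacement, sampled, and checked by converting back.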
The sampling point in a LeSC does not necessarily, and in fact is not likely to, fall in the centre of a pixel. In order to produce a continuum of cyclopean views, it must be possible to retrieve intensity values from points at arbitrary locations between pixel centres. The use of linear interpolation between pixel centres is in agreement with the direction-selective behaviour of a lenticular decoding array, and so is used to generate views where the sampling point occurs between pixel centres.
For two adjacent pixels x1 and x2 (unit spacing), with intensity values f1 and f2 respectively, the linearly interpolated value at a point x_a between them is

f_a = (f2 − f1)(x_a − x1) + f1.        (3)
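Sampling a single LeSC with this interpolation can be sketched as follows. The mapping of the displacement s to a pixel coordinate (LeSC centre at pixel (I − 1)/2, displacement scaled by the LeSC width I) is one reasonable convention rather than the definitive one, and the names are illustrative.

```python
def sample_lesc(lesc_row, s):
    """Sample one LeSC (a 1-D sequence of I pixel intensities) at a
    displacement s in [-0.5, 0.5), interpolating linearly between the
    two nearest pixel centres (equation 3)."""
    I = len(lesc_row)
    # Convention: LeSC centre at pixel (I-1)/2, displacement scaled by I.
    x_a = (I - 1) / 2 + s * I
    x_a = max(0.0, min(I - 1.0, x_a))   # stay within the recorded pixels
    x1 = min(int(x_a), I - 2)           # left neighbour of the sampling point
    f1, f2 = lesc_row[x1], lesc_row[x1 + 1]
    return (f2 - f1) * (x_a - x1) + f1  # equation 3
```

On a linear intensity ramp the interpolated value simply tracks the sampling position, which is a convenient sanity check.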
Algorithm 1 describes the overall cyclopean view extraction process for a UII. Note that the aspect ratio of the resulting view image will not be the same as that of the LeSD, since the horizontal sampling resolution is 1/I times the vertical sampling resolution. For viewing, corrective scaling of 1/I is required in the vertical.
Figure 7 shows 20 cyclopean images extracted from a computer-generated LeSD, "Standard", and a photographically recorded LeSD, "Plane", respectively.
The LeSD may also be extracted from an OII.
The procedure described above can be applied to an OII by sampling in both the horizontal and vertical dimensions, since LeSCs run in both directions. In this case the displacement on each LeSC has two components: s_x (horizontal displacement from the central axis) and s_y (vertical displacement from the central axis). The definition of the viewing position also has two components: x (horizontal) and y (vertical). Equations 1 and 2 provide the mapping between displacement and viewpoint angle for the horizontal and vertical directions independently.
The sampling geometry for a LeSD produced by hexagonal based microlens arrays is shown in Figure 5.
Note that the LeSCs are rectangular. This is due to the camera system's aperture being rectangular. The aspect ratio of a hex-based LeSC (horizontal to vertical) is

AR_hex = tan 30°.        (4)
To enable retrieval of intensity values sampled at any point (s_x, s_y) within each LeSC, bilinear interpolation is used. For four neighbouring pixels of values f1, f2, f3 and f4, taken at the corners (0, 0), (1, 0), (0, 1) and (1, 1) of a unit pixel quad, the bilinearly interpolated value at a point (x_a, y_a) within the LeSD image is

f_a = (f2 − f1) x_a + (f3 − f1) y_a + (f1 − f2 − f3 + f4) x_a y_a + f1.        (5)
The geometry of the bilinear interpolation technique is shown in Figure 6.
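The interpolation can be sketched directly in software. The corner assignment below (f1 at (0, 0), f2 at (1, 0), f3 at (0, 1), f4 at (1, 1) of a unit pixel quad) is an illustrative assumption consistent with equation 5.

```python
def bilinear(f1, f2, f3, f4, x_a, y_a):
    """Bilinearly interpolate between four neighbouring pixel values.

    f1..f4 are the intensities at the corners (0,0), (1,0), (0,1), (1,1)
    of a unit pixel quad; (x_a, y_a) in [0,1]^2 is the sampling point.
    """
    return ((f2 - f1) * x_a + (f3 - f1) * y_a
            + (f1 - f2 - f3 + f4) * x_a * y_a + f1)
```

At each of the four corners the expression reduces to the corresponding pixel value, and at the quad centre it gives the mean of the four.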
Algorithm 2 describes the cyclopean view extraction process for an OII. Note that the aspect ratio of the resulting view image will not be the same as that of the LeSD, but will be that of the camera system aperture (and of an individual LeSC). For viewing on a display with square pixels, corrective scaling of 2 tan 30° is required in the horizontal.
From the foregoing, it is apparent that:
• A method has been developed for extraction of perspective projections (views) of three-dimensional object space from the encoded representation of an integral image. These views can be extracted at arbitrary viewpoint angles from the normal, within the allowable viewing zone for the integral image.
• We demonstrate that an integral image has continuous parallax, since we can extract a unique projection at any angle within a continuum of views.
• Methods have been devised to produce planar two-dimensional and stereoscopic image data from integral images. This suggests that the omnidirectional integral imaging system is the general case of incoherent imaging systems, and hence that planar two-dimensional and stereoscopic incoherent images are subsets of the omnidirectional integral image.
• A method has been developed for providing interactivity of viewpoint (interactive look-around) within the angle of view of an integral image, whether presented natively as an integral image, as a planar two-dimensional image or as a stereoscopic image.
• A method has been developed for trans-conversion of integral image types, i.e. conversion of an OII to a UII. This is useful where a UII display is considered more appropriate (e.g. for economic reasons) than an OII display, but integral images or video have been captured with omnidirectional parallax.
Algorithm 1 Cyclopean view extraction from a UII.
Require: LeSD, α
Ensure: Cyclopean view at angle α
  Y ← number of pixel rows in LeSD
  N ← number of LeSCs across LeSD
  Compute s from α (equation 2)
  for y = 1 to Y do
    for i = 1 to N do
      Compute sampling pixel horizontal position x_a in LeSD
      Compute intensity f_a at position (x_a, y) (equation 3)
      Store intensity f_a in output image array at position (i, y)
    end for
  end for
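A minimal Python sketch of Algorithm 1 follows. It assumes the LeSD is supplied as rows of pixels holding N LeSCs of I pixels each, and adopts one convention for mapping the displacement s to a pixel coordinate (LeSC centre at pixel (I − 1)/2, displacement scaled by I); all names are illustrative.

```python
import math

def extract_cyclopean_view(lesd, n_lescs, alpha, pitch, thickness, eta):
    """Extract a cyclopean view at angle alpha (radians) from a UII LeSD.

    lesd    -- 2-D list of pixel rows; each row holds n_lescs LeSCs of
               I = len(row) // n_lescs pixels each
    returns -- 2-D list with n_lescs intensities per row
    """
    I = len(lesd[0]) // n_lescs
    # Equation 2: displacement for the requested viewing angle
    s = thickness * math.tan(math.asin(math.sin(alpha) / eta)) / pitch
    view = []
    for row in lesd:                     # every pixel row of the LeSD
        out_row = []
        for i in range(n_lescs):         # one sample per LeSC
            # Continuous sampling position within LeSC i
            x_a = i * I + (I - 1) / 2 + s * I
            x_a = max(i * I, min((i + 1) * I - 1, x_a))  # clamp to the LeSC
            x1 = min(int(x_a), len(row) - 2)             # left neighbour
            f1, f2 = row[x1], row[x1 + 1]
            out_row.append((f2 - f1) * (x_a - x1) + f1)  # equation 3
        view.append(out_row)
    return view
```

At alpha = 0 the displacement is zero and each LeSC is sampled at its centre, which makes the behaviour easy to verify on a small synthetic LeSD.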
Algorithm "2 Cyclopean view extraction from an OIL Require: LeSD, x, ay Ensure: Cyclopean view at angle pair x, y Nx = Number of LeSCs horizontally in LeSD Ny = Number of LeSCs vertically in LeSD Compute sx from ax (equation 2) Compute sy from ay (equation 2) for j ' = 1 to Ny do for i = l to Nx do Compute sampling pixel position xa, ya in LeSD Compute intensity fa at position xa, ya (equation 5) Store intensity fa in output image array at position i, j end for end for