WO2002017647A1 - Autostereoscopic display - Google Patents

Autostereoscopic display

Info

Publication number
WO2002017647A1
WO2002017647A1 · PCT/GB2001/003777
Authority
WO
WIPO (PCT)
Prior art keywords
image
LeSD
decoding
images
recorded
Prior art date
Application number
PCT/GB2001/003777
Other languages
French (fr)
Inventor
Neil Davies
Malcolm Mccormick
Original Assignee
Demontfort University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Demontfort University filed Critical Demontfort University
Priority to AU2001282312A priority Critical patent/AU2001282312A1/en
Publication of WO2002017647A1 publication Critical patent/WO2002017647A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/349Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194Transmission of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/286Image signal generators having separate monoscopic and stereoscopic modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

There is disclosed a method for transferring data representing a three-dimensional image into an image viewable within a viewing zone, without requiring a lenticular decoding or other optical decoding arrangement, comprising simulating, by means of a data processing arrangement, the direction-selective properties of a decoding lenticular array by sampling a lenslet-encoded spatial distribution (LeSD) at defined locations.

Description

AUTOSTEREOSCOPIC DISPLAY
This invention relates to imaging arrangements.
The invention is particularly concerned with three-dimensional images, or, rather, data representing three-dimensional images, and to manipulating such data to yield viewable images.
Imaging usually involves optical reflection or refraction arrangements, three-dimensional imaging involving multiple imaging, as in multi-view imaging, where two or more images are made from different viewpoints or integral imaging, where information is encoded using an encoding screen for viewing using a decoding screen. As compared to multi-view, integral imaging can be used to create a true orthoscopic three-dimensional image which gives real perspective viewed from a continuum of viewing positions within a viewing zone, without any zones of confusion or dead zones.
Optical arrangements suffer from manufacturing imperfections and aberrations, which tend to limit their usefulness, but are principally costly to manufacture and relatively inflexible in use.
The present invention provides techniques by which limitations of optical equipment can be avoided.
The invention comprises a method for transferring data representing a three- dimensional image into an image viewable within a viewing zone, without requiring a lenticular decoding or other optical decoding arrangement, comprising simulating, by means of a data processing arrangement, the direction-selective properties of a decoding lenticular array by sampling a lenslet-encoded spatial distribution (LeSD) at defined locations.
The LeSD may be extracted from a unidirectional integral image or from an omnidirectional integral image, and the extraction process may be, respectively, carried out according to Algorithm I or Algorithm II - see below.
The method is, of course best carried out in data processing equipment, and may include displaying an image produced thereby.
The image displayed may be a cyclopean view, that is to say, a two- dimensional image as seen without the depth perception of binocular vision.
However, on account of the continuum of information afforded by integral imaging, as compared to multi-view imaging, images are displayable from a continuum of positions within the viewing zone which correspond to cyclopean views of the scene represented by the data from corresponding positions relative to the scene.
Two such images may then be processed to yield a volumetric optical model, from which a three-dimensional image may be generated.
The data may, of course, be derived in the first instance from actual optical imaging. In integral imaging, the data can be recorded at a capture or recording plane, as a photographic film, or a CCD array to form a pixel image, the information in that plane containing, by virtue of the integral imaging, depth information, which is representable in a three-dimensional array of intensity values.
However, such a three-dimensional array of intensity values can be created without any optical input - a synthetic image. The use of data processing enables any such three-dimensional array of data to be manipulated, and even for two or more such arrays to be combined.
The processing can be carried out, moreover, not only to produce still images, but also to produce moving images, and the volumetric object scene can be reconstructed for three-dimensional image processing, video mixing, movement of viewer position, object depth sensing in machine vision application, and other processing tasks.
Methods for transforming data according to the invention will now be described with reference to the accompanying drawings, in which:
Figure 1 is a small section of an LeSD representing a recorded unidirectional integral image;
Figure 2 is a view of an area of a pixel array within a LeSD;
Figure 3 is a diagram showing the geometry of the displacement of a lenslet encoded spatial component;
Figure 4 is a diagram showing the geometry of a linear interpolation technique used in sampling;
Figure 5 is a diagram showing sampling of an LeSD representing an omnidirectional integral image;
Figure 6 is a diagram showing the geometry of a bilinear interpolation technique used in sampling; and Figure 7 is a set of twenty views extracted from a unidirectional LeSD data set.
The figures illustrate methods for transforming data representing a three- dimensional image into an image viewable within a viewing zone, without requiring a lenticular decoding array or other optical decoding arrangement, comprising simulating, by means of a data processing arrangement, the direction-selective properties of a decoding lenticular array by sampling a lenslet-encoded spatial distribution (LeSD) at defined locations.
Techniques disclosed herein allow the data representing a three-dimensional integral image to be transformed into a continuum of two-dimensional planar images within the viewing zone. The planar images represent perspective views of the volumetric object space from any angle or angles within the viewing zone. Figure 7 shows a series of twenty such images - the numbers are values of a displacement S from a central ( S = O) position and it will be seen that with increasing values of S, the viewpoint is changing from right to left of the scene.
The following acronyms will be used hereafter:
II Integral Image
OII Omnidirectional Integral Image - An integral image with omnidirectional parallax, recorded by means of a square- or hex-based microlens array
UII Unidirectional Integral Image - An integral image with unidirectional parallax, recorded by means of a lenticular array
VOM Volumetric Optical Model
LeSD Lenslet-encoded Spatial Distribution - The encoded representation of a volumetric optical model produced by a microlens or lenticular array
LeSC Lenslet-encoded Spatial Component - A portion of a LeSD generated by an individual lenslet
Replay of an integral image makes use of a lenticular or microlens array to decode the intensity data in the LeSD of a UII or an OII respectively. The continuity of the intensity information stored within each component of the LeSD, combined with the directional selectivity of the decoding array, enables intersecting pencil beams to reconstruct the original VOM with continuous parallax within the viewing zone in the direction(s) perpendicular to the lenslets of the decoding array.
The 3D image experienced by a human viewer is the result of stereo vision making use of two horizontally displaced viewpoints to perceive the reconstructed VOM with a natural depth sensation. If a single viewpoint is considered at a certain angle from the normal to the decoding array, the scene may be termed a cyclopean view, a planar image which is a perspective projection of the VOM at that angle.
A method has been developed by the authors for extracting a cyclopean view from any angle in the defined field of view directly from the LeSD without requiring a microlens or lenticular array. The process may be carried out efficiently in software, requiring few operations to produce a view. Broadly, it operates by simulating the direction-selective properties of a decoding microlens or lenticular array by sampling the LeSD at precisely defined locations.
Consider a section of a LeSD representing a recorded UII (Figure 1). It comprises 8 pixels per LeSC, giving a total for three LeSCs of (8 × 3)² = 576 pixels.
Definition 1. Let N represent the number of LeSCs recorded (and hence also the number of lenslets in the encoding lenticular array). Let I represent the number of pixels recorded per LeSC, and let us define a displacement from the geometric centre of the LeSC, s, expressed as a proportion of the LeSC width travelled from the centre (Figure 2), such that s lies in the interval [-0.5, 0.5).
Definition 2. A sampling of a LeSC at a displacement s is the value of the intensity of the LeSC at that point. A sampling of a LeSD at a displacement s is the ensemble of intensity values found at the displacement s within each individual LeSC comprising the LeSD. Such a LeSD sampling will contain N intensity values.
The ensemble of intensity values resulting from a LeSD sampling, when arranged in order identical to that of the LeSCs from which they were drawn, represents a perspective projection image of the VOM represented by the LeSD, which we term a cyclopean view. The angle from which the view is taken is directly related to the sampling displacement, s, as follows:
θ = arcsin(η sin(arctan(s·p / t)))    (1)
where η is the refractive index of the decoding lenslet array, p is the lenslet pitch in millimetres, t is the thickness of the decoding lenslet array at the central axis of a lenslet in millimetres, and s is the sampling displacement. Conversely,
s = (t / p) tan(arcsin(sin θ / η))    (2)
The geometry of the relationship is shown in Figure 3.
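On the usual refraction model for this geometry (a ray angle inside the lens material given by tan φ = s·p/t, then Snell's law at the exit surface), the mapping between sampling displacement and viewpoint angle of equations 1 and 2 can be sketched in Python. The function names are illustrative, and the formulas are an assumption based on the stated geometry rather than taken verbatim from the patent:

```python
import math

def angle_from_displacement(s, eta, pitch, thickness):
    """Viewpoint angle (radians) for a sampling displacement s,
    where s is a proportion of the lenslet pitch in [-0.5, 0.5)."""
    phi = math.atan((s * pitch) / thickness)  # ray angle inside the lens material
    return math.asin(eta * math.sin(phi))     # refraction at the exit surface

def displacement_from_angle(theta, eta, pitch, thickness):
    """Inverse mapping: sampling displacement for a desired viewpoint angle."""
    phi = math.asin(math.sin(theta) / eta)    # angle of the ray inside the material
    return (thickness / pitch) * math.tan(phi)
```

The two functions are exact inverses of one another, so a displacement converted to an angle and back is recovered unchanged.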
The sampling point in a LeSC does not necessarily, and in fact is not likely to, fall in the centre of a pixel. In order to produce a continuum of cyclopean views, it must be possible to retrieve intensity values from points at arbitrary locations between pixel centres. The use of linear interpolation between pixel centres is in agreement with the direction-selective behaviour of a lenticular decoding array, and so is used to generate views where the sampling point occurs between pixel centres.
For two pixels x1 and x2, with intensity values f1 and f2 respectively, the linearly interpolated value at point xa is
fa = (f2 - f1)(xa - x1) / (x2 - x1) + f1.    (3)
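Written as code, the linear interpolation of equation (3) is a one-line function; a minimal sketch (the function name is illustrative):

```python
def lerp_intensity(x1, f1, x2, f2, xa):
    """Linearly interpolated intensity at xa between pixel centres x1 and x2,
    which carry intensity values f1 and f2 respectively."""
    return (f2 - f1) * (xa - x1) / (x2 - x1) + f1
```

At xa = x1 the function returns f1, at xa = x2 it returns f2, and it varies linearly in between.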
Algorithm 1 describes the overall cyclopean view extraction process for a UII. Note that the aspect ratio of the resulting view image will not be the same as that of the LeSD, since the horizontal sampling resolution is 1/I times the vertical sampling resolution. For viewing, corrective scaling of 1/I is required in the vertical.
Figure 7 shows 20 cyclopean images extracted from a computer-generated LeSD, "Standard", and a photographically recorded LeSD, "Plane", respectively.
Views may also be extracted from the LeSD of an OII. The procedure described above can be applied to an OII by sampling in both the horizontal and vertical dimensions, since LeSCs run in both directions. In this case the displacement on each LeSC has two components: sx (horizontal displacement from the central axis) and sy (vertical displacement from the central axis). The definition of the viewing position also has two components: x (horizontal) and y (vertical). Equations 1 and 2 provide the mapping between displacement and viewpoint angle for the horizontal and vertical directions independently.
The sampling geometry for a LeSD produced by hexagonal based microlens arrays is shown in Figure 5.
Note that the LeSCs are rectangular. This is due to the camera system's aperture being rectangular. The aspect ratio of a hex-based LeSC (horizontal to vertical) is
ARhex = tan 30°. (4)
To enable retrieval of intensity values sampled at any point (sx, sy) within each LeSC, bilinear interpolation is used. For four pixels of values f1, f2, f3 and f4, the bilinearly interpolated value at point (xa, ya) within the LeSD image is
fa = (f2 - f1)xa + (f3 - f1)ya + (f1 - f2 - f3 + f4)xa·ya + f1.    (5)
The geometry of the bilinear interpolation technique is shown in Figure 6.
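Equation (5) can be sketched in a few lines of Python. The function name is illustrative, and f1..f4 are taken at the corners (0,0), (1,0), (0,1) and (1,1) of a unit square in local coordinates, an assumption consistent with the geometry of Figure 6:

```python
def bilerp_intensity(f1, f2, f3, f4, xa, ya):
    """Bilinearly interpolated intensity at local coordinates (xa, ya) in [0, 1],
    where f1..f4 are the pixel values at (0,0), (1,0), (0,1) and (1,1)."""
    return ((f2 - f1) * xa + (f3 - f1) * ya
            + (f1 - f2 - f3 + f4) * xa * ya + f1)
```

Evaluating at the four corners recovers f1..f4 exactly, confirming the expansion term by term.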
Algorithm 2 describes the cyclopean view extraction process for an OII. Note that the aspect ratio of the resulting view image will not be the same as that of the LeSD, but will be that of the camera system aperture (and individual LeSC). For viewing on a display with square pixels, corrective scaling of 2 tan 30° is required in the horizontal.
From the foregoing, it is apparent that:
A method has been developed for extraction of perspective projections (views) of three-dimensional object space from the encoded representation of an integral image. These views can be extracted at arbitrary viewpoint angles from the normal, within the allowable viewing zone for the integral image.
• We demonstrate that an integral image has continuous parallax, since we can extract a unique projection at any angle within a continuum of views.
Methods have been devised to produce planar two-dimensional and stereoscopic image data from integral images. This suggests that the omnidirectional integral imaging system is the general case of incoherent imaging systems, and hence that planar two-dimensional and stereoscopic incoherent images are subsets of the omnidirectional integral image.
• A method has been developed for providing interactivity of viewpoint (interactive look-around) within the angle of view of an integral image, whether presented natively as an integral image, as a planar two-dimensional or a stereoscopic image. A method has been developed for trans-conversion of integral image types, i.e. conversion of an OII to a UII. This is useful where a UII display is considered more appropriate (e.g. for economic reasons) than an OII display, but integral images or video have been captured with omnidirectional parallax.
Algorithm 1 Cyclopean view extraction from a UII.
Require: LeSD, a
Ensure: Cyclopean view at angle a
  Y ← number of pixel rows in LeSD
  N ← number of LeSCs across LeSD
  Compute s from a (equation 2)
  for y = 1 to Y do
    for i = 1 to N do
      Compute sampling pixel horizontal position xa in LeSD
      Compute intensity fa at position (xa, y) (equation 3)
      Store intensity fa in output image array at position (i, y)
    end for
  end for
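The steps of Algorithm 1 can be sketched as a short Python function. This is an illustrative implementation, not the patent's own code: the LeSD is taken to be a list of pixel rows with I pixels per LeSC, the sampling displacement s is assumed already computed from the desired viewing angle via equation 2, and linear interpolation (equation 3, with unit pixel spacing) supplies intensities between pixel centres.

```python
def extract_cyclopean_view(lesd, pixels_per_lesc, s):
    """Extract a cyclopean view from a unidirectional LeSD.

    lesd: 2D list of intensity rows, width = N * pixels_per_lesc.
    s: sampling displacement in [-0.5, 0.5), a proportion of the LeSC width.
    Returns an N-column image (one sample per lenslet per row)."""
    width = len(lesd[0])
    n_lescs = width // pixels_per_lesc
    view = []
    for row in lesd:
        out_row = []
        for i in range(n_lescs):
            # sampling position within LeSC i, measured in pixel coordinates
            xa = (i + 0.5 + s) * pixels_per_lesc - 0.5
            x1 = max(0, min(width - 2, int(xa)))  # clamp to a valid pixel pair
            f1, f2 = row[x1], row[x1 + 1]
            out_row.append((f2 - f1) * (xa - x1) + f1)  # equation (3), unit spacing
        view.append(out_row)
    return view
```

With s = 0 each lenslet is sampled at its geometric centre; varying s sweeps the viewpoint across the viewing zone.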
Algorithm 2 Cyclopean view extraction from an OII.
Require: LeSD, ax, ay
Ensure: Cyclopean view at angle pair (ax, ay)
  Nx ← number of LeSCs horizontally in LeSD
  Ny ← number of LeSCs vertically in LeSD
  Compute sx from ax (equation 2)
  Compute sy from ay (equation 2)
  for j = 1 to Ny do
    for i = 1 to Nx do
      Compute sampling pixel position (xa, ya) in LeSD
      Compute intensity fa at position (xa, ya) (equation 5)
      Store intensity fa in output image array at position (i, j)
    end for
  end for
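The steps of Algorithm 2 can likewise be sketched in Python. Again an illustrative sketch rather than the patent's code: the LeSD is a 2D list of intensity values, each rectangular LeSC px pixels wide and py pixels high, and the displacements sx, sy are assumed already computed via equation 2; bilinear interpolation (equation 5) supplies intensities between pixel centres.

```python
def extract_cyclopean_view_oii(lesd, px, py, sx, sy):
    """Extract a cyclopean view from an omnidirectional LeSD.

    lesd: 2D list of intensity values; each LeSC is px pixels wide, py high.
    sx, sy: sampling displacements in [-0.5, 0.5), proportions of LeSC size.
    Returns an Nx-by-Ny image (one sample per lenslet)."""
    ny = len(lesd) // py
    nx = len(lesd[0]) // px
    view = []
    for j in range(ny):
        ya = (j + 0.5 + sy) * py - 0.5
        y1 = max(0, min(len(lesd) - 2, int(ya)))   # clamp to a valid pixel pair
        row_out = []
        for i in range(nx):
            xa = (i + 0.5 + sx) * px - 0.5
            x1 = max(0, min(len(lesd[0]) - 2, int(xa)))
            f1, f2 = lesd[y1][x1], lesd[y1][x1 + 1]
            f3, f4 = lesd[y1 + 1][x1], lesd[y1 + 1][x1 + 1]
            u, v = xa - x1, ya - y1
            # equation (5): bilinear interpolation over the four neighbours
            row_out.append((f2 - f1) * u + (f3 - f1) * v
                           + (f1 - f2 - f3 + f4) * u * v + f1)
        view.append(row_out)
    return view
```

Each (sx, sy) pair selects one cyclopean view; sweeping both components gives look-around in two directions.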

Claims

1. A method for transferring data representing a three-dimensional image into an image viewable within a viewing zone, without requiring a lenticular decoding or other optical decoding arrangement, comprising simulating, by means of a data processing arrangement, the direction-selective properties of a decoding lenticular array by sampling a lenslet-encoded spatial distribution (LeSD) at defined locations.
2. A method according to claim 1, in which the LeSD is extracted from a unidirectional integral image.
3. A method according to claim 2, in which the LeSD is extracted by a process carried out according to Algorithm I.
4. A method according to claim 1, in which the LeSD is extracted from an omnidirectional integral image.
5. A method according to claim 4, in which the LeSD is extracted by a process carried out according to Algorithm II.
6. A method according to any one of claims 1 to 5, carried out by data processing equipment.
7. A method according to claim 6, including displaying an image produced thereby.
8. A method according to claim 7, in which the displayed image is a cyclopean view.
9. A method according to claim 7, in which images are displayed from a continuum of positions within the viewing zone which correspond to cyclopean views of the scene represented by the data from corresponding positions relative to the scene.
10. A method according to claim 9, in which two such images are processed to yield a volumetric optical model.
11. A method according to claim 10, in which a three-dimensional image is generated from said volumetric optical model.
12. A method according to any one of claims 1 to 11, in which data are derived from actual optical imaging.
13. A method according to claim 12, in which data are recorded at a capture or recording plane in integral imaging.
14. A method according to claim 13, in which the image is recorded as a photographic film.
15. A method according to claim 13, in which the image is recorded as a pixel image.
16. A method according to claim 15, in which the pixel image is recorded as a CCD array.
17. A method according to any one of claims 1 to 11, in which a synthetic image is created as a three-dimensional array of intensity values.
18. A method according to claim 17, in which two or more three-dimensional arrays are combined.
19. A method according to any one of claims 1 to 18, used to produce still images.
20. A method according to any one of claims 1 to 18, used to produce moving images.
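Claim 1 describes simulating the direction-selective properties of a decoding lenticular array by sampling the lenslet-encoded spatial distribution (LeSD) at defined locations. Algorithms I and II referred to in claims 3 and 5 are not reproduced in this excerpt; the following is only a minimal illustrative sketch, assuming a unidirectional integral image recorded behind vertical lenslets of a known integer pixel pitch, where taking the same column offset under every lenslet selects the rays a physical lenticular decoder would steer toward one viewing direction.

```python
import numpy as np

def extract_view(lesd: np.ndarray, pitch: int, k: int) -> np.ndarray:
    """Sample an LeSD to form a single planar view, simulating the
    direction-selective action of a decoding lenticular array.

    lesd  : 2-D intensity array (H x W) whose columns are grouped so that
            each run of `pitch` adjacent columns lies under one lenslet.
    pitch : number of pixel columns per lenslet (assumed integer here).
    k     : sample offset within each lenslet (0 <= k < pitch); each
            offset corresponds to a different position in the viewing zone.
    """
    if not 0 <= k < pitch:
        raise ValueError("sample offset k must lie within one lenslet pitch")
    # Take the k-th column under every lenslet: one column per lenslet,
    # assembled left to right, yields one directional view of the scene.
    return lesd[:, k::pitch]

# Sampling at the lenslet centre gives the central (cyclopean-style) view:
# central = extract_view(lesd, pitch, pitch // 2)
```

Varying `k` across its range produces the continuum of views described in claim 9; omnidirectional (crossed-lenticular or microlens) data would require sampling in both axes, which this sketch does not attempt.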
PCT/GB2001/003777 2000-08-23 2001-08-22 Autostereoscopic display WO2002017647A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001282312A AU2001282312A1 (en) 2000-08-23 2001-08-22 Autostereoscopic display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0020733.2A GB0020733D0 (en) 2000-08-23 2000-08-23 Imaging arrangements
GB0020733.2 2000-08-23

Publications (1)

Publication Number Publication Date
WO2002017647A1 true WO2002017647A1 (en) 2002-02-28

Family

ID=9898118

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/003777 WO2002017647A1 (en) 2000-08-23 2001-08-22 Autostereoscopic display

Country Status (3)

Country Link
AU (1) AU2001282312A1 (en)
GB (1) GB0020733D0 (en)
WO (1) WO2002017647A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997010675A1 (en) * 1995-09-16 1997-03-20 De Montfort University Stereoscopic image encoding
WO1998034133A1 (en) * 1997-01-31 1998-08-06 De Montfort University Lens arrangements

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HENKEL ET AL: "Locking onto 3D-structure by a combined vergence- and fusion system", 3-D DIGITAL IMAGING AND MODELING, 1999. PROCEEDINGS. SECOND INTERNATIONAL CONFERENCE ON OTTAWA, ONT., CANADA 4-8 OCT. 1999, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 4 October 1999 (1999-10-04), pages 70 - 76, XP001050357, ISBN: 0-7695-0062-5 *
HENKEL R D: "Constructing the cyclopean view", ARTIFICIAL NEURAL NETWORKS - ICANN '97. 7TH INTERNATIONAL CONFERENCE PROCEEDINGS, ARTIFICIAL NEURAL NETWORKS - ICANN '97. 7TH INTERNATIONAL CONFERENCE. PROCEEDINGS, LAUSANNE, SWITZERLAND, 8-10 OCT. 1997, 1997, Berlin, Germany, Springer-Verlag, Germany, pages 907 - 912, XP001051358, ISBN: 3-540-63631-5 *

Also Published As

Publication number Publication date
GB0020733D0 (en) 2000-10-11
AU2001282312A1 (en) 2002-03-04

Similar Documents

Publication Publication Date Title
Aggoun et al. Immersive 3D holoscopic video system
EP0637815B1 (en) Image processing method and image processing apparatus
Zaharia et al. Adaptive 3D-DCT compression algorithm for continuous parallax 3D integral imaging
Peleg et al. Omnistereo: Panoramic stereo imaging
Matusik et al. 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes
CN102164298B (en) Method for acquiring element image based on stereo matching in panoramic imaging system
EP1836859B1 (en) Automatic conversion from monoscopic video to stereoscopic video
Tanimoto Free-viewpoint television
Gurrieri et al. Acquisition of omnidirectional stereoscopic images and videos of dynamic scenes: a review
Shin et al. Computational implementation of asymmetric integral imaging by use of two crossed lenticular sheets
Aggoun et al. Live immerse video-audio interactive multimedia
Aggoun 3D Holoscopic video content capture, manipulation and display technologies
WO2002017647A1 (en) Autostereoscopic display
Eljdid et al. Computer generated content for 3D TV
Naemura et al. Orthographic approach to representing 3-D images and interpolating light rays for 3-D image communication and virtual environment
Edirisinghe et al. Stereo imaging, an emerging technology
Choi et al. Real-time sensing and three-dimensional display of far outdoor scenes based on asymmetric integral imaging
Onural et al. Three-dimensional television: From science-fiction to reality
Naemura et al. Ray-based approach to integrated 3D visual communication
Eljadid et al. Computer Generation of 3D Integral Imaging Animations
Vanijja et al. Omni-directional stereoscopic images from one omni-directional camera
Albar et al. Portable holoscopic 3D camera adaptor for Raspberry Pi
Jung et al. 2D/3D mixed service in t-DMB system using depth image based rendering
Jung et al. Disparity estimation using light ray pair in stacked 3D light field
Wang et al. P‐4.6: Efficient Synthetic Encoding Algorithm based on Depth Offset Mapping for Tabletop Integral Imaging Display

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP