WO2019135088A1 - Multi-angle light-field capture and display system - Google Patents


Info

Publication number
WO2019135088A1
Authority
WO
WIPO (PCT)
Prior art keywords
lens
image
field
light
array
Prior art date
Application number
PCT/GB2019/050026
Other languages
French (fr)
Other versions
WO2019135088A8 (en)
Inventor
Ali Özgür YÖNTEM
Original Assignee
Cambridge Enterprise Limited
Application filed by Cambridge Enterprise Limited
Publication of WO2019135088A1
Publication of WO2019135088A8

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/307 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using fly-eye lenses, e.g. arrangements of circular lenses
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B26/00 Optical devices or arrangements for the control of light using movable or deformable optical elements
    • G02B26/08 Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
    • G02B26/0808 Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more diffracting elements
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/34 Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
    • G02B30/35 Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers using reflective optical elements in the optical path between the images and the observer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/346 Image reproducers using prisms or semi-transparent mirrors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/363 Image reproducers using image projection screens
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B17/00 Systems with reflecting surfaces, with or without refracting elements
    • G02B17/08 Catadioptric systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00 Optical elements other than lenses
    • G02B5/08 Mirrors
    • G02B5/10 Mirrors with curved faces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2213/00 Details of stereoscopic systems
    • H04N2213/001 Constructional or mechanical details

Definitions

  • the present disclosure relates to a light field capture and display system. Particularly, but not exclusively, the disclosure relates to apparatus for capturing and displaying multi-depth images.
  • 3D displays have been studied extensively for a long time.
  • Several key methods have been developed, such as holography, integral imaging/light field imaging and stereoscopy.
  • many existing capture and display systems are either bulky (e.g. 3D cinema) and require special eye-wear, or unsuitable for out-of-lab applications (e.g. holography).
  • such systems should be glasses-free, multi-view, and have a large viewing angle. Ideally, an observer should be able to view a 3D object from all angles, 360° all around. Most volumetric 3D displays, which can provide glasses-free images, are based on projection in a diffusive medium.
  • Integral imaging is an alternative to holography and can provide 3D images under incoherent illumination leading to out-of-lab implementations. However, such a system still produces a coarse representation of the original 3D scene.
  • a light-field system can provide high-resolution images, but current systems mostly address the capture process rather than display.
  • experimental light field 3D display systems only provide a limited viewing angle from a fixed point of view. Captured light field images using commercially available cameras can only be observed on a 2D display with computational refocusing.
  • Experimental 3D light-field displays are extremely bulky setups which require multiple projection sources.
  • the imaging planes are limited to a planar configuration. As such, objects are imaged from one plane to a second, parallel plane. Therefore, these systems can only provide a planar field of view with a fixed viewing angle. Furthermore, the 3D reconstruction will be within the range of the fixed focused plane defined in a depth of field of a rectangular volume.
  • an image capture system comprising a free-form mirror, a field lens and a light field capturing apparatus, wherein the free-form mirror is configured to reflect incident light from an object through the field lens such that a three dimensional image of the object is formed in an intermediate imaging volume between the field lens and the light field capturing apparatus, and wherein the three dimensional image is captured by the light field capturing apparatus.
  • This setup allows for multi-perspective, multi-depth views of one or more three-dimensional objects to be captured by a stationary capturing apparatus.
  • the use of a field lens produces a smaller intermediate 3D image for capture, thereby reducing the required size of the capturing apparatus.
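The demagnification provided by the field lens can be sketched with the standard thin-lens equation; the focal length and object distance below are illustrative assumptions, not values taken from the disclosure.

```python
# Hypothetical thin-lens sketch of how a field lens demagnifies the
# reflected scene into a compact intermediate imaging volume.
# All numerical values are illustrative only.

def image_distance(f_mm: float, u_mm: float) -> float:
    """Thin-lens equation 1/f = 1/v + 1/u, solved for the image distance v."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

def magnification(f_mm: float, u_mm: float) -> float:
    """Lateral magnification |m| = v / u."""
    return abs(image_distance(f_mm, u_mm) / u_mm)

# An object point reflected by the mirror, assumed 500 mm from a 50 mm field lens:
m = magnification(50.0, 500.0)
print(f"magnification = {m:.3f}")  # ~0.111: intermediate image about 9x smaller
```

A smaller intermediate image means the lens array and sensor behind it can be correspondingly smaller, which is the size reduction the passage above describes.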
  • the field lens is one of a hemispherical, spherical or ball lens or any combination of reciprocal/reversible optics.
  • the field lens comprises at least one Fresnel lens.
  • the field lens is provided by a plurality of Fresnel lenses in a stacked configuration. This results in a lower total focal length, providing a lower f-number and thus a larger imaging angle. This is especially useful when it is desirable that the light rays reach the edges of the free-form mirror.
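The focal-length claim above can be checked with the thin-lens-in-contact approximation, where the powers of stacked lenses add; the focal lengths and aperture below are illustrative assumptions.

```python
# Sketch (not from the patent): effective focal length of stacked thin
# Fresnel lenses in contact, 1/f_total = sum(1/f_i), and the resulting
# f-number f/D. Values are hypothetical.

def stacked_focal_length(focal_lengths_mm: list[float]) -> float:
    """Combined focal length of thin lenses in contact (powers add)."""
    return 1.0 / sum(1.0 / f for f in focal_lengths_mm)

def f_number(f_mm: float, aperture_mm: float) -> float:
    """f-number N = f / D; a lower N means a wider imaging cone."""
    return f_mm / aperture_mm

f_total = stacked_focal_length([200.0, 200.0])  # two identical Fresnel lenses
print(f_total)                   # 100.0 mm: half the single-lens focal length
print(f_number(f_total, 150.0))  # lower f-number than a single 200 mm lens
```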
  • the light field capturing apparatus comprises a first lens array and a two- dimensional photosensitive device.
  • the first lens array comprises diffractive optical elements such as photon sieves and/or Fresnel zone patterns.
  • the first lens array comprises diffractive optical elements realised by metamaterial structures.
  • the lenses in the first lens array have varying focal lengths and sizes.
  • the lenses in the first lens array may be placed in a configuration other than a regular rectangular array, such as a hexagonal or random arrangement.
  • the lens arrays comprise liquid crystal lens arrays.
  • the first lens array comprises reconfigurable diffractive optical elements, i.e. a spatial light modulator (SLM) such as a liquid crystal on silicon device (LCoS) or a digital micromirror device (DMD).
  • the two-dimensional photosensitive device comprises a CMOS array or a CCD.
  • the image capture system further comprises a second lens array at least partially surrounding the free-form mirror, such that each lens of the second lens array corresponds to a lens of the first lens array, and whereby light from an object passes through a lens of the second lens array to the free-form mirror and is reflected through the field lens and through a corresponding lens of the first lens array and onto the two-dimensional photosensitive device.
  • the second lens array reduces distortion of the light reflected by the free-form mirror and the corresponding nature of the lenses in the two arrays serves to limit the recorded light field and reduce cross-talk on the light sensors within the capturing apparatus.
  • the lenses in the second lens array have varying focal lengths and sizes.
  • the lenses in the second lens array may be placed in a configuration other than a regular rectangular array, such as a hexagonal or random arrangement.
  • an image capture and generation system comprising a free-form mirror, a first field lens, a light field capturing apparatus and a light field display, wherein the image capture and generation system is configured to operate in a capture mode and a generation mode, such that in the capture mode, the free-form mirror is configured to reflect incident light from an object through the field lens such that a three dimensional image of the object is formed in an intermediate imaging volume between the field lens and the light field capturing apparatus, and the three dimensional image is captured by the light field capturing apparatus, and in the generation mode, the light field display is configured to project a first 3D image in an intermediate imaging volume between the light field display and the field lens, such that a second 3D image is reflected by the free-form mirror and formed at a distance from the surface of the free-form mirror, the second image being a real image corresponding to the first image.
  • Such a system combines the advantages of the separate capture and generation systems to provide a complete three-dimensional media device.
  • the lens array of the light field capturing apparatus and the lens array of the light field display are provided by a single lens array. Having the capturing apparatus and display share optics serves to reduce the component count and complexity of the system as well as its overall volume.
  • the image capture and generation system further comprises a beam splitter located between the field lens and the light field capturing apparatus and display, the beam splitter being configured such that in the generation mode the three dimensional image generated by the light field display is relayed through the field lens and reflected by the free-form mirror, and in the capture mode incident light from an object is reflected by the free-form mirror through the field lens and onto the light field capturing apparatus.
  • the beam splitter provides a compact optical means by which light can be directed both from the light field display and to the light field capturing apparatus without the need for further optics.
  • the free-form mirror is a convex mirror.
  • the free-form mirror is a conical mirror.
  • the free-form mirror is a concave mirror.
  • the field lens comprises at least one Fresnel lens.
  • the field lens is provided by a plurality of Fresnel lenses in a stacked configuration. This results in a lower total focal length, providing a lower f-number and thus a larger imaging angle. This is especially useful when it is desirable that the light rays reach the edges of the free-form mirror.
  • the light field display comprises a picture generation unit and a lens array.
  • the picture generation unit comprises one of a laser scanner, a hologram generator, a pixelated display or a projector, wherein the projector comprises a light source and a spatial light modulator, wherein the spatial light modulator is one of a digital micromirror device or a liquid crystal on silicon device.
  • the pixelated display comprises one of an OLED, QLED or micro-LED display.
  • the pixelated display comprises either a rectangular or a circular pixel configuration.
  • the image capture and generation system further comprises an image processor in communication with the light-field display, wherein the image processor is configured to account for distortions caused by the optical set up such that the second image appears undistorted.
  • the image capture and generation system further comprises a phased array of ultrasonic transducers configured to provide a tactile output corresponding to the dimensions of the second image.
  • This provides haptic feedback to a user interacting with the displayed image, thereby improving the perceived reality of the displayed object.
  • Said transducer array may be flexible and reconfigurable so as to conform to the exact size and shape of the image displayed.
  • a tracking system is used to track hand gestures, and head, eye, hand and finger positions, in order to increase the accuracy of the tactile system.
  • the phased array of ultrasonic transducers is located around the periphery of the free-form mirror.
  • Figure 1 is a schematic illustration of the image capture apparatus according to an aspect of the invention.
  • Figure 2 is a schematic illustration of the image capture apparatus according to an aspect of the invention.
  • Figure 3 is a schematic illustration of the image capture apparatus according to an aspect of the invention.
  • Figure 4 is a schematic illustration of the image generation apparatus according to an aspect of the invention.
  • Figure 5 is a schematic illustration of the image generation apparatus according to an aspect of the invention.
  • Figure 6 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 7 is a schematic illustration of the image generation apparatus according to an aspect of the invention.
  • Figure 8 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 9 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 10 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 11 is a schematic illustration of a portion of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 12 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 13 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 14 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 15 is a schematic illustration of the optical setup of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 16 is a schematic illustration of a portion of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 17 is a schematic illustration of a portion of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 18 is a schematic illustration of the free-form mirror according to an aspect of the invention.
  • Figure 19 is a schematic illustration of the free-form mirror according to an aspect of the invention.
  • Figure 20 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 21 is a schematic illustration of the field lens according to an aspect of the invention.
  • Figure 22 is a schematic illustration of the free-form mirror according to an aspect of the invention.
  • Figure 23 is a schematic illustration of the free-form mirror according to an aspect of the invention.
  • Figure 24 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention.
  • Figure 25 is a flow chart according to an aspect of the invention.
  • the disclosure relates to apparatus for capturing and projecting multi-dimensional true 3D images.
  • the system can further be configured to provide 3D augmented reality images.
  • Example applications can be, but are not limited to, automotive, telepresence, entertainment (gaming, 3DTV, museums and marketing), and education.
  • Figure 1 shows an image capture system 10 made up of a light field capturing apparatus 20, a field lens 30, and a free-form mirror 40.
  • the light field capturing apparatus 20 is formed by a 2D sensing device 21 and a lens array 22.
  • the 2D sensing device 21 is a CCD.
  • the 2D sensing device 21 is a CMOS device.
  • the lens array 22 is provided by an array of diffractive optical elements (DOE) such as photon sieves.
  • the lens array 22 is provided by an array of liquid crystal lens arrays.
  • the lens array 22 is provided by reconfigurable DOE.
  • the lens array 22 comprises phase Fresnel lens patterns on a phase-only liquid crystal on silicon (LCoS) device. In an alternative embodiment, the lens array 22 comprises amplitude Fresnel lens patterns on a digital micromirror device (DMD) or amplitude-only LCoS. In an alternative embodiment, the lens array 22 comprises conventional refractive micro lenses. Accordingly, any suitable image capture means and lens array may be employed to provide the light field capturing apparatus 20.
  • the field lens 30 may be provided by any form of suitable lens, including, but not limited to, a hemispherical, spherical or ball lens.
  • the field lens 30 comprises at least one Fresnel lens.
  • a plurality of Fresnel lenses are provided in a stacked configuration, as shown in Figure 21.
  • the mirror 40 is a hemi-spherical, parabolic convex mirror, with a 360° field of view.
  • the skilled person would appreciate that any curved or multi-angled reflective surface may be employed, and that the field of view of the mirror 40 need not be limited to 2π steradians.
  • the volume around the mirror in which a resulting 3D image is displayed can be cylindrical, spherical, conical or any arbitrary shape.
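The way the free-form mirror redirects light can be illustrated with the standard law of reflection in vector form, r = d − 2(d·n)n; the mirror shape and ray below are hypothetical, not geometry from the disclosure.

```python
# Hedged geometric sketch: reflecting a ray off the free-form (e.g.
# parabolic convex) mirror using the vector law of reflection,
# r = d - 2 (d . n) n, with n the unit surface normal at the hit point.
import numpy as np

def reflect(d: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Reflect incident direction d about the (normalised) surface normal n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling straight down onto the apex of the mirror (normal +z)
# reflects straight back up:
r = reflect(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
print(r)  # [0. 0. 1.]
```

Sampling the normal across a curved surface with this rule is how ray-tracing tools predict the display volume (cylindrical, spherical, conical, etc.) that a given mirror shape produces.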
  • the free-form mirror 140 is formed by a Fresnel lens 142 on top of a flat surface mirror 141, as shown in Figure 22. This allows a parabolic mirror to be simulated by a setup which beneficially has a thinner form factor.
  • the free-form mirror 140 is formed by a holographic reflective plate 143 with an equivalent phase profile encoded, as shown in Figure 23.
  • The path of the light through the system is referred to as the optical path.
  • any suitable number of intervening reflectors/lens or other optical components are included so as to manipulate the optical path as necessary (for example, to minimize the overall size of the image capture system 10).
  • In use, light from a 3D object 80 to be imaged is incident on the mirror 40 and reflected through the field lens 30.
  • the field lens 30 produces an intermediate 3D image 90 in an imaging volume between the field lens 30 and the light field capturing apparatus 20.
  • a different 2D perspective view (or elemental image) 92 of this intermediate image is imaged through a corresponding lens of the lens array 22 and onto a corresponding portion of the 2D sensing device 21.
  • the differing distances between points of the intermediate 3D image and the lenses of the lens array 22 affect the focus of each corresponding perspective view, such that the captured 2D perspective views (elemental images) provide an apparent depth of field.
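The capture described above stores many small 2D perspective views side by side on one sensor. A minimal sketch of recovering them, assuming a square lenslet pitch in pixels (a hypothetical value, not specified in the disclosure):

```python
# Illustrative sketch: slicing the 2D sensor frame behind the lens array
# into its grid of elemental images. The pitch value is hypothetical.
import numpy as np

def split_elemental_images(frame: np.ndarray, pitch_px: int) -> np.ndarray:
    """Return an (ny, nx, pitch, pitch) stack of elemental images."""
    h, w = frame.shape
    ny, nx = h // pitch_px, w // pitch_px
    cropped = frame[:ny * pitch_px, :nx * pitch_px]  # drop partial tiles
    return cropped.reshape(ny, pitch_px, nx, pitch_px).swapaxes(1, 2)

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)  # stand-in sensor frame
stack = split_elemental_images(frame, 4)
print(stack.shape)  # (2, 2, 4, 4): a 2x2 grid of 4x4 elemental images
```

Each slice of the stack is one elemental image 92, i.e. one angular sample of the light field available for later refocusing or 3D reconstruction.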
  • Figures 2 and 3 show the effect of distortions of the captured image(s) 85 as a result of the free-form mirror 40 and/or the field lens 30. Further distortions can be introduced by any intervening optical components used to manipulate the optical path. Such distortions can be corrected post capture by software in a known manner.
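One known way to correct such distortions in software is a radial (Brown-Conrady style) model inverted by fixed-point iteration. The coefficients below are hypothetical; in practice they would come from calibrating the actual mirror and lens setup.

```python
# Sketch of post-capture correction for radial (barrel/pincushion)
# distortion introduced by the mirror and field lens. Coefficients k1, k2
# are hypothetical placeholders for calibrated values.
import numpy as np

def undistort_points(pts, k1, k2):
    """Recover ideal normalised coords x_u from distorted coords x_d via
    fixed-point iteration of x_u = x_d / (1 + k1 r^2 + k2 r^4)."""
    pts = np.asarray(pts, dtype=float)
    out = pts.copy()
    for _ in range(10):                      # fixed-point refinement
        r2 = np.sum(out**2, axis=1)
        factor = 1.0 + k1 * r2 + k2 * r2**2
        out = pts / factor[:, None]
    return out

ideal = undistort_points([[0.30, 0.00]], k1=-0.2, k2=0.05)
print(ideal)  # barrel distortion (k1 < 0) compressed the point, so the
              # corrected coordinate lies slightly outside 0.30
```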
  • Figure 4 shows an image generation system 100, made up of a light field display 120, a field lens 130, and a free-form mirror 140.
  • the light field display 120 is formed by a 2D display device 121 and a lens array 122.
  • the 2D display device 121 is an LCD screen (either a single device or multiple, tiled devices).
  • the 2D display device 121 comprises a circular pixel configuration instead of the conventional rectangular configuration, wherein the relevant scaling and image generation is achieved by known image processing means and processes.
  • the 2D display device 121 comprises a scanning mirror and a digital micromirror device (DMD), or a liquid crystal on silicon (LCoS) device, though the skilled person would appreciate that any suitable light source and imaging means (including 3D holographic display devices) may be used provided they were capable of operating in the manner described below.
  • the lens array 122 comprises an array of diffractive optical elements (DOE) such as photon sieves.
  • the lens array 122 is provided by an array of liquid crystal lens arrays.
  • the lens array 122 is provided by reconfigurable DOE.
  • the lens array 122 comprises phase Fresnel lens patterns on phase-only LCoS. In an alternative embodiment, the lens array 122 comprises amplitude Fresnel lens patterns on a digital micromirror device (DMD) or amplitude-only LCoS. In an alternative embodiment, the lens array 122 comprises conventional lenses. The skilled person would appreciate that any suitable image generation means and lens array may be employed to provide the light field display 120.
  • the field lens 130 may be provided by any suitable lens, including a hemispherical, spherical or ball lens or one or more Fresnel lenses as depicted in Figure 21.
  • the mirror 140 is a hemi-spherical, parabolic convex mirror, with a 360° field of view.
  • any suitably shaped and reflective surface may be employed, including the mirrors depicted in Figures 22 and 23 discussed above in relation to the image capture system.
  • any suitable number of intervening reflectors/lens or other optical components are included so as to manipulate the optical path as necessary (for example, to minimize the overall size of the image display system 100).
  • a series of 2D perspective images (elemental images) 192 are displayed on the 2D display 121 and each of the 2D perspective images are imaged through a corresponding lens of the lens array 122, such that an intermediate 3D image 190 is formed in an imaging volume between the light field display 120 and the field lens 130.
  • This image is relayed through the field lens 130 before being reflected by the free-form mirror 140 towards one or more users 2 who in turn see a reconstructed real 3D image 180 projected at distance from the free-form mirror surface.
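Generating the elemental images 192 is essentially the capture geometry run in reverse: each lenslet, treated as a pinhole, projects the scene point onto its patch of the display. The pitch, gap and grid size below are illustrative assumptions only.

```python
# Hypothetical rendering sketch: computing where a 3D scene point lands
# in each elemental image by projecting it through each lenslet treated
# as a pinhole. Geometry (pitch, gap, grid size) is illustrative.

PITCH, GAP, GRID = 1.0, 2.0, 4   # lenslet pitch, lenslet-to-display gap

def project_point(p, lenslet_center):
    """Pinhole projection of 3D point p (z > 0, in front of the lenslet
    plane at z = 0) onto the display plane located GAP behind it."""
    cx, cy = lenslet_center
    x, y, z = p
    # Similar triangles: the lateral offset is scaled by -GAP / z.
    u = cx - GAP * (x - cx) / z
    v = cy - GAP * (y - cy) / z
    return u, v

centers = [(i * PITCH, 0.0) for i in range(GRID)]
spots = [project_point((0.0, 0.0, 10.0), c) for c in centers]
print(spots)  # each lenslet records the point with a slightly different
              # offset -- the parallax that reconstructs depth on display
```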
  • Figure 5 shows an embodiment of the image generation system 100 in which a spatial light modulator (SLM) 125 is used in place of the 2D display 121.
  • the remaining components and their arrangement are otherwise identical to those described above in relation to Figure 4, the common reference numerals of Figures 4 and 5 referring to the same components. Accordingly, a series of holographic 3D elemental images 193 are generated for transmission through the lens array 122, in place of the 2D perspective images 192.
  • Figure 6 depicts a combined image capture and generation system 200 formed by a free form mirror 240, a field lens 230 and a combined light field capture and display device 220, with the mirror 240 and field lens 230 arranged relative to the light field capture and display device 220 in the same manner as the mirror 40, 140 and field lens 30, 130 of the separate image capture system 10 and image generation system 100 are arranged relative to the light field capturing apparatus 20 and light field display 120 respectively.
  • the light field capture and display device 220 is made up of a lens array 222 and a beam splitter 223 separating a 2D sensing device 224 from a 2D display device 225.
  • any suitable light sensing and display means may be employed provided they operate as described below.
  • the 2D sensing device 224 is a CCD. In an alternative embodiment, the 2D sensing device 224 is a CMOS device.
  • the 2D display device 225 is an LCD screen (either a single device or multiple, tiled devices).
  • the 2D display device 225 comprises a scanning mirror and a digital micromirror device (DMD) or a liquid crystal on silicon (LCoS) device, though the skilled person would appreciate that any suitable light source and imaging means (including 3D holographic display devices) may be used provided they were capable of operating in the manner described below.
  • the lens array 222 is provided by an array of diffractive optical elements (DOE) such as photon sieves.
  • the lens array 222 is provided by an array of liquid crystal lens arrays.
  • the lens array 222 is provided by reconfigurable DOE.
  • the lens array 222 comprises phase Fresnel lens patterns on phase-only liquid crystal on silicon (LCoS) device.
  • the lens array 222 comprises amplitude Fresnel lens patterns on a digital micromirror device (DMD) or amplitude only LCoS.
  • the lens array 222 comprises conventional refractive micro lenses.
  • any suitable image capture means, and lens array may be employed to provide the combined light field capture and display device 220.
  • the mirror 240 is a hemi-spherical, parabolic convex mirror, with a 360° field of view. Though again the skilled person would appreciate that any curved or multi-angled reflective surface may be employed, and that the field of view of the mirror 240 need not be limited to 2π steradians.
  • any suitable number of intervening reflectors/lens or other optical components are included so as to manipulate the optical path as necessary (for example, to minimize the overall size of the image capture and generation system 200).
  • the image capture and generation system 200 operates in either a capture mode or a generation mode.
  • in the capture mode, light reflected from a 3D object 280 is incident on the mirror 240 and reflected through the field lens 230, forming an intermediate 3D image 290 in an imaging volume between the field lens and the lens array 222.
  • This intermediate 3D image is imaged through the beam splitter 223 and is captured by the 2D sensing device 224.
  • in the generation mode, a series of 2D perspective images (elemental images) 292 are displayed on the 2D display 225 and each of the 2D perspective images is reflected by the beam splitter 223 through a corresponding lens of the lens array 222, such that an intermediate 3D image 290 is formed in an imaging volume between the lens array 222 and the field lens 230.
  • This image is relayed through the field lens 230 before being reflected by the free-form mirror 240 towards one or more users who in turn observe a reconstructed 3D image 280 at a distance from the surface of the free-form mirror 240.
  • the 3D images generated by both the combined image capture and generation system 200 and the image generation system 100 are able to be displayed in different ways.
  • Figure 7 shows the different ways a 3D image can be projected.
  • whilst the mirror 240 shown in Figure 7 is that of the combined image capture and generation system 200, it may also be the mirror 140 of the image generation system 100.
  • whilst the field lens 230 and light field capture and display device 220 of the system 200 are not shown, they are arranged as described with reference to Figure 6.
  • a large, single image can be reconstructed around the mirror 240 such that an observer can view different parts of the same 3D image.
  • multiple different 3D images can be displayed around the mirror 240. Observers will view different objects at different locations around the mirror.
  • different observers can view the same portion of a common object.
  • the image capture and generation system 200 is able to operate in both capture and generation mode concurrently, with an image being projected on to one portion of the mirror 240 and captured from another portion.
  • An advantage of the image capture and generation system 200 employing a common optical set up in both the capture and generation modes is that any distortion affecting the captured image will be undone when these captured images are displayed by the same system.
  • Figure 8 depicts an alternative embodiment to that of Figure 6, wherein a separate lens array is provided for each of the 2D sensing device 224 and the 2D display device 225.
  • the mirror 240 and field lens 230 and their relative arrangement to the light field capturing apparatus and display device 220 are otherwise as depicted and described with reference to Figure 4, with the common reference numerals referring to the same components.
  • Figure 9 depicts a further embodiment in which the 2D display device 225 is replaced with a 3D light field display provided by a spatial light modulator (SLM) 226, in a similar manner to Figure 5.
  • the elemental images 292 are holographic 3D perspective images 293.
  • the mirror 240 and field lens 230 and their relative arrangement are otherwise as depicted and described with reference to Figure 8, with the common reference numerals referring to the same components.
  • the skilled person would appreciate that the SLM 226 is also compatible with the single lens array 222 embodiment of Figure 6.
  • Figures 10 and 11 show a further embodiment of the image capture and generation system 200 which includes a phased array of ultrasonic transducers 150 arranged around the periphery of the free-form mirror 240. For simplicity, the features of the image capture portion of the system 200 are not shown. Whilst Figure 10 does not depict the 2D display device 225, it is present and arranged as depicted in any of Figures 6, 8 and 9, the phased ultrasonic array 150 being compatible with both the 2D display device and the holographic display device embodiments. The phased array 150 is also compatible with the image generation system 100 in isolation. The phased array may be provided by any suitable means. In a preferred embodiment the phased array is an array of ultrasonic transducers 150. The phased array is configured to provide haptic feedback to the user in order to allow the user to interact with the displayed object.
  • In Figure 11, whilst the field lens 230 and light field capture and display device 220 of the system 100 are not shown, they are arranged as described with reference to Figures 6, 8 and 9. Accordingly, the embodiment of Figures 10 and 11 is identical to that of Figures 6, 8 and 9 with the addition of the phased ultrasonic array 150, with the common reference numerals referring to the same components. Again, the phased array may be provided by any suitable means.
  • The phased ultrasonic array 150 generates a pressure field configured to emulate the physical sensation of the 3D object being displayed, thus providing haptic feedback to the user. Accordingly, the system 200 provides both visual and tactile cues of the reconstructed image.
  • A synchronised sound system is used to generate a 3D surrounding and directional audible acoustic field. Such a system allows the three components of the human sensory system (namely vision, somatosensation and audition) to be stimulated, such that it is possible to completely simulate the sensation of, and interaction with, physical objects.
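By way of illustration only, the focusing principle behind such a phased array can be sketched with a simple time-reversal model: each transducer is delayed so that all wavefronts arrive at a chosen focal point simultaneously, producing a localised pressure peak there. The geometry, transducer count and parameter values below are illustrative assumptions, not taken from the patent.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air (assumed operating medium)

def focus_delays(transducers, focal_point):
    """Per-transducer emission delays so that every wavefront arrives at
    `focal_point` at the same instant, creating a localised pressure peak."""
    dists = [math.dist(t, focal_point) for t in transducers]
    d_max = max(dists)
    # Transducers nearer the focus fire later, so all waves coincide.
    return [(d_max - d) / SPEED_OF_SOUND for d in dists]

# Illustrative ring of 8 transducers around the mirror periphery (r = 0.1 m)
ring = [(0.1 * math.cos(2 * math.pi * k / 8),
         0.1 * math.sin(2 * math.pi * k / 8), 0.0) for k in range(8)]
focus = (0.02, 0.0, 0.15)                 # focal point above, slightly off-centre
delays = focus_delays(ring, focus)

# Sanity check: delayed emission plus travel time is identical for all elements
arrivals = [d + math.dist(t, focus) / SPEED_OF_SOUND
            for d, t in zip(delays, ring)]
```

Moving the focal point over the surface of the displayed 3D object, synchronised with the rendered image, is what creates the tactile impression of the object.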
  • Figures 12 and 13 show an embodiment of the invention in which the image capture and generation system 200 is installed in a vehicle.
  • The multi-direction projection capabilities of the image capture and generation system 200 enable a first real 3D image 280a to be generated at a first region and displayed towards the driver 2a of the vehicle, whilst a second real image 280b is generated at a second region and displayed for the front passenger 2b.
  • Virtual images 280c and 280d are displayed to the driver 2a and passenger 2b respectively by projecting images onto the windscreen 199 of the vehicle, which are then reflected back towards the driver 2a and passenger 2b.
  • Additional projection optics, for example a combination of mirrors and lenses, may be provided for this purpose.
  • The image generation system operates as a head-up display (HUD).
  • The real images will be in focus at an apparent depth within reach of the observer, whereas the virtual images will be seen as augmented/mixed reality images overlaid on physical objects outside the vehicle. This allows the occupants in the front seats to observe HUD images and infotainment images and interact with them.
  • The phased array, such as the ultrasonic array 150, generates a pressure field configured to emulate the physical sensation of the 3D object being displayed. Accordingly, the system 200 provides both visual and tactile cues of the reconstructed image. Though it is not shown in the figures, it is envisaged that the synchronised sound system described in relation to Figures 10 and 11 is optionally incorporated into the vehicle.
  • Figure 13 shows a further embodiment with multiple image generation systems 200 for projecting real images 280e and 280f to passengers 2c and 2d in the back seats of the vehicle. This embodiment functions in the manner described above.
  • Figures 14 and 15 depict the general setup of the components common to the image capture system, the image generation system 100 and the combined image capture and generation system 200. For simplicity, only the reference numerals corresponding to the combined system 200 are shown in the figures.
  • Figures 14 and 15 depict the capture and generation modes. In the capture mode, light from 3D objects in the 3D volume 280 is reflected by the mirror 240 through the field lens 230 to produce the intermediate 3D images in the imaging volume 290.
  • In the generation mode, intermediate 3D images are generated in the 3D volume 290 between the field lens 230 and the lens array 222, such that light is directed through the field lens 230 and onto the mirror 240 to produce the reconstructed 3D images in the 3D volume 280.
  • Figures 18 and 19 depict an alternative configuration for the free-form mirror and field lens. Though the depicted embodiment refers to the field lens 30 and lens array 22 of the image capture system 10, the illustrated mirror configuration is compatible with the image generation system 100 and the combined image capture and generation system 200.
  • The illustrated mirror is formed by two sub-mirrors 401 and 402.
  • A first section 401 is a truncated hemisphere, whilst a second section 402 is convex.
  • The mirror pair can assume any matching surface shape to create the required geometry for the 3D image reconstruction.
  • Light from the object 80 is incident on the first mirror section 401 and reflected towards the second section 402, which redirects the light through the field lens 30 (forming the intermediate image 90) and onto the lens array 22.
  • This arrangement reduces the size of the system by allowing optical components to be at least partially accommodated in the volume defined by the curve of the first mirror section 401.
  • A set of 2D perspective images 192, 292 or holographic 3D images 193, 293 are imaged through a lens array 122, 222 (not shown), such that an intermediate 3D image 190, 290 is formed in an imaging volume between the light field display and the field lens 130, 230.
  • This image is then relayed through the field lens before being reflected by the second mirror section 402 onto the first mirror section 401, thereby forming a reconstructed real 3D image 180, 280 projected at a distance from the mirror surface.
  • Figure 20 depicts a further embodiment of the combined image capture and generation system 200 in display mode, in which the lens array 222 is removed and replaced with a second lens array 123 which surrounds the free-form mirror 240.
  • A corresponding setup can be used in the image generation system 100.
  • The illustrated embodiment is compatible with both the 2D display device 225 of Figure 8 and the SLM 226 of Figure 9.
  • A diffusive screen 160 is positioned around the periphery of the mirror 240.
  • The 2D perspective images are formed on the diffusive screen before being relayed through the second surrounding lens array 123 to generate the real 3D images 280. While the depicted diffusive screen 160 is cylindrical, any suitable shape may be used.
  • Figures 24 and 25 depict a further embodiment of the image capture and generation system 200 which includes an interactive hand tracking system 500. Whilst the image capture and generation system 200 is set up as shown in Figure 6, the interactive hand tracking system 500 is equally compatible with the alternative setup depicted in Figure 9. The interactive hand tracking system 500 is compatible with both the 2D display device and the holographic display device embodiments of the image capture and generation system 200, as well as with the image generation system 100 in isolation.
  • The interactive hand tracking system 500 comprises a controller 510 in communication with the 2D display device 225 (or its equivalent), the 2D sensing device 224 (or its equivalent) and the phased ultrasonic array 150.
  • The controller 510 includes an image processing unit configured to recognise and track a user's hands in a known manner.
  • The position and movement of one or more hands is captured by the image capturing portion of the capture and generation system 200, whilst the display portion projects the relevant 3D object with which the user interacts.
  • The interactive hand tracking system 500 is used in conjunction with the phased ultrasonic array 150 in order to provide the sensation of tactile feedback corresponding to the displayed object and the detection of the user's hands.
  • The process of recognising the user's hands, determining their position and prompting the appropriate response is carried out by the controller 510 in a known manner.
  • Figure 25 sets out an example of the operational steps of the interactive hand tracking system 500.
  • In step S501, the position of the user's hand is recorded by the controller 510 using any known suitable means.
  • The controller 510 is in communication with the 2D sensing device 21, 224 of the image capture system 10 or the image capturing portion of the combined image capture and generation system 200, data from which feeds into the image processing unit.
  • The background is removed and the general shape of the hand is determined by known background removal methods performed at the controller 510. In an embodiment, this is achieved by analysing differences in the perspective images captured by the 2D sensing device 21, 224. Each perspective image will contain a portion of the background. By comparing the pixels of the perspective images by correlation, it is possible to determine which pixels belong to the background and which to the hand. Known filtering methods may then be applied to determine the overall shape of the hand.
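The correlation-based separation described above can be illustrated with a toy one-dimensional sketch: pixels that agree across neighbouring perspective (elemental) images are classed as distant background, while pixels that differ belong to the nearby hand, which shifts between viewpoints. The pixel values and threshold are purely illustrative assumptions.

```python
def segment_hand(view_a, view_b, threshold=20):
    """Per-pixel comparison of two neighbouring elemental images: the
    distant background appears nearly identical in both views, whereas
    the nearby hand shifts between viewpoints and so decorrelates."""
    return [abs(a - b) > threshold for a, b in zip(view_a, view_b)]

view_1 = [50, 52, 51, 200, 210, 50]   # hand occupies pixels 3-4 in this view
view_2 = [50, 52, 51, 49, 50, 50]     # hand has shifted out of these pixels
mask = segment_hand(view_1, view_2)
# mask flags only the pixels where the two perspectives disagree (the hand)
```

In practice a full implementation would correlate patches rather than single pixels and then apply the filtering mentioned above to obtain a clean hand silhouette.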
  • The overall hand shape is registered. Individual feature points of the hand image are identified and extracted for analysis. In an embodiment, this is achieved by comparing the extracted image of the hand to a database of known hand gestures accessible by the controller 510.
  • The individual portions of the hand, including the fingers, are recognised via standard edge detection and shape recognition techniques that would be apparent to the skilled person.
  • Recognisable gestures are stored as a series of feature points in the database. Detectable gestures include a button press, swipe, pick, pull and pinch zoom. The recorded feature points of the hand shape are then compared to those in the database so as to identify the most likely gesture being performed.
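A minimal sketch of this database comparison follows, under the assumption that each gesture is stored as a short, consistently ordered list of normalised feature points and that the closest template (by summed point-to-point distance) wins. The gesture names and coordinates are hypothetical.

```python
import math

# Hypothetical gesture database: each gesture is a fixed-order list of
# normalised fingertip feature points (x, y).
GESTURES = {
    "pinch": [(0.40, 0.50), (0.45, 0.50)],
    "swipe": [(0.20, 0.50), (0.80, 0.50)],
}

def match_gesture(points, db=GESTURES):
    """Return the database gesture whose feature points lie closest
    (summed Euclidean distance) to the observed hand feature points."""
    def cost(template):
        # Assumes observed points and templates share length and ordering.
        return sum(math.dist(p, q) for p, q in zip(points, template))
    return min(db, key=lambda name: cost(db[name]))

observed = [(0.41, 0.50), (0.44, 0.51)]
best = match_gesture(observed)   # nearest template for these points
```

A deployed system would of course use many more feature points and a learned matcher, as the following paragraphs note.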
  • The controller is trained to better recognise points on hands and fingers using a machine learning/deep learning algorithm, preferably operating in real time.
  • The controller 510 comprises various models of hand and finger shapes which are cross-correlated with observations of the user's hand and a database of images of known hand and finger positions so as to enable the identification and tracking of hands and fingers.
  • The position of the hands within the 3D volume around the mirror is calculated.
  • The determination of the hand's exact location is performed by the controller 510 in a known manner.
  • Multiple perspective images of the hand can be used to determine the 3D locations of the points on the hand uniquely.
  • A plurality of known scanning means are used to determine their respective distance from the hand, thereby providing its location in multiple dimensions.
  • The hand location is estimated from the observed size of the hand as compared to one or more images of a hand at a known distance stored in memory.
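The first of these options — recovering depth from multiple perspective images — reduces, for a pair of lenslets, to the familiar triangulation relation: the same hand point appears shifted (by the disparity) between two elemental images, and depth follows from similar triangles. The numbers below are illustrative assumptions.

```python
def depth_from_disparity(baseline, focal_length, disparity):
    """Pinhole-pair triangulation: a point seen in two elemental images
    whose lenslets are `baseline` apart shifts by `disparity` on the
    sensor; depth = f * b / d by similar triangles."""
    return focal_length * baseline / disparity

# Two lenslets 2 mm apart, 5 mm focal length, observed shift of 0.05 mm
z = depth_from_disparity(0.002, 0.005, 0.00005)   # depth in metres
```

With many lenslets, each pair gives an estimate and the controller can average or least-squares-fit them, which is what makes the 3D location determination unique.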
  • The position of the hands is correlated with the known virtual position of the one or more displayed 3D objects.
  • The appropriate visual, haptic and audio feedback is presented to the user.
  • The controller 510 is configured to adjust the shape, size and/or orientation of the displayed 3D objects and the output of the phased ultrasonic array in response to the detected position and movements of the user's hand.
  • The image sensing device 224 and the display device 225 of the combined image capture and display system 200 have different pixel densities.
  • The sensing device 224 has a higher resolution and pixel density than the display device.
  • A suitably sized higher-resolution display is provided by tiling smaller higher-resolution devices.
  • A single projected image from the display device is scanned using time-multiplexing in the transversal plane perpendicular to the optical axis of the system (i.e. the path taken by the light).
  • An alternative approach is depicted in Figure 16, which depicts a system with a fixed number of pixels (mx) and a fixed imaging distance (d). By increasing the distance (a) between the sensing device 224 and the lens array 222 and the focal length (f) of each lens in the lens array 222, the exit pupil size of the system is increased. Therefore, a larger display with larger pixels will give the same reconstruction when the elemental images are viewed through a larger pupil size and longer focal length lens arrays 222, when the display is placed at the matching imaging distance.
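The scaling argument can be sketched numerically: in a small-angle pinhole model of a lenslet, the direction of the emitted ray depends only on the ratio of the pixel's offset from the lens axis to the focal length, so scaling pixel pitch and focal length together leaves every ray direction (and hence the reconstruction) unchanged. This is a simplified model with illustrative values, not the patent's exact derivation.

```python
def ray_angle(pixel_offset, focal_length):
    """Small-angle direction of the ray leaving a lenslet for a pixel
    displaced `pixel_offset` from the lens axis (pinhole model)."""
    return pixel_offset / focal_length

fine   = ray_angle(10e-6, 2e-3)   # fine-pitch device, short-f lenslet
coarse = ray_angle(30e-6, 6e-3)   # 3x larger pixels, 3x longer focal length
# both emit the same ray direction, so the reconstructed light field matches
```

This is why a coarser display placed at the matching imaging distance, behind a proportionally longer-focal-length array, reproduces the light field captured by the finer sensor.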
  • Figure 17a depicts a conventional display device 121, 225 arranged relative to a lens array 122, 222 as described in the image generation system 100 and combined image capture and generation system 200 set out above. For simplicity, only a single lens of the lens array 122, 222 is shown.
  • The image generation process relies on light from one portion of the display device 121, 225 passing through a single corresponding lens of the lens array 122, 222. If light from one pixel leaks into a neighbouring lens in the array (as shown in Figure 17a), this creates aliases around the generated 3D image.
  • A holographic plate 350 is used to redistribute the light coming from the backlight 300 and display device 121, 225 such that every elemental image appears behind its corresponding lens in the array 122, 222. This replaces the baffle (which is bulky and hard to manufacture) with a thin plate of diffractive optical elements.

Abstract

An image capture and generation system (200) comprising a free-form mirror (240), a first field lens (230), a light field capturing apparatus (220, 224) and a light field display (220, 225), wherein the image capture and generation system is configured to operate in a capture mode and a generation mode, such that in the capture mode, the free-form mirror is configured to reflect incident light from an object through the field lens such that a three dimensional image of the object is formed in an intermediate imaging volume (290) between the field lens and the light field capturing apparatus, and the three dimensional image is captured by the light field capturing apparatus; and in the generation mode, the light field display is configured to project a first 3D image into an intermediate imaging volume between the light field display and the field lens, such that a second 3D image (280) is rendered, said second image reflected by the free-form mirror and formed at a distance from the surface of the free-form mirror, the second image being a real image corresponding to the first image.

Description

MULTI-ANGLE LIGHT-FIELD CAPTURE AND DISPLAY SYSTEM
TECHNICAL FIELD
The present disclosure relates to a light field capture and display system. Particularly, but not exclusively, the disclosure relates to apparatus for capturing and displaying multi-depth images.
BACKGROUND
3D displays have been studied extensively for a long time. Several key methods have been developed, such as holography, integral imaging/light field imaging and stereoscopy. However, many existing capture and display systems are either bulky (e.g. 3D cinema) and require special eyewear, or are unsuitable for out-of-lab applications (e.g. holography).
Furthermore, to have a natural feeling of 3D perception, the systems should be glasses free, multi-view, and have a large viewing angle. Ideally, an observer should be able to view a 3D object from all angles, 360° all around. Most of the volumetric 3D displays, which can provide glasses-free images, are based on projection in a diffusive medium.
The limitations of conventional 3D capture and display methods such as holography can be overcome by techniques such as integral imaging. Integral imaging is an alternative to holography and can provide 3D images under incoherent illumination leading to out-of-lab implementations. However, such a system still produces a coarse representation of the original 3D scene.
A light-field system can provide high resolution images, but current systems mostly address only the capture process. In addition, experimental light field 3D display systems only provide a limited viewing angle from a fixed point of view. Light field images captured using commercially available cameras can only be observed on a 2D display with computational refocusing. Experimental 3D light-field displays are extremely bulky setups which require multiple projection sources.
Similarly, in integral imaging/light field systems, the imaging planes are limited to a planar configuration. As such, objects are imaged from one plane to a second, parallel plane. Therefore, these systems can only provide a planar field of view with a fixed viewing angle. Furthermore, the 3D reconstruction will be within the range of the fixed focused plane defined in a depth of field of a rectangular volume.
It is an aim of the present invention to overcome at least some of these disadvantages.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide apparatus as claimed in the appended claims.
According to a first aspect of the invention there is provided an image capture system comprising a free-form mirror, a field lens and a light field capturing apparatus, wherein the free-form mirror is configured to reflect incident light from an object through the field lens such that a three dimensional image of the object is formed in an intermediate imaging volume between the field lens and the light field capturing apparatus, and wherein the three dimensional image is captured by the light field capturing apparatus.
This setup allows for multi-perspective, multi-depth views of one or more three-dimensional objects to be captured by a stationary capturing apparatus. The use of a field lens produces a smaller intermediate 3D image for capture, thereby reducing the required size of the capturing apparatus.
Optionally, the field lens is one of a hemispherical, spherical or ball lens or any combination of reciprocal/reversible optics.
Optionally, the field lens comprises at least one Fresnel lens. In an embodiment, the field lens is provided by a plurality of Fresnel lenses in a stacked configuration. This results in a lower total focal length, providing a lower numerical aperture and thus a large imaging angle. This is especially useful when it is desirable that the light rays reach the edges of the free-form mirror.
Optionally, the light field capturing apparatus comprises a first lens array and a two-dimensional photosensitive device.
Optionally, the first lens array comprises diffractive optical elements such as photon sieves and/or Fresnel zone patterns. Optionally, the first lens array comprises diffractive optical elements realised by metamaterial structures.
Optionally, the lenses in the first lens array have varying focal lengths and sizes.
Optionally, the lenses in the first lens array are placed in a configuration other than a regular rectangular array, such as a hexagonal or random placement.
Optionally, the lens arrays comprise liquid crystal lens arrays.
Optionally, the first lens array comprises reconfigurable diffractive optical elements, i.e. a spatial light modulator (SLM) such as a liquid crystal on silicon device (LCoS) or a digital micromirror device (DMD).
Optionally, the two-dimensional photosensitive device comprises a CMOS array or a CCD.
Optionally, the image capture system further comprises a second lens array at least partially surrounding the free-form mirror, such that each lens of the second lens array corresponds to a lens of the first lens array, and whereby light from an object passes through a lens of the second lens array to the free-form mirror and is reflected through the field lens and through a corresponding lens of the first lens array and onto the two-dimensional photosensitive device.
The second lens array reduces distortion of the light reflected by the free-form mirror and the corresponding nature of the lenses in the two arrays serves to limit the recorded light field and reduce cross-talk on the light sensors within the capturing apparatus.
Optionally, the lenses in the second lens array have varying focal lengths and sizes.
Optionally, the lenses in the second lens array are placed in a configuration other than a regular rectangular array, such as a hexagonal or random placement.
According to a second aspect of the invention there is provided an image capture and generation system comprising a free-form mirror, a first field lens, a light field capturing apparatus and a light field display, wherein the image capture and generation system is configured to operate in a capture mode and a generation mode, such that in the capture mode, the free-form mirror is configured to reflect incident light from an object through the field lens such that a three dimensional image of the object is formed in an intermediate imaging volume between the field lens and the light field capturing apparatus, and the three dimensional image is captured by the light field capturing apparatus, and in the generation mode, the light field display is configured to project a first 3D image in an intermediate imaging volume between the light field display and the field lens, such that a second 3D image is reflected by the free-form mirror and formed at a distance from the surface of the free-form mirror, the second image being a real image corresponding to the first image.
Such a system combines the advantages of the separate capture and generation systems to provide a complete three-dimensional media device.
Optionally, the lens array of the light field capturing apparatus and the lens array of the light field display are provided by a single lens array. Having the capturing apparatus and display share optics serves to reduce the component count and complexity of the system as well as its overall volume.
Optionally, the image capture and generation system further comprises a beam splitter located between the field lens and the light field capturing apparatus and display, the beam splitter being configured such that in the generation mode the three dimensional image generated by the light field display is relayed through the field lens and reflected by the free-form mirror, and in the capture mode incident light from an object is reflected by the free-form mirror through the field lens and onto the light field capturing apparatus.
As with the shared lens array, the beam splitter provides a compact optical means by which light can be directed both from the light field display and to the light field capturing apparatus without the need for further optics.
Optionally, the free-form mirror is a convex mirror.
Optionally, the free-form mirror is a conical mirror.
Optionally, the free-form mirror is a concave mirror.
Optionally, the field lens comprises at least one Fresnel lens. In an embodiment, the field lens is provided by a plurality of Fresnel lenses in a stacked configuration. This results in a lower total focal length, providing a lower numerical aperture and thus a large imaging angle. This is especially useful when it is desirable that the light rays reach the edges of the free-form mirror.
Optionally, the light field display comprises a picture generation unit and a lens array.
Optionally, the picture generation unit comprises one of a laser scanner, a hologram generator, a pixelated display or a projector, wherein the projector comprises a light source and a spatial light modulator, wherein the spatial light modulator is one of a digital micromirror device or a liquid crystal on silicon device.
Optionally, the pixelated display comprises one of an OLED, QLED or micro-LED display.
Optionally, the pixelated display comprises either a rectangular or a circular pixel configuration.
Optionally, the image capture and generation system further comprises an image processor in communication with the light-field display, wherein the image processor is configured to account for distortions caused by the optical setup such that the second image appears undistorted. This obviates the need for any post-image-generation corrections as well as bulky correction optics. Furthermore, it provides a higher degree of flexibility which can adapt to different display/mirror surfaces.
Optionally, the image capture and generation system further comprises a phased array of ultrasonic transducers configured to provide a tactile output corresponding to the dimensions of the second image. This provides haptic feedback to a user interacting with the displayed image, thereby improving the perceived reality of the displayed object. Said transducer array is flexible and reconfigurable so as to conform to the exact size and shape of the image displayed.
Optionally, a tracking system is used to track hand gestures and head, eye, hand and finger positions in order to increase the accuracy of the tactile system.
Optionally, the phased array of ultrasonic transducers is located around the periphery of the free-form mirror.
Other aspects of the invention will be apparent from the appended claim set.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic illustration of the image capture apparatus according to an aspect of the invention;
Figure 2 is a schematic illustration of the image capture apparatus according to an aspect of the invention;
Figure 3 is a schematic illustration of the image capture apparatus according to an aspect of the invention;
Figure 4 is a schematic illustration of the image generation apparatus according to an aspect of the invention;
Figure 5 is a schematic illustration of the image generation apparatus according to an aspect of the invention;
Figure 6 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 7 is a schematic illustration of the image generation apparatus according to an aspect of the invention;
Figure 8 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 9 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 10 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 11 is a schematic illustration of a portion of the image capture and generation apparatus according to an aspect of the invention;
Figure 12 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 13 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 14 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 15 is a schematic illustration of the optical setup of the image capture and generation apparatus an aspect of the invention;
Figure 16 is a schematic illustration of a portion of the image capture and generation apparatus according to an aspect of the invention;
Figure 17 is a schematic illustration of a portion of the image capture and generation apparatus according to an aspect of the invention;
Figure 18 is a schematic illustration of the free-form mirror according to an aspect of the invention;
Figure 19 is a schematic illustration of the free-form mirror according to an aspect of the invention;
Figure 20 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 21 is a schematic illustration of the field lens according to an aspect of the invention;
Figure 22 is a schematic illustration of the free-form mirror according to an aspect of the invention;
Figure 23 is a schematic illustration of the free-form mirror according to an aspect of the invention;
Figure 24 is a schematic illustration of the image capture and generation apparatus according to an aspect of the invention;
Figure 25 is a flow chart according to an aspect of the invention.
DETAILED DESCRIPTION
Particularly, but not exclusively, the disclosure relates to apparatus for capturing and projecting multi-dimensional true 3D images. The system can further be configured to provide 3D augmented reality images. Example applications can be, but are not limited to, automotive, telepresence, entertainment (gaming, 3DTV, museums and marketing), and education.
Figure 1 shows an image capture system 10 made up of a light field capturing apparatus 20, a field lens 30, and a free-form mirror 40. The light field capturing apparatus 20 is formed by a 2D sensing device 21 and a lens array 22. In an embodiment, the 2D sensing device 21 is a CCD. In an alternative embodiment, the 2D sensing device 21 is a CMOS device. The lens array 22 is provided by an array of diffractive optical elements (DOE) such as photon sieves. In a further embodiment, the lens array 22 is provided by a liquid crystal lens array. In a further embodiment, the lens array 22 is provided by reconfigurable DOEs. In an alternative embodiment, the lens array 22 comprises phase Fresnel lens patterns on a phase-only liquid crystal on silicon (LCoS) device. In an alternative embodiment, the lens array 22 comprises amplitude Fresnel lens patterns on a digital micromirror device (DMD) or an amplitude-only LCoS. In an alternative embodiment, the lens array 22 comprises conventional refractive micro lenses. Accordingly, any suitable image capture means and lens array may be employed to provide the light field capturing apparatus 20.
The field lens 130 may be provided by any form of suitable lens, including, but not limited to, a hemispherical, spherical or ball lens.
In an embodiment, the field lens 130 comprises at least one Fresnel lens. In a particular embodiment, a plurality of Fresnel lenses are provided in a stacked configuration, as shown in Figure 21.
In the illustrated embodiment, the mirror 40 is a hemi-spherical, parabolic convex mirror, with a 360° field of view. The skilled person would appreciate that any curved or multi-angled reflective surface may be employed, and that the field of view of the mirror 40 need not be limited to 2π steradians. As a result, the volume around the mirror in which a resulting 3D image is displayed can be cylindrical, spherical, conical or any arbitrary shape.
In an embodiment, the free-form mirror 140 is formed by a Fresnel lens 142 on top of a flat surface mirror 141, as shown in Figure 22. This allows a parabolic mirror to be simulated by a setup which beneficially has a thinner form factor.
In a further embodiment, the free-form mirror 140 is formed by a holographic reflective plate 143 with an equivalent phase profile encoded, as shown in Figure 23.
The path of the light through the system is referred to as the optical path. The skilled person would understand that, in further embodiments, any suitable number of intervening reflectors/lenses or other optical components are included so as to manipulate the optical path as necessary (for example, to minimize the overall size of the image capture system 10).
In use, light from a 3D object 80 to be imaged is incident on the mirror 40 and reflected through the field lens 30. The field lens 30 produces an intermediate 3D image 90 in an imaging volume between the field lens 30 and the light field capturing apparatus 20. A different 2D perspective view (or elemental image) 92 of this intermediate image is imaged through a corresponding lens of the lens array 22 and onto a corresponding portion of the 2D sensing device 21. The differing distances between points of the intermediate 3D image and the lenses of the lens array 22 affect the focus of each corresponding perspective view, such that the captured 2D perspective views (elemental images) provide an apparent depth of field.
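The capture geometry just described, where each lenslet images the intermediate 3D image from a slightly different position, can be illustrated with a toy pinhole model in a 2D slice. All dimensions below are hypothetical and the thin-lens focusing effects the paragraph mentions are ignored.

```python
def elemental_projection(point, lens_centers, gap):
    """Pinhole model of the lens array: project one point of the
    intermediate 3D image through each lenslet onto a sensor plane a
    distance `gap` behind the array. Each lenslet records the point at a
    slightly different position, and these shifts encode its depth."""
    x, z = point            # lateral position and depth in front of the array
    views = []
    for cx in lens_centers:
        # Similar triangles: the image-side offset scales with gap / z.
        views.append(cx - (x - cx) * gap / z)
    return views

# A point 50 mm in front of a 3-lenslet array with a 5 mm sensor gap
imgs = elemental_projection((1.0, 50.0), [-10.0, 0.0, 10.0], 5.0)
# neighbouring lenslets record the point at shifted positions (parallax)
```

Repeating this for every point of the intermediate image yields the set of elemental images recorded on the 2D sensing device 21.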
Figures 2 and 3 show the effect of distortions of the captured image(s) 85 as a result of the free-form mirror 40 and/or the field lens 30. Further distortions can be introduced by any intervening optical components used to manipulate the optical path. Such distortions can be corrected post capture by software in a known manner.
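As an illustration of such a software correction, the sketch below uses a one-term radial distortion model and inverts it by fixed-point iteration. The model choice and the coefficient k1 are assumptions for illustration; in practice the distortion profile would be calibrated for the particular mirror and field lens.

```python
def distort(x, y, k1):
    """Forward one-term radial distortion model (illustrative stand-in
    for the distortion introduced by the mirror/field-lens chain)."""
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

def undistort(x, y, k1, iterations=5):
    """Invert the radial model by fixed-point iteration: repeatedly
    re-estimate the undistorted radius from the current guess."""
    xu, yu = x, y
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        xu, yu = x / (1 + k1 * r2), y / (1 + k1 * r2)
    return xu, yu

xd, yd = distort(0.5, 0.2, 0.1)   # simulate the captured, distorted point
xu, yu = undistort(xd, yd, 0.1)   # software correction recovers (0.5, 0.2)
```

Applying this mapping to every pixel of the captured elemental images yields the corrected light field.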
Figure 4 shows an image generation system 100, made up of a light field display 120, a field lens 130, and a free-form mirror 140.
The light field display 120 is formed by a 2D display device 121 and a lens array 122. In an embodiment, the 2D display device 121 is an LCD screen (either a single device or multiple, tiled devices). In a further embodiment, the 2D display device 121 comprises a circular pixel configuration instead of the conventional rectangular configuration, wherein the relevant scaling and image generation is achieved by known image processing means and processes. In an alternative embodiment, the 2D display device 121 comprises a scanning mirror and a digital micromirror device (DMD), or a liquid crystal on silicon (LCoS) device, though the skilled person would appreciate that any suitable light source and imaging means (including 3D holographic display devices) may be used provided they were capable of operating in the manner described below. The lens array 122 comprises an array of diffractive optical elements (DOE) such as photon sieves). In a further embodiment, the lens array 22 is provided by an array of liquid crystal lens arrays. In a further embodiment, the lens array 22 is provided by reconfigurable DOE.
In an alternative embodiment, the lens array 122 comprises phase Fresnel lens patterns on a phase-only LCoS device. In an alternative embodiment, the lens array 122 comprises amplitude Fresnel lens patterns on a digital micromirror device (DMD) or an amplitude-only LCoS device. In an alternative embodiment, the lens array 122 comprises conventional lenses. The skilled person would appreciate that any suitable image generation means and lens array may be employed to provide the light field display 120.
As with the image capture system depicted in Figures 1, 2 and 3, the field lens 130 may be provided by any suitable lens, including a hemispherical, spherical or ball lens, or one or more Fresnel lenses as depicted in Figure 21.
As with the image capture system depicted in Figures 1, 2 and 3, the mirror 140 is a hemispherical, parabolic convex mirror with a 360° field of view, though again the skilled person would appreciate that any suitably shaped reflective surface may be employed, including the mirrors depicted in Figures 22 and 23 discussed above in relation to the image capture system.
The skilled person would understand that in further embodiments, any suitable number of intervening reflectors/lenses or other optical components are included so as to manipulate the optical path as necessary (for example, to minimize the overall size of the image generation system 100).
In use, a series of 2D perspective images (elemental images) 192 are displayed on the 2D display 121, and each of the 2D perspective images is imaged through a corresponding lens of the lens array 122, such that an intermediate 3D image 190 is formed in an imaging volume between the light field display 120 and the field lens 130. This image is relayed through the field lens 130 before being reflected by the free-form mirror 140 towards one or more users 2, who in turn see a reconstructed real 3D image 180 projected at a distance from the free-form mirror surface.
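The mapping from a point of the intermediate 3D image to a sample of one elemental image can be illustrated with a paraxial pinhole approximation of a single lens of the array. The function, coordinate convention and parameter values below are illustrative assumptions only, not the geometry of any particular embodiment:

```python
def elemental_pixel(point, lens_center, gap, pixel_pitch):
    """Paraxial pinhole sketch of one lens of the array: project a point of
    the intermediate 3D image onto the display/sensor plane behind the lens.

    point       -- (x, y, z), z being the distance from the lens-array plane
    lens_center -- (x, y) position of the lens within the array
    gap         -- separation between the lens array and the display plane
    Returns the (column, row) pixel index of the elemental-image sample.
    """
    x, y, z = point
    lx, ly = lens_center
    # Similar triangles: the transverse offset from the lens centre is
    # scaled by gap / z and inverted through the pinhole.
    u = lx - (x - lx) * gap / z
    v = ly - (y - ly) * gap / z
    return round(u / pixel_pitch), round(v / pixel_pitch)
```

By reciprocity, the same relation describes both capture (object point to sensor pixel) and generation (display pixel to reconstructed point).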
Figure 5 shows an embodiment of the image generation system 100 in which a spatial light modulator (SLM) 125 is used in place of the 2D display 121. The remaining components and their arrangement are otherwise identical to those described above in relation to Figure 4, the common reference numerals of Figures 4 and 5 referring to the same components. Accordingly, a series of holographic 3D elemental images 193 are generated for transmission through the lens array 122, in place of the 2D perspective images 192.
Figure 6 depicts a combined image capture and generation system 200 formed by a free form mirror 240, a field lens 230 and a combined light field capture and display device 220, with the mirror 240 and field lens 230 arranged relative to the light field capture and display device 220 in the same manner as the mirror 40, 140 and field lens 30, 130 of the separate image capture system 10 and image generation system 100 are arranged relative to the light field capturing apparatus 20 and light field display 120 respectively.
The light field capture and display device 220 is made up of a lens array 222 and a beam splitter 223 separating a 2D sensing device 224 from a 2D display device 225. As with the separate capture and generation systems described above, any suitable light sensing and display means may be employed provided they operate as described below.
In an embodiment, the 2D sensing device 224 is a CCD. In an alternative embodiment, the 2D sensing device 224 is a CMOS device.
In an embodiment, the 2D display device 225 is an LCD screen (either a single device or multiple, tiled devices). In a further embodiment, the 2D display device 225 comprises a scanning mirror and a digital micromirror device (DMD) or a liquid crystal on silicon (LCoS) device, though the skilled person would appreciate that any suitable light source and imaging means (including 3D holographic display devices) may be used provided they were capable of operating in the manner described below.
In an embodiment, the lens array 222 is provided by an array of diffractive optical elements (DOEs), such as photon sieves. In a further embodiment, the lens array 222 is provided by an array of liquid crystal lens arrays. In a further embodiment, the lens array 222 is provided by a reconfigurable DOE. In an alternative embodiment, the lens array 222 comprises phase Fresnel lens patterns on a phase-only liquid crystal on silicon (LCoS) device. In an alternative embodiment, the lens array 222 comprises amplitude Fresnel lens patterns on a digital micromirror device (DMD) or an amplitude-only LCoS device. In an alternative embodiment, the lens array 222 comprises conventional refractive micro-lenses.
Accordingly, any suitable image capture means and lens array may be employed to provide the combined light field capture and display device 220.
As with the image capture system depicted in Figures 1, 2 and 3, the mirror 240 is a hemispherical, parabolic convex mirror with a 360° field of view, though again the skilled person would appreciate that any curved or multi-angled reflective surface may be employed, and that the field of view of the mirror 240 need not be limited to 2π steradians.
The skilled person would understand that in further embodiments, any suitable number of intervening reflectors/lenses or other optical components are included so as to manipulate the optical path as necessary (for example, to minimize the overall size of the image capture and generation system 200).
In use, the image capture and generation system 200 operates in either a capture mode or a generation mode. In the capture mode, light reflected from a 3D object 280 is incident on the mirror 240 and reflected through the field lens 230, forming an intermediate 3D image 290 in an imaging volume between the field lens 230 and the lens array 222. This intermediate 3D image is imaged through the lens array 222 and the beam splitter 223 and is captured by the 2D sensing device 224.
In the generation mode, a series of 2D perspective images (elemental images) 292 are displayed on the 2D display 225 and each of the 2D perspective images is reflected by the beam splitter 223 through a corresponding lens of the lens array 222, such that an intermediate 3D image 290 is formed in an imaging volume between the lens array 222 and the field lens 230. This image is relayed through the field lens 230 before being reflected by the free-form mirror 240 towards one or more users who in turn observe a reconstructed 3D image 280 at a distance from the surface of the free-form mirror 240.
The 3D images generated by both the combined image capture and generation system 200 and the image generation system 100 are able to be displayed in different ways. Figure 7 shows the different ways a 3D image can be projected. Whilst the mirror 240 shown in Figure 7 is that of the combined image capture and generation system 200, it may also be mirror 140 of the image generation system 100. Whilst the field lens 230 and light field capture and display device 220 of the system 200 are not shown, they are arranged as described with reference to Figure 6. A large, single image can be reconstructed around the mirror 240 such that an observer can view different parts of the same 3D image. In an alternative embodiment, multiple different 3D images can be displayed around the mirror 240. Observers will view different objects at different locations around the mirror. In a further embodiment, different observers can view the same portion of a common object.
In a further embodiment, the image capture and generation system 200 is able to operate in both capture and generation mode concurrently, with an image being projected on to one portion of the mirror 240 and captured from another portion.
An advantage of the image capture and generation system 200 employing a common optical set up in both the capture and generation modes is that any distortion affecting the captured image will be undone when these captured images are displayed by the same system.
Figure 8 depicts an alternative embodiment to that of figure 6, wherein a separate lens array is provided for each of the 2D sensing device 224 and the 2D display device 225. The mirror 240 and field lens 230 and their relative arrangement to the light field capturing apparatus and display device 220 are otherwise as depicted and described with reference to Figure 4, with the common reference numerals referring to the same components.
Figure 9 depicts a further embodiment in which the 2D display device 225 is replaced with a 3D light field display provided by a spatial light modulator (SLM) 226, in a similar manner to Figure 5. Accordingly, the elemental images 292 are holographic 3D perspective images 293. The mirror 240 and field lens 230 and their relative arrangement are otherwise as depicted and described with reference to Figure 8, with the common reference numerals referring to the same components. The skilled person would appreciate that the SLM 226 is also compatible with the single lens array 222 embodiment of Figure 6.
Figures 10 and 11 show a further embodiment of the image capture and generation system 200 which includes a phased array of ultrasonic transducers 150 arranged around the periphery of the free-form mirror 240. For simplicity, the features of the image capture portion of the system 200 are not shown. Whilst Figure 10 does not depict the 2D display device 225, it is present and arranged as depicted in any of Figures 6, 8 and 9, the phased ultrasonic array 150 being compatible with both the 2D display device and the holographic display device embodiments, as well as with the image generation system 100 in isolation. Though the phased array is preferably an array of ultrasonic transducers 150, it may be provided by any suitable means. The phased array is configured to provide haptic feedback to the user in order to allow the user to interact with the displayed object.
In Figure 11, whilst the field lens 230 and light field capture and display device 220 of the system 200 are not shown, they are arranged as described with reference to Figures 6, 8 and 9. Accordingly, the embodiment of Figures 10 and 11 is identical to that of Figures 6, 8 and 9 with the addition of the phased ultrasonic array 150, the common reference numerals referring to the same components.
In use, the phased ultrasonic array 150 generates a pressure field configured to emulate the physical sensation of the 3D object being displayed thus providing haptic feedback to the user. Accordingly, the system 200 provides both visual and sensational cues of the reconstructed image. In a further embodiment, a synchronised sound system is used to generate a 3D surrounding and directional audible acoustic field. Such a system allows for the three components of the human sensory system (namely vision, somatosensation and audition) to be stimulated such that it is possible to completely simulate the sensation and interaction with physical objects.
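The principle by which a phased array produces a localised pressure point may be sketched, for illustration only, as the computation of per-element firing delays that bring all wavefronts into phase at the focus. The function name, geometry and the nominal speed of sound are illustrative assumptions, not design values from the disclosure:

```python
import math

def focus_delays(element_positions, focus_point, speed_of_sound=343.0):
    """Per-element firing delays (seconds) that bring the wavefronts of an
    ultrasonic phased array into phase at `focus_point`, producing the
    localised pressure maximum used for mid-air haptic feedback.

    Elements nearer the focus are delayed more, so that every wavefront
    arrives at the focus together.  Positions are in metres.
    """
    distances = [math.dist(p, focus_point) for p in element_positions]
    farthest = max(distances)
    return [(farthest - d) / speed_of_sound for d in distances]
```

Steering the haptic point to track the displayed 3D object then amounts to recomputing the delays for each new focus position.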
Figures 12 and 13 show an embodiment of the invention in which the image capture and generation system 200 is installed in a vehicle.
For simplicity, only the mirror 240 and the phased ultrasonic array 150 of the system are pictured. Whilst the field lens 230 and light field capture and display device 220 of the system 200 are not shown, they are arranged as described with reference to Figure 4.
In both Figures 12 and 13 the multi-direction projection capabilities of the image capture and generation system 200 enable a first real 3D image 280a to be generated at a first region and displayed towards the driver 2a of the vehicle, whilst a second real image 280b is generated at a second region and displayed for the front passenger 2b. At the same time, virtual images 280c and 280d are displayed to the driver 2a and passenger 2b respectively by projecting images onto the windscreen 199 of the vehicle, which are then reflected back towards the driver 2a and passenger 2b. Additional projection optics (a combination of mirrors and lenses) between the windscreen 199 and the mirror 240 can be used to scale the size of the virtual images. Accordingly, the image generation system operates as a head-up display (HUD). In an embodiment, the real images will be in focus at an apparent depth within reach of the observer, whereas the virtual images will be seen as augmented/mixed reality images overlaid on physical objects outside the vehicle. This allows the occupants in the front seats to observe HUD images and infotainment images and interact with them.
The phased array, such as the ultrasonic array 150, generates a pressure field configured to emulate the physical sensation of the 3D object being displayed. Accordingly, the system 200 provides both visual and sensational cues of the reconstructed image. Though it is not shown in the figures, it is envisaged that the synchronised sound system described in relation to Figures 10 and 11 is optionally incorporated into the vehicle.
Figure 13 shows a further embodiment with multiple image generation systems 200 for projecting real images 280e and 280f to passengers 2c and 2d in the back seats of the vehicle. This embodiment functions in the manner described above.
Figures 14 and 15 depict the general setup of the components common to the image capture system 10, the image generation system 100 and the combined image capture and generation system 200. For simplicity, only the reference numerals corresponding to the combined system 200 are shown in the figures.
Figures 14 and 15 depict the capture and generation modes. In the capture mode, light from 3D objects in the 3D volume 280 is reflected by the mirror 240 through the field lens 230 to produce the intermediate 3D images in the imaging volume 290. In the generation mode, intermediate 3D images are generated in the 3D volume 290 between the field lens 230 and the lens array 222 such that light is directed through the field lens 230 and onto the mirror 240 to produce the reconstructed 3D images in the 3D volume 280.
Figures 18 and 19 depict an alternative configuration for the free-form mirror and field lens. Though the depicted embodiment refers to the field lens 30 and lens array 22 of the image capture system 10, the illustrated mirror configuration is compatible with the image generation system 100 and the combined image capture and generation system 200.
The illustrated mirror is formed by two sub-mirrors 401 and 402. A first section 401 is a truncated hemisphere, whilst a second section 402 is convex, though the mirror pair can assume any matching surface shapes to create the required geometry for the 3D image reconstruction. In the capture mode, light from the object 80 is incident on the first mirror section 401 and reflected towards the second section 402, which redirects the light through the field lens 30 (forming the intermediate image 90) and onto the lens array 22. This arrangement reduces the size of the system by allowing optical components to be at least partially accommodated in the volume defined by the curve of the first mirror section 401.
In the display mode, a set of 2D perspective images 192, 292 or holographic 3D images 193, 293 are imaged through a lens array 122, 222 (not shown), such that an intermediate 3D image 190, 290 is formed in an imaging volume between the light field display and the field lens 130, 230. This image is then relayed through the field lens before being reflected by the second mirror section 402 onto the first mirror section 401, thereby forming a reconstructed real 3D image 180, 280 projected at a distance from the mirror surface.
Figure 20 depicts a further embodiment of the combined image capture and generation system 200 in display mode, in which the lens array 222 is removed and replaced with a second lens array 123 which surrounds the free-form mirror 240. A corresponding setup can be used in the image generation system 100.
The illustrated embodiment is compatible with both the 2D display device 225 of Figure 8 and the SLM 226 of Figure 9.
When the light field capture and display device 220 is configured to generate a series of 2D perspective images 292, a diffusive screen 160 is positioned around the periphery of the mirror 240. In use, the 2D perspective images are formed on the diffusive screen before being relayed through the second surrounding lens array 123 to generate the real 3D images 280. While the depicted diffusive screen 160 is cylindrical, any suitable shape may be used.
When the light field capture and display device 220 utilises an SLM 226 to generate a series of 3D perspective images 293, no diffusive screen is employed.
Figures 24 and 25 depict a further embodiment of the image capture and generation system 200 which includes an interactive hand tracking system 500. Whilst the image capture and generation system 200 is set up as shown in Figure 6, the interactive hand tracking system 500 is equally compatible with the alternative setup depicted in Figure 9. The interactive hand tracking system 500 is compatible with both the 2D display device and the holographic display device embodiments of the image capture and generation system 200, as well as with the image generation system 100 in isolation.
The interactive hand tracking system 500 comprises a controller 510 in communication with the 2D display device 225 (or its equivalent), the 2D sensing device 224 (or its equivalent) and the phased ultrasonic array 150. The controller 510 includes an image processing unit configured to recognise and track a user’s hands in a known manner.
In use, the position and movement of one or more hands is captured by the image capturing portion of the capture and generation system 200, whilst the display portion projects the relevant 3D object with which the user interacts. In an embodiment, the interactive hand tracking system 500 is used in conjunction with the phased ultrasonic array 150 in order to provide the sensation of tactile feedback corresponding to the displayed object and the detection of the user’s hands.
The process of recognising the user’s hands, determining their position and prompting the appropriate response is carried out by a controller 510 in a known manner.
Figure 25 sets out an example of the operational steps of the interactive hand tracking system 500.
In step S501, the position of the user's hand is recorded by the controller 510 using any known suitable means. In an embodiment, the controller 510 is in communication with the 2D sensing device 21, 224 of the image capture system 10 or the image capturing portion of the combined image capture and generation system 200, data from which feeds into the image processing unit.
At step S502, the background is removed and the general shape of the hand is determined by known background removal methods performed at the controller 510. In an embodiment, this is achieved by analysing differences between the perspective images captured by the 2D sensing device 21, 224. Each perspective image will contain a portion of the background. By comparing the pixels of the perspective images by correlation, it is possible to determine which pixels belong to the background and which to the hand. Known filtering methods may then be applied to determine the overall shape of the hand.
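The segmentation of step S502 may be sketched, for illustration only, by measuring how much each pixel disagrees across the stack of perspective views: a distant background shifts little between neighbouring views, whereas a nearby hand exhibits parallax. The sketch below uses per-pixel variance as a simplified stand-in for the correlation comparison described above; the function name and threshold are illustrative assumptions:

```python
import numpy as np

def foreground_mask(elemental_images, threshold):
    """Segment a nearby hand from a distant background across a stack of
    perspective (elemental) images.

    Background pixels agree across the stack, whereas a nearby hand
    exhibits parallax and disagrees.  Disagreement is measured here as
    per-pixel variance across views; a fuller implementation would also
    compensate the known per-view shift of each elemental image.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in elemental_images])
    return stack.var(axis=0) > threshold
```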
At step S503, the overall handshape is registered. Individual feature points of the hand image are identified and extracted for analysis. In an embodiment, this is achieved by comparing the extracted image of the hand to a database of known hand gestures accessible by the controller 510.
At step S504, the individual portions of the hand, including the fingers, are recognised via standard edge detection and shape recognition techniques that would be apparent to the skilled person. In an embodiment, recognisable gestures are stored as a series of feature points in the database. Detectable gestures include a button press, swipe, pick, pull and pinch-zoom. The recorded feature points of the handshape are then compared to those in the database so as to identify the most likely gesture being performed.
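The comparison of recorded feature points against the database may be sketched, purely for illustration, as nearest-template matching. The gesture names, template coordinates and distance criterion below are invented placeholders, not contents of any actual gesture database:

```python
import numpy as np

# Illustrative feature-point templates in normalised image coordinates; a
# real database would hold many more gestures and points per gesture.
GESTURE_DB = {
    "press": np.array([[0.5, 0.2], [0.5, 0.8]]),
    "pinch": np.array([[0.3, 0.5], [0.7, 0.5]]),
}

def classify_gesture(feature_points, database=GESTURE_DB):
    """Return the database gesture whose template minimises the mean
    Euclidean distance to the observed feature points."""
    obs = np.asarray(feature_points, dtype=float)
    return min(database,
               key=lambda g: np.linalg.norm(obs - database[g], axis=1).mean())
```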
At step S505, the controller is trained to better recognise points on hands and fingers using a preferably real-time machine learning/deep learning algorithm. In an embodiment, the controller 510 comprises various models of hand and finger shapes which are cross-correlated with observations of the user's hand and a database of images of known hand and finger positions so as to enable the identification and tracking of hands and fingers.
At step S506 the position of the hands within the 3D volume around the mirror is calculated. The determination of the hand’s exact location is performed by the controller 510 in a known manner. In an embodiment, multiple perspective images of the hand can be used to determine the 3D locations of the points on the hand uniquely. In an embodiment, a plurality of known scanning means are used to determine their respective distance from the hand, thereby providing its location in multiple dimensions. In another embodiment, the hand location is estimated from the observed size of the hand as compared to one or more images of a hand at a known distance stored in memory.
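The use of multiple perspective images to locate a hand feature in depth may be illustrated with the standard pinhole-stereo relation between two elemental images; the function and all quantities are illustrative assumptions rather than a claimed method:

```python
def depth_from_disparity(baseline, gap, disparity):
    """Pinhole-stereo sketch of locating a hand feature in depth:
    `baseline` is the spacing of the two lenses whose elemental images are
    compared, `gap` the lens-to-sensor distance, and `disparity` the shift
    of the feature between the two elemental images (same units throughout).
    """
    if disparity == 0:
        raise ValueError("no parallax observed; feature is effectively at infinity")
    # Similar triangles: depth / baseline = gap / disparity
    return gap * baseline / disparity
```

Combining such depth estimates from several lens pairs yields the 3D position used in step S507.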
At step S507, the position of the hands is correlated with the known virtual position of the one or more displayed 3D objects.
At step S508, the appropriate visual, haptic and audio feedback is presented to the user, the controller 510 being configured to adjust the shape, size and/or orientation of the displayed 3D objects and the output of the phased ultrasonic array in response to the detected position and movements of the user's hand.
PIXEL DISPARITY
In an embodiment, the image sensing device 224 and the display device 225 of the combined image capture and display system 200 have different pixel densities. In a particular embodiment, the sensing device 224 has a higher resolution and pixel density than the display device.
In an embodiment, a suitably sized higher resolution display is provided by tiling smaller higher resolution devices. In an alternative embodiment, a single projected image from the display device is scanned using time-multiplexing in the transversal plane perpendicular to the optical axis of the system (i.e. the path taken by the light).
An alternative approach is depicted in Figure 16, which shows a system with a fixed number of pixels (mx) and a fixed imaging distance (d). By increasing the distance (a) between the sensing device 224 and the lens array 222 and the focal length (f) of each lens in the lens array 222, the exit pupil size of the system is increased. Therefore, a larger display with larger pixels will give the same reconstruction when the elemental images are viewed through larger-pupil, longer-focal-length lens arrays 222, with the display placed at the matching imaging distance.
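The scaling argument can be illustrated numerically: keeping the angular sample size (pixel pitch over focal length) and the gap-to-focal-length ratio invariant means every length in the display-side configuration simply scales together. The function below is an illustrative paraxial sketch, not a design procedure from the disclosure:

```python
def scaled_display_parameters(pixel_pitch, focal_length, gap, scale):
    """Scale a lens-array configuration to a display with `scale`-times
    larger pixels while preserving the reconstruction.  The angular sample
    size (pixel_pitch / focal_length) and the ratio gap / focal_length are
    both invariant under this scaling, so the reconstructed light field is
    unchanged apart from overall size.
    """
    return pixel_pitch * scale, focal_length * scale, gap * scale
```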
CROSSTALK
Figure 17a depicts a conventional display device 121, 225 arranged relative to a lens array 122, 222 as described in the image generation system 100 and combined image capture and generation system 200 set out above. For simplicity, only a single lens of the lens array 122, 222 is shown.
The image generation process relies on light from one portion of the display device 121 , 225 passing through a single corresponding lens of the lens array 122, 222. If light from one pixel leaks into a neighbouring lens in the array (as shown in Figure 17a), this creates aliases around the generated 3D image.
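The leak condition can be illustrated with a short paraxial check of whether a pixel's emission cone can reach the neighbouring lens; the function and all quantities are illustrative assumptions, not design values from the disclosure:

```python
import math

def leaks_into_neighbour(pixel_offset, gap, lens_pitch, max_ray_angle):
    """Paraxial check for the crosstalk of Figure 17a: can light from a
    display pixel at `pixel_offset` from its lens centre reach the
    neighbouring lens of the array?

    A ray leaving the pixel at up to `max_ray_angle` (radians) travels the
    `gap` to the lens plane; if it can land beyond half the lens pitch it
    enters the neighbouring lens and produces an alias.  A real design
    would use the measured emission cone of the display 121, 225.
    """
    farthest_landing = abs(pixel_offset) + gap * math.tan(max_ray_angle)
    return farthest_landing > lens_pitch / 2.0
```

Narrowing the emission cone (or redistributing the light, as below) removes the leak for a given geometry.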
It is known to place a baffle array between the lens array 122, 222 and the display device 121 , 225 to block the light from neighbouring regions. An equivalent setup is applied to the light field capture apparatus 20.
In an alternative embodiment, depicted in Figure 17b, a holographic plate 350 is used to redistribute the light coming from the backlight 300 and display device 121, 225 such that every elemental image appears behind its corresponding lens in the array 122, 222. This replaces the baffle (which is bulky and hard to manufacture) with a thin plate comprising a diffractive optical element.

Claims

1. An image capture system comprising a free-form mirror, a field lens and a light field capturing apparatus,
wherein the free-form mirror is configured to reflect incident light from an object through the field lens such that a three-dimensional image of the object is formed in an intermediate imaging volume between the field lens and the light field capturing apparatus, and
wherein the three-dimensional image is captured by the light field capturing apparatus.
2. The system of claim 1 wherein the field lens is one of a hemispherical, spherical or ball lens or any combination of reciprocal/reversible optics.
3. The system of any preceding claim wherein the light field capturing apparatus comprises a first lens array and a two-dimensional photosensitive device.
4. The system of claim 3 wherein the two-dimensional photosensitive device comprises a CMOS array or a CCD.
5. The image capturing system of claims 3 or 4 further comprising a second lens array at least partially surrounding the free-form mirror, such that each lens of the second lens array corresponds to a lens of the first lens array, and whereby light from an object passes through a lens of the second lens array to the free-form mirror and is reflected through the field lens and through a corresponding lens of the first lens array and onto the two-dimensional photosensitive device.
6. The system of any of claims 3-5 wherein the lens arrays comprise diffractive optical elements.
7. The system of any of claims 3-5 wherein the lens arrays comprise spatial light modulators.
8. An image capture and generation system comprising a free-form mirror, a first field lens, a light field capturing apparatus and a light field display,
wherein the image capture and generation system is configured to operate in a capture mode and a generation mode, such that: in the capture mode, the free-form mirror is configured to reflect incident light from an object through the field lens such that a three-dimensional image of the object is formed in an intermediate imaging volume between the field lens and the light field capturing apparatus, and the three-dimensional image is captured by the light field capturing apparatus; and
in the generation mode, the light field display is configured to project a first
3D image in an intermediate imaging volume between the light field display and the field lens, such that a second 3D image is rendered, said second image reflected by the free-form mirror and formed at a distance from the surface of the free-form mirror, the second image being a real image corresponding to the first image.
9. The image capture and generation system of claim 8 wherein the field lens is one of a hemispherical, spherical or ball lens or any combination of reciprocal/reversible optics.
10. The image capture and generation system of any of claims 8 and 9 wherein the light field capturing apparatus comprises a lens array and a two-dimensional photosensitive device.
11. The image capture and generation system of claim 10 wherein the two-dimensional photosensitive device comprises a CMOS array or a CCD.
12. The image capture and generation system of any of claims 8-11 wherein the light field display comprises a picture generation unit and a lens array.
13. The image capture and generation system of claim 12 wherein the picture generation unit comprises one of a laser scanner, a hologram generator, a pixelated display or a projector, wherein the projector comprises a light source and a spatial light modulator.
14. The image capture and generation system of claim 12 as dependent on claim 10 wherein the lens array of the light field capturing apparatus and the lens array of the light field display are provided by a single lens array.
15. The image capture and generation system of any of claims 8-14 further comprising a second lens array at least partially surrounding the free-form mirror, such that each lens of the second lens array corresponds to a lens of the first lens array, and whereby light from an object passes through a lens of the second lens array to the free-form mirror and is reflected through the field lens and through a corresponding lens of the first lens array and onto the two-dimensional photosensitive device.
16. The image capture and generation system of any of claims 8-15 wherein the lens arrays comprise diffractive optical elements.
17. The image capture and generation system of any of claims 10-16 further comprising a beam splitter located between the field lens and the light field capturing apparatus and display,
the beam splitter being configured such that in the generation mode the three-dimensional image generated by the light field display is reflected through the field lens onto the free-form mirror, and
in the capture mode incident light from an object is reflected by the free-form mirror through the field lens and onto the light field capturing apparatus.
18. The system of any of claims 8-17 further comprising an image processor in communication with the light-field display, wherein the image processor is configured to account for distortions caused by the optical setup such that the second image appears undistorted.
19. The system of any of claims 8-18 further comprising a phased array of ultrasonic transducers configured to provide a tactile output corresponding to the dimensions of the second image.
20. The system of claim 19 wherein the phased array of ultrasonic transducers is located around the periphery of the free-form mirror.
21. The system of any of claims 8-20 further comprising monitoring means for tracking the hands, fingers, head and eyes of users.
22. The system of any of claims 8-21 further comprising a 3D surrounding, and directional sound system synchronised with the phased array of ultrasonic transducers and the displayed 3D images.
23. The system of any preceding claim wherein the free-form mirror is one of a convex mirror and a conical mirror.
24. The system of any preceding claim wherein the free-form mirror comprises a first truncated hemispherical portion and a separate second convex portion.
PCT/GB2019/050026 2018-01-05 2019-01-04 Multi-angle light-field capture and display system WO2019135088A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
GBGB1800173.5A GB201800173D0 (en) 2018-01-05 2018-01-05 Multi-angle light capture display system
GB1800173.5 2018-01-05
GBGB1802801.9A GB201802801D0 (en) 2018-01-05 2018-02-21 Multi-angle light capture display system
GB1802801.9 2018-02-21
GBGB1815033.4A GB201815033D0 (en) 2018-01-05 2018-09-14 Multi-angle light field capture and display system
GB1815033.4 2018-09-14

Publications (2)

Publication Number Publication Date
WO2019135088A1 true WO2019135088A1 (en) 2019-07-11
WO2019135088A8 WO2019135088A8 (en) 2019-09-12

Family

ID=61190272

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/GB2019/050026 WO2019135088A1 (en) 2018-01-05 2019-01-04 Multi-angle light-field capture and display system
PCT/GB2019/050025 WO2019135087A1 (en) 2018-01-05 2019-01-04 Multi-angle light-field display system

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/GB2019/050025 WO2019135087A1 (en) 2018-01-05 2019-01-04 Multi-angle light-field display system

Country Status (2)

Country Link
GB (4) GB201800173D0 (en)
WO (2) WO2019135088A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4025953A4 (en) * 2019-09-03 2023-10-04 Light Field Lab, Inc. Light field display system for gaming environments

Citations (4)

Publication number Priority date Publication date Assignee Title
WO2000035204A1 (en) * 1998-12-10 2000-06-15 Zebra Imaging, Inc. Dynamically scalable full-parallax stereoscopic display
WO2005114554A2 (en) * 2004-05-21 2005-12-01 The Trustees Of Columbia University In The City Of New York Catadioptric single camera systems having radial epipolar geometry and methods and means thereof
US20130208083A1 (en) * 2012-02-15 2013-08-15 City University Of Hong Kong Panoramic stereo catadioptric imaging
US20170243373A1 (en) * 2015-04-15 2017-08-24 Lytro, Inc. Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6270674B2 (en) * 2014-02-27 2018-01-31 シチズン時計株式会社 Projection device
CA2950425C (en) * 2014-05-30 2022-01-25 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
GB2557231B (en) * 2016-11-30 2020-10-07 Jaguar Land Rover Ltd Multi-depth display apparatus
CN110914741A (en) * 2017-03-09 2020-03-24 亚利桑那大学评议会 Free form prism and head mounted display with increased field of view

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000035204A1 (en) * 1998-12-10 2000-06-15 Zebra Imaging, Inc. Dynamically scalable full-parallax stereoscopic display
WO2005114554A2 (en) * 2004-05-21 2005-12-01 The Trustees Of Columbia University In The City Of New York Catadioptric single camera systems having radial epipolar geometry and methods and means thereof
US20130208083A1 (en) * 2012-02-15 2013-08-15 City University Of Hong Kong Panoramic stereo catadioptric imaging
US20170243373A1 (en) * 2015-04-15 2017-08-24 Lytro, Inc. Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALI ÖZGÜR YÖNTEM ET AL: "Design for 360-degree 3D Light-field Camera and Display", IMAGING AND APPLIED OPTICS 2018, 25 June 2018 (2018-06-25), XP055562196, ISBN: 978-1-943580-44-6, Retrieved from the Internet <URL:https://www.osapublishing.org/DirectPDFAccess/A7C6F6A6-B2D5-E696-DE9108249EEC4DF2_390697/3D-2018-3Tu5G.6.pdf?da=1&id=390697&uri=3D-2018-3Tu5G.6&seq=0&mobile=no> [retrieved on 20190226], DOI: 10.1364/3D.2018.3Tu5G.6 *
BENJAMIN LONG ET AL: "Rendering volumetric haptic shapes in mid-air using ultrasound", ACM TRANSACTIONS ON GRAPHICS (TOG), ACM, US, vol. 33, no. 6, 19 November 2014 (2014-11-19), pages 1 - 10, XP058060840, ISSN: 0730-0301, DOI: 10.1145/2661229.2661257 *
SHUAISHUAI ZHU ET AL: "On the fundamental comparison between unfocused and focused light field cameras", APPLIED OPTICS, vol. 57, no. 1, 1 January 2018 (2018-01-01), US, pages A1, XP055564763, ISSN: 1559-128X, DOI: 10.1364/AO.57.0000A1 *

Also Published As

Publication number Publication date
WO2019135087A1 (en) 2019-07-11
WO2019135088A8 (en) 2019-09-12
GB201800173D0 (en) 2018-02-21
GB201815029D0 (en) 2018-10-31
GB201815033D0 (en) 2018-10-31
GB201802801D0 (en) 2018-04-04

Similar Documents

Publication Publication Date Title
US11921317B2 (en) Method of calibration for holographic energy directing systems
JP7369507B2 (en) Wearable 3D augmented reality display with variable focus and/or object recognition
US7261417B2 (en) Three-dimensional integral imaging and display system using variable focal length lens
KR100947366B1 (en) 3D image display method and system thereof
US20050270645A1 (en) Optical scanning assembly
EP2924991B1 (en) Three-dimensional image display system, method and device
JP2008146221A (en) Image display system
WO2018014049A1 (en) Method of calibration for holographic energy directing systems
US11423814B2 (en) Wearable display with coherent replication
JP2008293022A (en) 3d image display method, system thereof and recording medium with 3d display program recorded therein
Brar et al. Laser-based head-tracked 3D display research
JP4546505B2 (en) Spatial image projection apparatus and method
JP2020508496A (en) Microscope device for capturing and displaying a three-dimensional image of a sample
CN107111147A (en) Stereos copic viewing device
WO2019135088A1 (en) Multi-angle light-field capture and display system
JP2004226928A (en) Stereoscopic picture display device
KR20230165389A (en) Method of Calibration for Holographic Energy Directing Systems
KR101741227B1 (en) Auto stereoscopic image display device
US20090021813A1 (en) System and method for electronically displaying holographic images
Kim et al. A tangible floating display system for interaction
JP2011033820A (en) Three-dimensional image display device
KR102012454B1 (en) Three-dimensional image projection apparatus
JP2002296541A (en) Three-dimensional image display device
Soomro Augmented reality 3D display and light field imaging systems based on passive optical surfaces
Halle et al. Three-dimensional displays and computer graphics

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19700435

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19700435

Country of ref document: EP

Kind code of ref document: A1