CN104704821B - Scanning bidirectional light-field camera and display - Google Patents
- Publication number: CN104704821B (application CN201380052016.0A)
- Authority
- CN
- China
- Prior art keywords
- display
- light
- light field
- radiance
- light beam
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/02—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes by tracing or scanning a light beam on a screen
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/08—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/08—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
- G02B26/0816—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements
- G02B26/0833—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/08—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
- G02B26/0816—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements
- G02B26/0833—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD
- G02B26/085—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more reflecting elements the reflecting element being a micromechanical device, e.g. a MEMS mirror, DMD the reflecting means being moved or deformed by electromagnetic means
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/08—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
- G02B26/0875—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light by means of one or more refracting elements
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/08—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
- G02B26/10—Scanning systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B26/00—Optical devices or arrangements for the control of light using movable or deformable optical elements
- G02B26/08—Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
- G02B26/10—Scanning systems
- G02B26/101—Scanning systems with both horizontal and vertical deflecting means, e.g. raster or XY scanners
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0075—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for altering, e.g. increasing, the depth of field or depth of focus
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/10—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images using integral imaging methods
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/26—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
- G02B30/27—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays
-
- G—PHYSICS
- G02—OPTICS
- G02F—OPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
- G02F1/00—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
- G02F1/01—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
- G02F1/11—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on acousto-optical elements, e.g. using variable diffraction by sound or like mechanical waves
-
- G—PHYSICS
- G02—OPTICS
- G02F—OPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
- G02F1/00—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
- G02F1/29—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
-
- G—PHYSICS
- G02—OPTICS
- G02F—OPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
- G02F1/00—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
- G02F1/29—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
- G02F1/33—Acousto-optical deflection devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/001—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
- G09G3/003—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/10—Intensity circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/957—Light-field or plenoptic cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/71—Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
- H04N25/745—Circuitry for generating timing or clock signals
-
- G—PHYSICS
- G02—OPTICS
- G02F—OPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
- G02F1/00—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
- G02F1/29—Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
- G02F1/294—Variable focal length devices
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/0626—Adjustment of display parameters for control of overall brightness
- G09G2320/064—Adjustment of display parameters for control of overall brightness by time modulation of the brightness of the illumination source
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/14—Detecting light within display terminals, e.g. using a single or a plurality of photosensors
- G09G2360/141—Detecting light within display terminals, e.g. using a single or a plurality of photosensors the light conveying information used for selecting or modulating the light emitting or modulating element
Abstract
The invention provides a bidirectional light-field camera and display device (300) comprising an array of bidirectional light-field camera and display elements (310), each bidirectional element (310) including: a scanner (504, 506) for scanning an input beam (600) and an output beam (500) across a two-dimensional field of view; an input focus modulator (612) for modulating the focus of the input beam (600) over time; a radiance sensor (604) for sensing the radiance of the input beam (600) over time; a radiance sampler (606) for sampling the radiance of the input beam (600) at discrete times; a beam generator (522) for generating the output beam (500); a radiance modulator (524) for modulating the radiance of the output beam (500) over time; and an output focus modulator (530) for modulating the focus of the output beam (500) over time.
Description
Technical field
The present invention relates to high-fidelity light-field cameras, light-field displays, and bidirectional light-field cameras and displays.
Background art
The 7D light field (or plenoptic function [Adelson 91]) defines the spectral radiance over time of every ray passing through every point of a volume of space, and therefore encompasses every possible view within that volume. The 6D light field defines the spectral radiance over time of every ray passing through a given surface, i.e. it represents a slice of the 7D light field.

Typically, only rays passing through the surface in one direction are of interest, for example rays emitted by a volume bounded by the surface. The 6D light field at the boundary can be used to extrapolate the 7D light field of the surrounding space, and this provides the basis for light-field display: the extrapolation is performed optically, by the rays emitted by the display as they propagate through space.

Although an optical light field is continuous, for practical purposes it is band-limited and sampled, at a discrete set of points on the boundary surface and for a discrete set of ray directions.
In the present context, the ultimate goal of a light-field display is to reconstruct, from any discrete light field and with sufficient fidelity, a continuous optical light field, such that the display is indistinguishable from a window onto the original physical scene from which the discrete light field was sampled, i.e. all real-world depth cues are present. The viewer sees a different viewpoint from each eye; can fixate and focus on objects at their proper depths in the virtual scene; and experiences smooth motion parallax when moving relative to the display.
Correspondingly, the ultimate goal of a light-field camera is to capture, with sufficient fidelity, a discrete light field of any physical scene, such that the discrete light field, when shown on a high-fidelity light-field display, is indistinguishable from a window onto the original scene.
Existing glasses-free three-dimensional (3D) displays fall into three broad categories [Benzie 07, Connor 11]: autostereoscopic, volumetric, and holographic. Autostereoscopic displays present the viewer (or viewers) with a stereo pair of 2D images of the scene, in a single viewing zone or multiple viewing zones across the viewing field, and may use head tracking to align a viewing zone with the viewer. Volumetric displays generate a true 3D image of a scene within the volume of the display, either by rapidly sweeping a 0D, 1D or 2D array of light emitters through the volume, or by emitting light directly from a semi-transparent voxel array. Holographic displays use diffracted light to re-create the wavefronts emitted by the original scene [Yaras 10].
Volumetric and holographic displays both nominally reconstruct a correct optical light field, that is, they generate wide-field wavefronts with correct centres of curvature. However, volumetric displays have two major drawbacks: the reconstructed scene is confined to the volume of the display, and the entire scene is semi-transparent (making it unsuitable for display applications requiring realism). Practical holographic displays continue to face size and resolution limitations, and current implementations generally support only horizontal parallax [Schwerdtner 06, Yaras 10, Barabas 11].

Typical multi-view autostereoscopic displays provide a limited number of views, and therefore do not support motion parallax. So-called "holoform" autostereoscopic displays [Balogh 06, Benzie 07, Urey 11] provide a larger number of views (e.g. 10-50), and therefore approximate (usually horizontal-only) motion parallax. However, they do not reconstruct even a nominally correct optical light field.
Summary of the invention
In a first aspect, the invention provides a bidirectional light-field camera and display device comprising an array of bidirectional light-field camera and display elements, each bidirectional element including: a scanner for scanning an input beam and an output beam across a two-dimensional field of view; an input focus modulator for modulating the focus of the input beam over time; a radiance sensor for sensing the radiance of the input beam over time; a radiance sampler for sampling the radiance of the input beam at discrete times; a beam generator for generating the output beam; a radiance modulator for modulating the radiance of the output beam over time; and an output focus modulator for modulating the focus of the output beam over time.
The bidirectional light-field camera and display device may include a beamsplitter for separating the input beam from the output beam.
The bidirectional light-field camera and display device may include at least one actuator for oscillating the array of bidirectional elements between at least two positions. The oscillation may be resonant.

Oscillating the array of bidirectional elements improves on the otherwise incomplete coverage of the array extent by the entrance/exit pupils of the elements.
In a second aspect, the invention provides a light-field camera device comprising an array of light-field camera elements, each camera element including: a scanner for scanning an input beam across a two-dimensional field of view; an input focus modulator for modulating the focus of the input beam over time; a radiance sensor for sensing the radiance of the input beam over time; and a radiance sampler for sampling the radiance of the input beam at discrete times.
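A per-element capture cycle of this kind can be sketched in Python; all of the component interfaces below are hypothetical stand-ins, not part of the patent. The scanner steps the input beam across the two-dimensional field of view while the focus modulator, radiance sensor, and sampler operate in lockstep:

```python
def capture_element(scan_directions, focus_for, sense_radiance):
    """Capture one frame for a single camera element.
    scan_directions: iterable of (a, b) beam directions across the 2D field of view.
    focus_for: callable giving the focus setting for a direction (input focus modulator).
    sense_radiance: callable giving the sensed radiance for (direction, focus)."""
    samples = {}
    for direction in scan_directions:
        focus = direction_focus = focus_for(direction)       # modulate input focus over time
        samples[direction] = sense_radiance(direction, focus)  # sample radiance at a discrete time
    return samples

# Toy usage with stand-in components:
dirs = [(a, b) for a in range(2) for b in range(2)]
frame = capture_element(
    dirs,
    focus_for=lambda d: 1.0,
    sense_radiance=lambda d, f: sum(d) * f,
)
print(frame[(1, 1)])  # 2.0
```

The essential point the loop illustrates is temporal multiplexing: one sensor serves every ray direction in the element's field of view, one discrete sample at a time.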
The input focus modulator may be a liquid-crystal lens (or lens pair), a liquid lens, a deformable-membrane mirror, a deformable-membrane liquid-filled lens, an addressable lens stack, or an electro-optic lens.

Adequate focusing of the input beam over time maximizes the fidelity of the captured light field when it is subsequently displayed.
The radiance sensor includes at least one photodetector. The photodetector may be a photodiode operating in photoconductive mode, a photodiode operating in photovoltaic mode, a phototransistor, or a photoresistor.

The radiance sensor may include multiple photodetectors, each adapted to have a different spectral response. The multiple photodetectors may be stacked.
The radiance sampler may include at least one analog-to-digital converter (ADC) and at least one programmable-gain amplifier (PGA).
The radiance sensor may include a linear or two-dimensional array of photodetectors configured to track and accumulate, in parallel over successive extended exposure intervals, the radiance of multiple input beams. The photodetector array may be coupled with an analog shift register for accumulating the radiance of the multiple beams over time.

The integrated photodetector array and analog shift register may consist of a charge-coupled device (CCD) acting as the analog shift register, or of a bucket-brigade device (BBD) coupled to the photodetectors via their storage nodes.
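The accumulation scheme described above resembles time-delay integration (TDI): as the scan sweeps each beam past the photodetector array, the beam's charge packet is shifted along the register in step with the scan, so successive exposures of the same beam accumulate. A minimal, purely illustrative simulation:

```python
def tdi_accumulate(exposures):
    """Simulate TDI accumulation with an analog shift register.
    exposures[t][i] is the charge gathered at time step t by detector i,
    where the beam advances by one detector per time step (so detector i
    at time t sees the beam that detector i-1 saw at time t-1).
    Returns the accumulated charge held in the register at the end."""
    n = len(exposures[0])
    register = [0.0] * n
    for row in exposures:
        register = [0.0] + register[:-1]                  # shift charge packets along the register
        register = [r + e for r, e in zip(register, row)]  # add this time step's exposure
    return register

# A beam of radiance 1.0 tracked over 3 steps accumulates 3x a single exposure:
exposures = [[1.0, 0.0, 0.0],
             [0.0, 1.0, 0.0],
             [0.0, 0.0, 1.0]]
print(tdi_accumulate(exposures))  # [0.0, 0.0, 3.0]
```

The benefit sketched here is the one the patent relies on: extending the effective exposure time of each beam without slowing the scan, by trading a single detector for a small tracking array.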
The sampled radiance may be stored in a discrete light field. The discrete light field may be transmitted to a light-field display device for display.

The light-field camera device may include at least one actuator for oscillating the array of camera elements between at least two positions. The oscillation may be resonant.
In a third aspect, the invention provides a light-field display device comprising an array of light-field display elements, each display element including: a scanner for scanning an output beam across a two-dimensional field of view; a beam generator for generating the output beam; a radiance modulator for modulating the radiance of the output beam over time; and an output focus modulator for modulating the focus of the output beam over time.

Combined with a sufficiently fast scan rate, persistence of vision allows the viewer to perceive the scanned output beam as a continuous optical light field.
The beam generator includes at least one light emitter, which may be a laser, a laser diode, a light-emitting diode, a fluorescent lamp, or an incandescent lamp.

The beam generator may include multiple light emitters, each with a different emission spectrum. The multiple light emitters may be stacked.

The radiance modulator may be intrinsic to the beam generator, or it may be separate.

The radiance modulator may be an acousto-optic modulator, an absorptive electro-optic modulator, or a refractive electro-optic modulator.
The radiance of the output beam may be modulated according to a specified radiance value corresponding to the position of the display element and the instantaneous direction of the output beam within the field of view.

The specified radiance value may be retrieved from a discrete representation of the desired output light field. The discrete light field may be received from a light-field camera device.
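Retrieval of the specified radiance value from a discrete light field can be sketched as follows. The indexing scheme is a hypothetical illustration (the patent does not prescribe one); the instantaneous scan direction is simply quantized to the nearest stored angular sample:

```python
import numpy as np

def radiance_for(field, element_xy, direction_ab):
    """Look up the specified radiance for a display element and an
    instantaneous output beam direction, by nearest-sample indexing.
    field: array of shape (nx, ny, na, nb, channels);
    direction_ab: normalized direction coordinates in [0, 1)^2."""
    x, y = element_xy
    na, nb = field.shape[2], field.shape[3]
    a = min(int(direction_ab[0] * na), na - 1)  # quantize direction to the sample grid
    b = min(int(direction_ab[1] * nb), nb - 1)
    return field[x, y, a, b]

field = np.zeros((2, 2, 4, 4, 3), dtype=np.float32)
field[0, 0, 2, 3] = [0.2, 0.5, 0.8]
print(radiance_for(field, (0, 0), (0.6, 0.9)))  # approximately [0.2, 0.5, 0.8]
```

In a real display element this lookup would run once per radiance sample, synchronized with the scanner, so that each instantaneous beam direction emits the radiance of the corresponding discrete light-field sample.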
The light-field display device may include an angular reconstruction filter acting on the output beam. The filter may be a diffuser or an array of lenslets.

The output focus modulator may be a liquid-crystal lens (or lens pair), a liquid lens, a deformable-membrane mirror, a deformable-membrane liquid-filled lens, an addressable lens stack, or an electro-optic lens.

Adequate focusing of the output beam ensures that the viewer perceives smooth parallax between output beam instances, and experiences consistent vergence and accommodation cues.
The focus of the output beam may be modulated according to a specified depth value corresponding to the position of the display element and the instantaneous direction of the output beam within the field of view.

The specified depth may be a scene depth associated with the surface or a fixation depth of the viewer, and the scene depth or fixation depth may be derived from tracking the face, eyes, or gaze of the viewer.
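Depth-driven focus modulation can be sketched as a simple mapping from the depth value for the current scan direction to a lens power in diopters. This is an illustrative model only; a real focus modulator would be driven through device-specific calibration:

```python
def focus_power(depth_m, fixation_depth_m=None):
    """Return a lens power (diopters) for the output focus modulator.
    If a viewer fixation depth is available (e.g. from gaze tracking),
    it overrides the per-direction scene depth."""
    d = fixation_depth_m if fixation_depth_m is not None else depth_m
    if d <= 0:
        raise ValueError("depth must be positive")
    return 1.0 / d  # a beam focused at distance d has vergence 1/d diopters

print(focus_power(2.0))                        # 0.5 (scene depth 2 m)
print(focus_power(2.0, fixation_depth_m=0.5))  # 2.0 (viewer fixating at 0.5 m)
```

The reciprocal relationship is the standard thin-lens vergence convention; the point of the sketch is only that either a per-sample scene depth or a tracked fixation depth can drive the same modulator.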
The light-field display device may include at least one actuator for oscillating the array of display elements between at least two positions. The oscillation may be resonant.

The scanner may be an electromechanical scanning mirror, an addressable deflector stack, an acousto-optic scanner, or an electro-optic scanner.

The scanner may include a biaxial electromechanical scanning mirror with at least one drive mechanism. The drive mechanism may be an electrostatic, magnetic, or capacitive drive mechanism.
The scanner may include a biaxial electromechanical scanning mirror comprising a mirror, a platform, an inner frame, and an outer frame. The mirror is attached to the platform via a post; the platform is attached to the inner frame via a first pair of hinges; and the inner frame is attached to the outer frame via a second pair of hinges. The first pair of hinges is disposed substantially perpendicular to the second pair of hinges, thus allowing biaxial movement of the mirror.

Alternatively, the scanner may include: a first scanner for scanning the input beam and output beam in a first direction; and a second scanner for simultaneously scanning the input beam and output beam in a second direction, the second direction being substantially perpendicular to the first direction.
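Two orthogonal scanners of this kind typically implement a raster: a fast axis, often resonant and therefore sinusoidal, and a slow linear axis that sweeps once per frame. A sketch of the time-to-angle mapping under those assumptions (the parameter values are illustrative, not from the patent):

```python
import math

def scan_angles(t, line_rate_hz, frame_rate_hz, amp_x=1.0, amp_y=1.0):
    """Map time t (seconds) to (x, y) deflection angles for a raster scan.
    x: resonant (sinusoidal) fast axis oscillating at the line rate.
    y: linear slow axis sweeping from -amp_y to +amp_y once per frame."""
    x = amp_x * math.sin(2.0 * math.pi * line_rate_hz * t)
    frame_phase = (t * frame_rate_hz) % 1.0
    y = amp_y * (2.0 * frame_phase - 1.0)
    return x, y

x, y = scan_angles(0.0, line_rate_hz=20000, frame_rate_hz=60)
print(x, y)  # 0.0 -1.0
```

One consequence of the sinusoidal fast axis is that the beam dwells longer near the edges of the scan than at the centre, so radiance samples must be timed (or re-weighted) against the nonuniform angular velocity.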
Each scanner may be an electromechanical scanning mirror, an addressable deflector stack, an acousto-optic scanner, or an electro-optic scanner.
In a fourth aspect, the invention provides a bidirectional light-field camera and display device comprising an array of bidirectional light-field camera and display elements, each bidirectional element including: an array of light sensors for sensing a set of input beams across a two-dimensional field of view; an input focus modulator for modulating the focus of the input beams; an array of light emitters for emitting a set of output beams across the two-dimensional field of view; and an output focus modulator for modulating the focus of the output beams.

Each light sensor may include at least one photodetector. The photodetector may be a photodiode operating in photoconductive mode, a photodiode operating in photovoltaic mode, a phototransistor, or a photoresistor.

Each light sensor may include multiple photodetectors, each adapted to have a different spectral response. The multiple photodetectors may be stacked.
Each light emitter may include at least one emitter, which may be a laser, a laser diode, a light-emitting diode, a fluorescent lamp, or an incandescent lamp.

Each light emitter may include multiple emitters, each with a different emission spectrum. The multiple emitters may be stacked.

The input focus modulator and the output focus modulator may be the same focus modulator or separate focus modulators. If they are the same focus modulator, it may be switched over time between an input focus mode and an output focus mode. If they are separate, they may be alternately selected over time via a polarization rotator.

Each (or the) focus modulator may be a liquid-crystal lens (or lens pair), a liquid lens, a deformable-membrane mirror, a deformable-membrane liquid-filled lens, an addressable lens stack, or an electro-optic lens.
In a fifth aspect, the invention provides a near-eye display (NED) device comprising a light field display device configured to display a light field focused according to the accommodation and vergence of a viewer. The NED device may include one or more cameras for tracking the accommodation or vergence of the viewer.
In a sixth aspect, the invention provides a mirror device comprising a two-way light-field camera and display device configured to capture and re-display light field video in real time. The light field video may be flipped horizontally to simulate a conventional mirror.
In a seventh aspect, the invention provides a one-way invisibility device comprising a light-field camera device coupled to a light field display device, the camera and display devices being mounted back-to-back with an arbitrary interstitial space, the display device being configured to display in real time the light field video captured by the camera device, so that the interstitial space is effectively invisible to a viewer of the display device.
In an eighth aspect, the invention provides a two-way invisibility device comprising a pair of two-way light-field camera and display devices mounted back-to-back with an arbitrary interstitial space, each device being configured to display in real time the light field video captured by the other device, so that the interstitial space is effectively invisible to viewers of either device.
In a ninth aspect, the invention provides a method for displaying the light field of a virtual 3D scene (or light field video) illuminated in real time by a real light field, the method including: capturing an input light field video using a two-way light-field camera and display device; using the input light field video to virtually illuminate the 3D scene (or light field video) in real time; generating an output light field video from the illuminated 3D scene; and displaying the output light field video using the two-way light-field camera and display device.
In a tenth aspect, the invention provides a method for displaying a light field, the method including: detecting at least one eye of at least one viewer; for each eye and for each of a set of display positions, determining the depth of the scene seen by the eye through that position; generating a set of output beams for each position, each output beam representing the scene seen through the position along the direction of the output beam; and modulating the focus of each output beam according to the depth of the scene seen by the eye through the position.
In an eleventh aspect, the invention provides a method for displaying a light field, the method including: detecting the fixation point of at least one viewer; for each of a set of display positions, generating a set of output beams, each output beam representing the scene seen through the position along the direction of the output beam; and modulating the focus of each output beam according to the depth of the fixation point relative to the position.
The scene may be virtual, or may be represented by a previously generated or captured light field or light field video, or by a light field or light field video captured in real time by a light-field camera.
The method may include modulating in real time the focus of the corresponding set of input beams captured by the light-field camera.
The method may include generating only those output beams that are visible to the viewer.
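As an illustrative sketch only (the function and data layout below are hypothetical, not part of the claimed method), the fixation-based focus modulation described above amounts to mapping each display position to a focus depth given by the distance from that position to the fixation point:

```python
import math

# A high-level sketch (hypothetical names, not from the patent) of
# fixation-based focus modulation: each output beam's focus is set
# according to the depth of the viewer's fixation point relative to
# the display position emitting the beam.
def beam_focus_depths(display_positions, fixation_point):
    """Map each display position to the focus depth of its output beams."""
    return {pos: math.dist(pos, fixation_point) for pos in display_positions}

positions = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]   # points on the display surface (m)
fixation = (0.0, 0.0, 0.5)                        # estimated fixation point (m)
print(beam_focus_depths(positions, fixation))
```

A real implementation would additionally generate the per-direction radiances for each position; this sketch isolates only the focus computation.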
Brief description of the drawings
Figure 1A shows representative rays of a continuous 6D light field crossing the boundary of a volume of interest.
Figure 1B shows a class diagram of a sampled, i.e. discrete, 6D light field.
Fig. 2A shows a photosensor array sampling ray direction for a particular ray position.
Fig. 2B shows a lens array sampling ray position at the light field boundary.
Fig. 3A shows the combined effect of the spatial extent of a photosensor and the aperture of a lens in realizing a 4D low-pass filter.
Fig. 3B shows the sampling beam of Fig. 3A focused at a point in object space using a lens with higher optical power.
Fig. 4A shows a light emitter array reconstructing ray direction for a particular ray position.
Fig. 4B shows a lens array reconstructing ray position at the light field boundary.
Fig. 5A shows the combined effect of the spatial extent of a light emitter and the aperture of a lens in realizing a 4D low-pass filter.
Fig. 5B shows the reconstruction beam of Fig. 5A focused according to a virtual object point using a lens with lower optical power.
Fig. 6A shows matched sampling (left) and reconstruction (right) beams corresponding to Figs. 3A and 5A.
Fig. 6B shows matched sampling (left) and reconstruction (right) beams, focused at and from an object point, corresponding to Figs. 3B and 5B.
Fig. 7A shows wavefronts emitted from an ideal light field display.
Fig. 7B shows wavefronts emitted from a multi-element light field display.
Fig. 8A shows wavefronts captured by an ideal light field camera.
Fig. 8B shows wavefronts captured by a multi-element light field camera.
Fig. 9A shows a viewer's eye in the reconstructed light field of a virtual point source, with the eye focused on the source.
Fig. 9B shows the eye focused at a point closer than the virtual point source.
Fig. 9C shows the light field display of Figs. 9A and 9B emitting the light field of a point source coinciding with the translated object point of Fig. 9B.
Fig. 10A shows a viewer gazing at a light field display emitting a light field corresponding to a virtual scene composed of several objects.
Fig. 10B shows the position of one eye being used by each display element to determine a viewing direction, and hence, for each viewing direction, the intersection with a scene object.
Fig. 10C shows the gaze directions of both of the viewer's eyes being used to estimate their fixation point.
Fig. 10D shows the plane of focus of one eye, according to the estimated depth of the fixation point, and the intersection of each viewing direction with the focus plane.
Figure 11 shows a pair of two-way light field displays connected via a network.
Figure 12 shows a light-field camera and a light field display connected via a network.
Fig. 13A shows a schematic of an array-based two-way light field display element with its liquid crystal lens in the passive state.
Fig. 13B shows a schematic of an array-based two-way light field display element with its liquid crystal lens in the active state.
Fig. 14A shows a schematic of an array-based two-way light field display element with dual liquid crystal lenses, with the first lens active.
Fig. 14B shows a schematic of an array-based two-way light field display element with dual liquid crystal lenses, with the second lens active.
Figure 15 shows a block diagram of a scanning light field display element.
Figure 16 shows a block diagram of an RGB laser beam generator with multiple intensity modulators.
Figure 17 shows a block diagram of a scanning light-field camera element.
Figure 18 shows a block diagram of a scanning two-way light field display element.
Fig. 19A shows a plan view of the optical design of a scanning two-way light field display element, with output rays.
Fig. 19B shows a front view of the optical design of a scanning two-way light field display element, with output rays.
Figure 20 shows the design of Fig. 19A with the angular reconstruction filter realized using a lenslet array.
Fig. 21A shows a plan view of the optical design of a scanning two-way light field display element, with input beams.
Fig. 21B shows a front view of the optical design of a scanning two-way light field display element, with input beams.
Fig. 22A shows a top view of a biaxial MEMS scanner with an elevated mirror.
Fig. 22B shows a sectional view of a biaxial MEMS scanner with an elevated mirror.
Fig. 23A shows the scanning mirror of Fig. 21 scanning, across a linear photodetector array, the stationary laser beam corresponding to a fixated point source.
Fig. 23B shows a photodetector array comprising an array of analog photodetectors coupled with analog shift registers.
Figure 24 shows a block diagram of a multi-element light field display.
Fig. 25A shows a top view of the optical design of a two-way light field display, five elements wide, with output rays.
Fig. 25B shows a front view of the optical design of a two-way light field display consisting of ten rows of five elements each, with output beams.
Fig. 25C shows a front view of the optical design of a two-way light field display consisting of five rows of ten rotated elements each, with output beams.
Figure 26 shows a plan view of one row of the two-way light field display, rotated as in Fig. 25B, with each element generating the beam corresponding to a single point source behind the display.
Figure 27 shows a plan view of one row of the two-way light field display, rotated as in Fig. 25C, with each element generating the beam corresponding to a single point source behind the display.
Figure 28 shows a plan view of one row of the two-way light field display, rotated as in Fig. 25B, with each element generating the beam corresponding to a single point source in front of the display.
Figure 29 shows a plan view of one row of the two-way light field display, rotated as in Fig. 25C, with each element generating the beam corresponding to a single point source in front of the display.
Figure 30 shows a block diagram of a multi-element light-field camera.
Fig. 31A shows a top view of the optical design of a two-way light field display, five elements wide, with input beams.
Fig. 31B shows a front view of the optical design of a two-way light field display consisting of ten rows of five elements each, with input beams.
Fig. 31C shows a front view of the optical design of a two-way light field display consisting of five rows of ten rotated elements each, with input beams.
Figure 32 shows a plan view of one row of the two-way light field display, rotated as in Fig. 31B, with each element capturing the beam corresponding to a single point source in front of the display.
Figure 33 shows a plan view of one row of the two-way light field display, rotated as in Fig. 31C, with each element capturing the beam corresponding to a single point source in front of the display.
Fig. 34A shows a cross-sectional side view of a vibrating two-way light field display.
Fig. 34B shows a cross-sectional side view of a vibrating two-way light field display, two display panels high.
Fig. 34C shows a cross-sectional rear view of a vibrating two-way light field display.
Fig. 34D shows a cross-sectional rear view of a vibrating two-way light field display, two display panels high and wide.
Fig. 35A shows the curve of vertical displacement over time for the vibrating display when directly driven.
Fig. 35B shows the curve of vertical displacement over time for the vibrating display when resonantly driven.
Figure 36 shows an activity diagram for controlling the focus of a light-field camera according to the gaze of a viewer.
Figure 37 shows an activity diagram for controlling the focus of a light-field camera according to the fixation point of a viewer.
Figure 38 shows an activity diagram for displaying a light field stream from a light-field camera.
Figure 39 shows an activity diagram for displaying a captured light field.
Figure 40 shows an activity diagram for displaying a synthetic light field.
Figure 41 shows a block diagram of a two-way light field display controller.
Fig. 42A shows the eye-directed field of a display element of a light field display.
Fig. 42B shows the foveal field of an eye in the field of a light field display.
Figure 43 shows a block diagram of a two-way light field display controller optimized for a specific viewer.
Brief Description of Reference Markers
100 light field rays
102 light field boundaries
104 ray intersections with the light field boundary
110 light field videos
112 time intervals
114 time sampling cycles
116 light field frames
118 spatial fields
120 spatial sampling cycles
122 light field viewpoint images
124 fields of view
126 angular sampling periods
128 spectral radiances
130 spectral intervals
132 spectral sampling bases
134 radiance samples
136 depths
138 sampling focuses
150 photosensor arrays
152 photosensors
154 angular sampling beams
156 angular sampling filter pinholes
158 image planes
160 spatial sampling filter lenses
162 spatial sampling beams
164 image points
166 4D sampling beams
168 object points
170 object planes
180 light emitter arrays
182 optical transmitting sets
184 angular reconstruction beams
186 angular reconstruction filter pinholes
188 spatial reconstruction filter lenses
190 spatial reconstruction beams
192 4D reconstruction beams
200 light field displays
202 display output light beams
204 virtual point sources
206 wave surfaces
210 light field display elements
212 element output beams
220 light-field cameras
222 camera input light beams
224 true spot lights
230 light-field camera elements
232 element inputs light beams
240 spectators' eyes
242 eyes object points
244 eye pupils
246 axial inputs light beams
248 eyes picture points
250 spectators
252 scenario objects
254 display element focuses
256 spectators' blinkpunkts
258 spectators' eyes object planes
300 two-way light field displays
310 two-way light field display elements
320 networks
322 bi-directional display controllers
324 remote audiences
The virtual image of 326 remote audiences
328 local audiences
The virtual image of 330 local audiences
332 remote objects
The virtual image of 334 remote objects
336 native object
The virtual image of 338 native object
340 camera controllers
342 display controllers
344 tracking cameras
400 first positive lens
402 electrodes
404 variable negative lens convex portions
406 variable negative lenses
408 electrodes
410 linear polarizations
412 second positive lens
414 input/output light beams
416 second variable negative lenses
418 switching polarization rotators
500 scanning output beams
502 output viewpoint images
504 line scanners
506 frame scan instrument
508 2D scanners
510 timing generators
512 external frame synchronization
514 frame synchronization
516 line synchronization
518 sampling clocks
520 radiance controllers
522 light beam generators
524 radiance modulators
526 output focuses
528 output focus controllers
530 output focus modulators
540 color beam generators
542 red beam generators
544 red radiance modulators
546 green beam generators
548 green radiance modulators
550 blue beam generators
552 blue radiance modulators
554 first beam combiners
556 second beam combiners
600 scanning input beams
602 input viewpoint images
604 radiance sensors
606 radiance samplers
608 input focuses
610 input focus controllers
612 input focus modulators
614 beam splitters
700 laser
702 angular reconstruction filters
704 variable output focuses
706 beam splitters
708 mirrors
710 biaxial scanning mirrors
712 mirrors
714 variable input focuses
716 fixed input focuses
718 apertures
720 photodetectors
730 angular reconstruction filter lenslets
732 collimated output beams
734 angular reconstruction beamlets
740 two-axis scanner platforms
742 two-axis scanner Platform hinges
744 two-axis scanner inside casings
746 two-axis scanner inside casing hinges
748 two-axis scanner housings
750 two-axis scanner mirror pillars
752 two-axis scanner mirrors
760 fixed input beams
762 shift-and-accumulate photodetector linear arrays
764 photodetector linear arrays
766 photodetectors
768 analog shift registers
770 analog shift register stages
772 analog-to-digital converters (ADC)
774 beam energy samples
800 vibrating display panels
802 vibrating display chassis
804 vibrating display frames
806 vibrating display cover glass
808 support springs
810 panel-side spring mounts
812 chassis-side spring mounts
814 actuators
816 rods
818 panel-side actuator brackets
820 chassis-side actuator brackets
900 detect face and eyes
902 estimate gaze direction
904 transmit eye position and gaze direction
906 autofocus along gaze direction
908 estimate fixation point
910 transmit eye position and fixation point
912 focus at fixation plane
920 capture light field frame
922 transmit light field frame
924 resample light field frame
926 display light field frame
930 positions (data storage)
932 blinkpunkts (data storage)
934 light field videos (data storage)
936 resampled light field frames with focus
938 3D animation models
940 rendered light field frames with focus
950 two-way panel controllers
952 bilateral element controllers
954 viewpoint image data stores
956 bilateral element controller modules
958 2D image data stores
960 collimated viewpoint image data stores
962 network interfaces
964 input video interfaces
966 output video interfaces
968 display timing generators
970 panel movement controllers
972 high speed data bus
980 display elements
982 display element eyes
984 foveal fields
986 local viewpoint image data stores
988 local foveal viewpoint image data stores
Embodiment
Light field parameters
Figure 1A shows a representative ray 100 of a continuous 6D light field, crossing the boundary 102 of a volume of interest at intersection point 104. The radiance (L) of the ray 100 is a function of time (t), boundary position (represented by coordinates x and y), ray direction (represented by angles a and b), and wavelength (w).
Although the radiance of the ray is strictly only defined at the boundary, i.e. at the intersection point 104, additional knowledge of the transparency of the two volumes separated by the boundary allows the radiance of the ray to be extrapolated in either direction.
Radiance is a measure of radiant power per unit solid angle per unit area (measured in watts per steradian per square metre, W/sr/m2). For an infinitesimal ray of the continuous light field, radiance is defined with respect to an infinitesimal solid angle and an infinitesimal area.
Since the ultimate target is display to humans, radiance is usually sampled sparsely in wavelength, using either a ternary basis function matched to the tristimulus color response of the human visual system or a single basis function matched to the human luminosity response. These basis functions ensure proper band limiting in the wavelength (w) dimension. For convenience, the wavelength dimension is usually left implicit in most analyses. The 6D light field then becomes a 5D light field.
The time dimension (t) can be sampled in discrete time steps to produce a sequence of 4D light field frames analogous to the 2D image frames of a conventional video sequence. To avoid motion blur, or simply as a matter of practicality, proper band limiting is usually not applied to the time dimension when sampling or generating video, and this can lead to aliasing. This is usually mitigated by sampling at a sufficiently high rate.
References to 4D light fields in the literature (and, where appropriate, in this specification) refer to the special case of a single frame, i.e. with the wavelength dimension implicit and the time dimension fixed.
Figure 1B shows a class diagram of the sampled, i.e. discrete, 6D light field, organized as a light field video 110.
The light field video 110 consists of a sequence of light field frames 116, ordered by time (t) and captured over a specific time interval 112 with a specific time sampling period 114.
Each light field frame 116 consists of an array of light field viewpoint images 122, ordered by ray position (x and y) and captured over a specific spatial field 118 with a specific spatial sampling period 120.
Each light field viewpoint image 122 consists of an array of spectral radiances 128, ordered by ray direction (a and b) and captured over a specific field of view 124 with a specific angular sampling period 126.
Each spectral radiance 128 consists of a sequence of radiance (L) samples 134, ordered by wavelength (w) according to a specific spectral sampling basis 132 over a specific spectral interval 130. The spectral radiance 128 has an optional depth 136, i.e. the depth of the scene along the ray direction, if known. The spectral radiance 128 also records the sampling focus 138 with which it was captured. The depth 136 and the sampling focus 138 are discussed further below.
Each radiance sample 134 records a scalar radiance (L) value.
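The nested structure of the discrete light field video can be sketched as plain data types. This is a minimal sketch for illustration only; the class and field names below are hypothetical, chosen to mirror the reference markers, and are not defined by the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpectralRadiance:              # 128
    samples: list[float]             # radiance (L) samples 134, one per spectral basis function 132
    depth: Optional[float] = None    # optional scene depth 136 along the ray, if known
    focus: Optional[float] = None    # sampling focus 138 used during capture

@dataclass
class ViewpointImage:                # 122, indexed by ray direction (a, b)
    radiances: list[list[SpectralRadiance]]

@dataclass
class LightFieldFrame:               # 116, indexed by ray position (x, y)
    views: list[list[ViewpointImage]]

@dataclass
class LightFieldVideo:               # 110, ordered by time t
    frames: list[LightFieldFrame]
    time_sampling_period: float      # 114, in seconds

# A one-sample light field video, for illustration.
sr = SpectralRadiance(samples=[0.1, 0.2, 0.3], depth=2.5, focus=2.5)
video = LightFieldVideo(
    frames=[LightFieldFrame(views=[[ViewpointImage(radiances=[[sr]])]])],
    time_sampling_period=0.02,
)
```

In practice the arrays would be dense numeric buffers rather than nested lists; the sketch only makes the containment hierarchy (video, frame, viewpoint image, spectral radiance) explicit.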
In this specification, the term "beam" is used to refer to a bundle of rays. Its exact characteristics vary, and are qualified in each context below.
Light field sampling
Figs. 2A, 2B, 3A and 3B show a method of band-limiting and sampling a continuous light field to obtain a discrete light field.
Fig. 2A shows a photosensor array 150 sampling the continuous light field with respect to ray direction at a particular ray position 104. Each photosensor 152 of the array 150 samples a particular ray direction, integrating the beam 154 surrounding the nominal ray 100. This integration realizes a 2D low-pass filter with respect to ray direction. The effective filter kernel is a non-ideal box filter corresponding to the spatial extent of the photosensor 152. The photosensors are ideally closely packed to ensure adequate filter support. The angular sampling beam 154 is focused on an infinitesimal pinhole aperture 156 coinciding with the ray position 104 on the boundary 102.
The photosensor array 150 lies in an image plane 158, parameterized by the ray direction angles a and b.
The angular field of view 124 is the angle subtended by the photosensor array 150 at the angular sampling filter pinhole 156. The angular sampling period 126, i.e. the inverse of the angular sampling rate, is the angle subtended by the center-to-center spacing of the photosensors 152. The angular sample size (i.e. the filter support) is the angle subtended by the extent of a photosensor 152. The angular sample count equals the field of view 124 divided by the angular sampling period 126, i.e. the number of photosensors 152.
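The angular sampling relations above can be checked numerically. The dimensions below are assumed for illustration only (they are not taken from the patent): an array of 100 photosensors with 10 µm pitch behind a pinhole at 10 mm.

```python
import math

pitch = 10e-6        # photosensor center-to-center spacing (m), assumed
count = 100          # number of photosensors in one dimension, assumed
distance = 10e-3     # pinhole to image plane distance (m), assumed

# Angular sampling period: angle subtended by one sensor pitch at the pinhole.
angular_period = math.atan(pitch / distance)
# Field of view: angle subtended by the whole array at the pinhole.
field_of_view = 2 * math.atan(count * pitch / (2 * distance))

# Angular sample count = field of view / angular sampling period,
# which recovers the photosensor count in the small-angle regime.
print(round(field_of_view / angular_period))
```

For wider fields of view the tangent nonlinearity makes the ratio deviate slightly from the sensor count, which is why the text defines the quantities via subtended angles rather than linear distances.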
Fig. 2B shows a lens array sampling the continuous light field with respect to ray position at the boundary 102. Each lens 160 of the array samples a particular ray position, integrating the collimated beam 162 surrounding the nominal ray 100 by focusing the beam to a point 164 on a photosensor 152. This integration realizes a 2D low-pass filter with respect to position. The effective filter kernel is a non-ideal box filter corresponding to the spatial extent of the aperture of the spatial sampling filter lens 160. The lenses are ideally closely packed to ensure adequate filter support.
The image distance is the distance from the second principal point of the lens 160 to the image plane 158.
The spatial field 118 equals the extent of the boundary surface 102. The spatial sampling period 120, i.e. the inverse of the spatial sampling rate, is the center-to-center spacing of the spatial sampling filter lenses 160. The spatial sample size (i.e. the filter support) is the area of the aperture of a lens 160. The spatial sample count equals the spatial field 118 divided by the spatial sampling period 120, i.e. the number of lenses 160.
Fig. 3A shows the combined effect of the spatial extent of the photosensor 152 and the aperture of the lens 160, which integrate a sampling beam 166 to realize a 4D low-pass filter, i.e. a low-pass filter with respect to direction and position simultaneously. The effective filter kernel is a 4D box filter, which provides reasonable but non-ideal band limiting. When integrating light spatially, it is difficult to do better than a box filter.
The scalar value obtained from the photosensor 152 is generally proportional to the time integral of radiant power, i.e. to radiant energy. It can be converted to a radiance sample 134 by dividing it by the 5D sample size (i.e. the 1D exposure duration, the 2D spatial sample size, and the 2D angular sample size).
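As a numeric illustration of this conversion (all values below are assumed for illustration, not taken from the patent), the photosensor energy is divided by the 5D sample size to yield radiance:

```python
# Energy-to-radiance conversion: divide the time-integrated photosensor
# reading by exposure duration x spatial sample size x angular sample size.
energy = 2.0e-9          # radiant energy collected by one photosensor (J), assumed
exposure = 10e-3         # 1D exposure duration (s), assumed
aperture_area = 1.0e-6   # 2D spatial sample size: lens aperture area (m^2), assumed
solid_angle = 1.0e-4     # 2D angular sample size (sr), assumed

radiance_sample = energy / (exposure * aperture_area * solid_angle)
print(radiance_sample)   # radiance in W/sr/m^2
```

With these figures the result is about 2000 W/sr/m^2; the point is only that the units of the divisor (s · m^2 · sr) turn energy into radiance.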
Note that, for clarity, the size of the photosensors 152 is exaggerated in the figures, and the (otherwise parallel) beams 166 are therefore also shown with exaggerated divergence due to angular sampling.
Low-pass filtering of the light field induces visible blur.
In the sampling regime, blur is proportional to the diameter of the beam 166. It has two additive components: angular sampling blur, corresponding to the diameter of the angular sampling beam 154 in the angular sampling filter of Fig. 2A; and spatial sampling blur, corresponding to the diameter of the spatial sampling beam 162 in the spatial sampling filter of Fig. 2B.
Fig. 3B shows the beam 166 focused at a point 168 in object space using a lens 160 with higher optical power than the lens 160 of Fig. 3A. The corresponding object distance is the distance from the object point 168 to the first principal point of the lens 160. At the object point 168 (and on the object plane 170 in general), the spatial sampling blur is zero, and the beam diameter corresponds to the angular sampling blur alone. The object sampling period, i.e. the sampling period at the object plane 170, equals the object distance multiplied by (the tangent of) the angular sampling period 126.
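The object sampling period relation can be illustrated numerically; the angular period and object distance below are assumed values, not figures from the patent:

```python
import math

# Object sampling period = object distance x tan(angular sampling period).
angular_period = math.radians(0.05)   # angular sampling period 126 (rad), assumed
object_distance = 2.0                 # object point to first principal point (m), assumed

object_period = object_distance * math.tan(angular_period)
print(object_period)  # spacing between adjacent samples on the object plane (m)
```

At 2 m this gives a sample spacing of roughly 1.75 mm, i.e. the angular sampling grid projected onto the object plane.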
When the object plane 170 is at infinity, the sampling beam 166 of Fig. 3A is obtained.
The convergence angle of the sampling beam 166 (or, more properly, of the spatial sampling beam 162) is the angle subtended by the aperture of the lens 160 at the object point 168. The depth of field is the depth interval, bracketing the object point 168, within which the spatial sampling blur (or defocus blur) remains below a given threshold. The larger the convergence angle, the faster the defocus blur changes with depth, and the shallower the depth of field (i.e. the shorter the interval). The depth of field is thus relatively shallow for shorter object distances and for larger apertures (i.e. apertures corresponding to lower spatial sampling rates).
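The growth of defocus blur away from the focus plane can be sketched with a thin-lens similar-triangles model. This geometric model is an assumption made here for illustration; it is not spelled out in the text:

```python
# Similar-triangles sketch: a beam of aperture `aperture` converging at
# `focus_distance` has diameter proportional to the distance from focus.
def defocus_blur(aperture: float, focus_distance: float, depth: float) -> float:
    """Diameter of the sampling beam at `depth`; zero at the focus plane."""
    return aperture * abs(depth - focus_distance) / focus_distance

# Blur vanishes at the object point and grows linearly away from it.
print(defocus_blur(0.002, 1.0, 1.0))   # 0.0
# Doubling the aperture doubles the blur at any depth, halving the depth
# interval within which blur stays below a threshold (the depth of field).
print(defocus_blur(0.004, 1.0, 1.2) / defocus_blur(0.002, 1.0, 1.2))  # 2.0
```

This is the quantitative content of the statement that larger apertures (lower spatial sampling rates) give a shallower depth of field.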
Adjusting the focus of the sampling beam 166 allows the defocus blur at one depth to be eliminated at the cost of increased defocus blur at other depths, while maintaining proper support for the 4D low-pass filter. This allows defocus blur to be traded between different regions of the light field, which is useful when minimizing blur in some regions (e.g. regions corresponding to object surfaces) is more important than in others.
Changing the focus affects neither the field of view nor the total captured radiance, since each lens 160 captures essentially the same set of rays regardless of focus.
If the sampling beam 166 is focused at infinity (as in Fig. 3A), its spatial sampling blur is constant, corresponding to the aperture of the lens 160. Since the angular sampling blur grows with object distance, the relative contribution of this constant spatial sampling blur decreases with distance. This implies that there is a threshold object distance beyond which the angular sampling blur dominates, and that focusing the sampling beam 166 to minimize blur provides diminishing returns as the object distance increases beyond this threshold distance.
The focus of the beam 166 is recorded in the discrete light field 110 as the sampling focus 138 associated with each spectral radiance 128.
The optional depth 136 may be determined by range-finding (discussed below), and the sampling focus 138 may correspond to the depth 136, e.g. when the beam 166 is focused according to the depth of the scene.
In the well-known biplane parameterization of the 4D light field [Levoy 96], the uv plane coincides with the light field boundary 102, and the st plane coincides with the object plane 170 (or, equivalently, the image plane 158). The st plane is usually fixed, corresponding to fixed-focus sampling.
Light field reconstruction
The sampling regime, including the focus 138 captured for each sample of the discrete light field 110, serves as the basis for reconstructing the corresponding continuous light field.
A continuous physical 4D light field is reconstructed from a discrete 4D light field using a 4D low-pass filter. The filter ensures that the continuous light field is band-limited to the frequency content of the band-limited continuous light field from which the discrete light field was sampled.
Figs. 4A, 4B, 5A and 5B show a method of band-limiting and reconstructing a continuous light field from a discrete light field. These figures mirror Figs. 2A, 2B, 3A and 3B respectively, and, where appropriate, the same reference markers are used for corresponding parts.
Fig. 4A shows a light emitter array 180 reconstructing the continuous light field with respect to ray direction at a particular ray position 104. Each light emitter 182 of the array 180 reconstructs a particular ray direction, generating a beam 184 surrounding the nominal ray 100. This generation realizes a 2D low-pass filter with respect to ray direction. The effective filter kernel is a non-ideal box filter corresponding to the spatial extent of the light emitter 182. The light emitters are ideally closely packed to ensure adequate filter support. The angular reconstruction beam 184 is focused on an infinitesimal pinhole aperture 186 coinciding with the ray position 104 on the boundary 102.
Fig. 4B shows a lens array reconstructing the continuous light field with respect to ray position at the boundary 102. Each lens 188 of the array reconstructs a particular ray position, producing a collimated beam 190 surrounding the nominal ray 100 by focusing from a point 164 on a light emitter 182. This generation realizes a 2D low-pass filter with respect to position. The effective filter kernel is a non-ideal box filter corresponding to the spatial extent of the aperture of the lens 188. The lenses are ideally closely packed to ensure adequate filter support.
Figure 5A shows the combined effect of the spatial extent of the light emitter 182 and the aperture of the lens 188, which produces a reconstruction beam 192 that realizes a 4D low-pass filter, i.e. simultaneously with respect to direction and position. The effective filter kernel is a 4D box filter, which provides reasonable but non-ideal band-limiting. When generating light physically it is difficult to do better than a box filter.
The scalar value supplied to a light emitter 182 is typically proportional to emitter power. A radiance sample 134 is converted to emitter power by multiplying it by the 5D sample period (i.e. the 1D temporal sample period 114, the 2D spatial sample period 120 and the 2D angular sample period 126), and dividing it by the actual on-time of the emitter (which is typically shorter than the temporal sample period 114). Note that if the 4D (spatial and angular) reconstruction filter support is smaller than the 4D sample period, the same radiant power is simply delivered via a more compact beam.
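This radiance-to-power conversion can be sketched as follows (a minimal illustration; the function name, variable names and example values are assumptions rather than anything prescribed by this specification):

```python
def emitter_power(radiance_sample, t_period, xy_period, ab_period, on_time):
    """Convert a radiance sample 134 to emitter power by multiplying by
    the 5D sample period (1D temporal x 2D spatial x 2D angular) and
    dividing by the emitter on-time, which is typically shorter than
    the temporal sample period 114."""
    sample_period_5d = t_period * xy_period * ab_period
    return radiance_sample * sample_period_5d / on_time

# Illustrative values only (SI units; radiance in W/m^2/sr):
power = emitter_power(
    radiance_sample=1.0e4,
    t_period=0.02,      # 20 ms temporal sample period
    xy_period=1.0e-6,   # 1 mm x 1 mm spatial sample period (m^2)
    ab_period=3.0e-4,   # ~1 deg x 1 deg angular sample period (sr)
    on_time=0.002,      # 2 ms emitter on-time
)
```

A shorter on-time yields a proportionally higher emitter power, consistent with delivering the same energy per sample.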
Proper 4D reconstruction relies on the light emitter 182 emitting all possible rays between the extent of the emitter 182 and the aperture of the lens 188. This is satisfied if the emitter 182 is diffuse.
Figure 5B shows a beam 192 focused from a virtual object point (to the left of the array 180, not shown in Figure 5B, but coinciding with the object point 168 in Figure 6B), using a lens 188 with a lower power than the lens 188 of Figure 5A. When the virtual object plane is at infinity, the beam 192 of Figure 5A is obtained.
The divergence angle of the reconstruction beam 192 (or more properly, of the spatially-reconstructed beam 190) is the angle subtended by the aperture of the lens 188 at the virtual object point. The reconstruction beam 192 has a depth of field, determined by its divergence angle, corresponding to the depth of field of the sampling beam 166 in Figure 3B.
The focus of the reconstruction beam 192 is adjusted to each sampling focus 138 so that it matches the sampling beam 166 used to create the sample value.
The reconstruction beams 192 of Figures 5A and 5B match the sampling beams 166 of Figures 3A and 3B respectively, and this is shown explicitly in Figures 6A and 6B, where the sampling beam 166 is shown on the left of each figure and the matching reconstruction beam 192 on the right.
Light field display
Figure 7A shows an idealized light field display 200, which emits output beams 202 corresponding to two virtual point sources 204 constituting a very simple virtual scene. Each output beam 202 constitutes a spherical wavefront 206, each wavefront originating from its respective point source 204. The exit pupil of each output beam 202 at the surface of the display 200 equals the extent of the entire display.
For clarity, Figure 7A shows only two point sources 204. In practice the display 200 emits beams from a continuous set of point sources. In addition, although not explicitly shown, the radiance across the cross-section of each beam 202 may be non-uniform.
To a viewer positioned in front of the light field display 200, the display 200 appears indistinguishable from a window onto a real scene containing the point sources 204.
Although Figure 7A shows the display 200 emitting diverging beams corresponding to virtual point sources 204 behind the display, the display 200 can equally emit converging beams corresponding to virtual point sources in front of the display.
Figure 7B shows an implementation of the display 200 in which it is divided into an array of contiguous display elements 210, where each display element performs the reconstruction function of the light emitter array 180 and lens 188 of Figure 5B.
Each display element 210 is shown emitting an output beam 212 corresponding to a point source 204, i.e. each display element 210 behaves in the same way as the display 200 as a whole, but with a reduced exit pupil equal to the extent of the display element 210.
Each output beam 212 projected by a display element 210 in Figure 7B is focused on its respective point source 204, so that the output beams 212 abut each other and form the wider output beams 202 projected by the whole display 200 in Figure 7A, with the same wavefronts 206.
The segmented light field display 200 is configured to directly display the discrete 6D light field 110. During display, the surface of the display 200 corresponds to the light field boundary 102 associated with the discrete light field, and the position of each display element 210 corresponds to a sampling position 104 (x, y) on the boundary. The direction of each beam 212 emitted by a display element corresponds to a sample direction (a, b), and the average radiance of each beam 212 corresponds to a sampled spectral radiance 128. The focus of each beam 212 corresponds to a sampling focus 138.
Thus each display element 210 reconstructs, at a given time, the continuous light field corresponding to a single light field view image 122, and the whole display 200 reconstructs, at a given time, the continuous light field corresponding to a single light field frame 116. The display 200 thereby reconstructs, over time, the continuous 6D optical light field corresponding to the discrete 6D light field video 110.
For clarity, the spatial sample period 120 shown in Figure 7B is relatively large and the angular sample period 126 relatively small. Each output beam 212, associated with a single spectral radiance 128 in the discrete light field 110, is therefore shown focused exactly at its respective virtual point source 204. In practice each beam converges on a finite area rather than a point, i.e. the point source is blurred in proportion to the angular sample period 126.
As is clear from Figure 7B, the larger the spatial sample period 120 the less angular object detail is displayed, and the larger the angular sample period 126 the less spatial object detail is displayed. The former manifests as a shallow depth of field, the latter as blur at the object plane.
The smaller the 4D sample period (i.e. the higher the 4D sample rate), the greater the fidelity of the light field display. However, for a fixed number of samples, object-plane blur can be reduced at the cost of a shallower depth of field.
Light-field camera
Figure 8A shows an idealized light field camera 220, which captures input beams 222 corresponding to two real point sources 224 constituting a very simple real scene. Each input beam 222 constitutes a spherical wavefront, each originating from its respective source 224. The entrance pupil of each input beam 222 at the surface of the camera 220 equals the extent of the entire camera.
For clarity, Figure 8A shows only two point sources 224. In practice the camera 220 captures beams from a continuous set of point sources. In addition, although not explicitly shown, the radiance across the cross-section of each beam 222 may be non-uniform.
Figure 8B shows an implementation of the camera 220 in which it is divided into an array of contiguous camera elements 230, where each camera element performs the sampling function of the photosensor array 150 and lens 160 of Figure 3B.
Each camera element 230 is shown capturing an input beam 232 corresponding to a point source 224, i.e. each camera element 230 behaves in the same way as the camera 220 as a whole, but with a reduced entrance pupil equal to the extent of the camera element 230.
Each input beam 232 captured by a camera element 230 in Figure 8B is focused on its respective point source 224, so that the input beams 232 abut and form the wider input beams 222, with identical wavefronts, captured by the whole camera 220 in Figure 8A.
The segmented light field camera 220 is configured to directly capture the discrete 6D light field 110. During capture, the surface of the camera 220 corresponds to the light field boundary 102 associated with the discrete light field, and the position of each camera element 230 corresponds to a sampling position 104 (x, y) on the boundary. The direction of each beam 232 captured by a camera element corresponds to a sample direction (a, b), and the average radiance of each beam 232 is captured as a spectral radiance 128. The focus of each beam 232 corresponds to a sampling focus 138.
Thus each camera element 230 samples, at a given time, the continuous light field corresponding to a single light field view image 122, and the whole camera 220 samples, at a given time, the continuous light field corresponding to a single light field frame 116. The camera 220 thereby samples, over time, the continuous 6D optical light field corresponding to the discrete 6D light field video 110.
For clarity, the spatial sample period 120 shown in Figure 8B is relatively large and the angular sample period 126 relatively small. Each input beam 232, associated with a single spectral radiance 128 in the discrete light field 110, is therefore shown focused exactly at its respective real point source 224. In practice each beam converges on a finite area rather than a point, i.e. the point source is blurred in proportion to the angular sample period 126.
Non-planar light field boundary
Although these figures show the light field boundary 102 associated with the light field display 200 and the light field camera 220 as planar, it may in practice take any convenient shape.
Depth perception
A creature with foveal vision (such as a human) fixates on a point by rotating its eye (or eyes) so that the image of the point falls on the high-density foveal region of the retina. This maximizes the sharpness of the perceived image. When the retinal images of the two eyes are mentally fused into a single image during stereopsis, the degree of convergence (or vergence) of the eyes provides an important cue to the absolute depth of the fixation point.
In addition to rotating one or both eyes, the creature also adjusts the shape of the lens of each eye to bring the fixation point into focus on the retina during fixation. During this accommodation, the state of the lens muscles provides another important cue to absolute depth.
The human accommodation response curve shows over-accommodation to distant stimuli and under-accommodation to near stimuli, with a typical cross-over (i.e. perfect accommodation) at an object distance of around 50cm, and a typical minimum response of 0.5 diopters (2m) for object distances beyond 2-3 meters [Ong 93, Palmer 99, Plainis 05]. Crucially, then, the human visual system never properly accommodates to distant stimuli.
The vergence and accommodation responses are closely coupled, and a mismatch between the vergence and accommodation cues provided by a display can cause viewer discomfort [Hoffman 08].
Parallax refers to the difference in the apparent position of an object when viewed from different viewpoints, with near objects exhibiting greater parallax than distant objects. Binocular disparity due to parallax supports the perception of relative depth during stereopsis, i.e. depth relative to the absolute fixation depth. Motion parallax supports relative depth perception even with a single eye.
Perception of a focused light field
As shown in Figure 7B, each output beam 212 corresponding to a point source 204 has its origin at that point source, i.e. every ray constituting a beam 212 originates at the point source 204. Likewise, the center of curvature of the spherical wavefronts 206 of those beams 212 lies at the point source 204. This ensures that a viewer correctly perceives the parallax of the point source 204 both within any given beam 212 and across multiple intersecting beams 212, yielding accurate binocular disparity and smooth motion parallax. The smaller the object distance, the greater the divergence of each beam 212, and the more important the presence of intra-beam parallax. By contrast, fixed-focus 3D displays provide differing parallax only between viewpoints, and provide incorrect (and therefore conflicting) parallax within any given viewpoint. In addition, autostereoscopic displays typically provide only a modest number of viewpoints, yielding approximate binocular disparity and discontinuous motion parallax.
The correctly-centered spherical wavefronts 206 of the beams 212 also allow the viewer to accommodate to the correct depth of the corresponding point source 204, ensuring that the viewer's vergence and accommodation responses remain consistent. This avoids the vergence-accommodation conflict associated with fixed-focus 3D displays.
Using a relatively high angular sampling rate decouples the angular resolution of the light field display from its spatial sampling rate (see below). This contrasts with typical 3D displays, in which the spatial sampling rate determines the angular display resolution. For the display 200 this allows the spatial sampling rate to be lower than that of a fixed-focus 3D display, which in turn allows a relatively higher angular sampling rate for a given overall (4D) sample rate.
When a virtual object displayed at a particular object distance (r) behind the display is viewed from a particular viewing distance (d) in front of the display, the angular resolution of the focused light field display 200 is the angle (g) subtended at the viewpoint by one object sample period (h) (i.e. at the object plane), i.e. g = h/(r+d) (for small g).
The object sample period (h) is a function of the angular sample period 126 (q) and the object distance (r), i.e. h = qr (for small q). Thus g = qr/(r+d).
The angular sample period 126 (q) therefore represents the minimum light field display resolution. As the object distance (r) approaches infinity or the viewing distance (d) approaches zero (in both cases, as r/(r+d) approaches 1), the display resolution converges to the angular sample period 126 (q).
Thus, by configuring its angular sample period 126 (q) to match the maximum angular resolution of the eye (around 60 cycles per degree [Hartridge 22], equivalent to an angular sample period of about 0.008 degrees), the light field display 200 can be configured to match the limit of human perception for any viewing geometry. For a 40-degree field of view this equates to an angular sample count of 4800.
When the viewing distance exceeds the object distance, the light field display resolution for a given viewing distance (d) and object distance (r) can significantly exceed the angular sample period 126 (q). For example, if the viewing distance is four times the object distance, the display resolution is five times the angular sample period 126, and for a 40-degree field of view 124 an angular sample count of 960 is sufficient to match the limit of human perception.
If the angular sample period 126 (q) is sufficiently large (as in a typical autostereoscopic display), then the spatial sample period 120 (s) determines the angular display resolution (g). The angular resolution (g) is then the angle subtended at the viewpoint by one spatial sample period 120 (s) at the display surface, i.e. g = s/d (for small g). The complete equation for the angular resolution of the light field display is therefore g = min(s/d, qr/(r+d)).
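These resolution equations are easy to check numerically. The sketch below (helper name and unit conventions are assumptions, not from this specification) evaluates g = min(s/d, qr/(r+d)) and reproduces the sample counts quoted above:

```python
import math

def display_resolution(s, d, q, r):
    """Perceived angular resolution g = min(s/d, q*r/(r+d)), small-angle
    approximation. Here q is in degrees, and the spatial term s/d is
    disabled by passing s = inf (sidestepping mixed units)."""
    return min(s / d, q * r / (r + d))

q_eye = 1 / 120      # ~0.0083 deg sample period, matching 60 cycles/degree
fov = 40             # degrees
samples_full = round(fov / q_eye)       # 4800 samples over 40 degrees

# Viewing distance four times the object distance: a display sample
# period five times coarser still matches the perception limit.
r, d = 1.0, 4.0
q_display = 5 * q_eye
g = display_resolution(math.inf, d, q_display, r)   # equals q_eye
samples_reduced = round(fov / q_display)            # 960 samples
```

Note that q_display*r/(r+d) = 5q_eye/5 = q_eye, confirming that the coarser display still delivers eye-limited resolution at this viewing geometry.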
The foregoing calculations represent a best case, since they ignore the imperfect human accommodation response. The perceived resolution of the light field display can be improved (at least in part) by matching its focus to the actual human accommodation response to a stimulus at a given depth, rather than to the depth itself. This may include matching the known accommodation response of an individual viewer (including, if they wear glasses, the effect of the glasses). However, deviation of the focus from the identified proper depth causes parallax error, and this parallax error increases as the object distance decreases. Conversely, as the object distance increases, parallax error is increasingly masked by angular sampling blur. A useful compromise is therefore to fix the light field focus beyond a chosen threshold object distance. This divides light field focusing into a fixed-focus far-field regime and a variable-focus near-field regime. The fixed-focus far-field threshold can be as near as the typical minimum accommodation response distance (2m), or significantly larger (including, in the limit, infinity).
Equivalence of scene focus and viewer focus
Figure 9A shows the eye 240 of a viewer within the reconstructed light field of a virtual point source 204. The light field is reconstructed by the segmented display 200. The eye is focused at an object point 242 coinciding with the virtual point source 204. The input beam 246 admitted by the pupil of the eye, a sub-beam of the output beam 212, is focused onto a point 248 on the retina. The image of the point source 204 on the retina is therefore sharp.
Figure 9B shows the object point 242 now closer to the display 200 than the virtual point source 204. The image point 248 corresponding to the point source 204 now lies in front of the retina, and the image of the point source on the retina is therefore blurred. This is natural, i.e. it matches reality.
Figure 9C shows the display 200 now displaying the light field of a point source coinciding with the translated object point 242. The input beam 246 is now focused at the object point 242 rather than at the original point source 204, and so is again focused on the retina (at the image point 248). Because the input beam is not focused at the point source 204, the image of the point source 204 on the retina remains blurred (by the same amount as in Figure 9B). This again is natural.
For clarity, Figures 9A to 9C show only a single object point 242, on the optical axis of the eye 240. The "plane" of focus is the locus of all such points, and is approximately a spherical surface with radius equal to the object distance, centered at the first nodal point of the eye.
The equivalence, as perceived by the viewer, of the situations shown in Figures 9B and 9C indicates two useful modes of operation for a focused light field display. In the first mode the display is focused on the objects in the scene. In the second mode the display is focused according to the viewer's focus.
Light field display focusing strategies
The advantage of scene-based focusing is that the reconstructed light field is inherently multi-viewer. One disadvantage is that the depth of the scene must be known or fixed (discussed further below). Another disadvantage is that the output focus may need to be changed for every sample, requiring fast focus switching. In addition, a single depth must be selected for each sample, and this may represent a compromise when there is significant depth variation within the sampling beam.
If the focus modulation rate of the display elements 210 is significantly lower than the sample rate, then multiple depths can be supported via multiple display passes, i.e. one depth per pass. Each pass then adjusts the output focus of each display element 210 according to the corresponding scene depth for that pass. However, because the number of distinct depths in a view image 122 typically exceeds the practical number of display passes, the set of depths supported by a given display element is likely to represent a compromise. One way to select this set of depths is to estimate the range of depths in the view image 122 of the display element, and then identify the most common depth clusters. Intermediate depths are then displayed using depth-weighted blending [Hoffman 08].
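One possible realization of this depth selection and blending is sketched below. The histogram-based clustering and the linear blending weights are illustrative assumptions, not the method prescribed by this specification:

```python
from collections import Counter

def select_pass_depths(depths, n_passes, bin_size=0.25):
    """Choose the n_passes most common depth clusters in a view image
    by quantising inverse depth (diopters) into bins of bin_size and
    taking the most populated bins. Depths are in metres; a zero bin
    maps back to infinity."""
    bins = Counter(round((1.0 / d) / bin_size) for d in depths)
    top = [b for b, _ in bins.most_common(n_passes)]
    return sorted(1.0 / (b * bin_size) if b else float('inf') for b in top)

def blend_weights(depth, pass_depths):
    """Depth-weighted blending (cf. [Hoffman 08]): split a sample's
    weight linearly between the two pass depths bracketing its depth;
    all weight goes to the nearest pass outside the supported range.
    pass_depths must be sorted ascending."""
    w = [0.0] * len(pass_depths)
    if depth <= pass_depths[0]:
        w[0] = 1.0
    elif depth >= pass_depths[-1]:
        w[-1] = 1.0
    else:
        for i in range(len(pass_depths) - 1):
            if pass_depths[i] <= depth <= pass_depths[i + 1]:
                t = (depth - pass_depths[i]) / (pass_depths[i + 1] - pass_depths[i])
                w[i], w[i + 1] = 1.0 - t, t
                break
    return w
```

Clustering in diopters rather than metres reflects the fact that accommodation (and hence blur) is approximately linear in inverse depth.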
The advantage of viewer-specific focusing is that the focus can be changed relatively slowly, and depth variation within a single sample is inherently handled correctly. The disadvantage is that the reconstructed light field is specific to one viewer, who must therefore be tracked. A further disadvantage is that the light field must be captured (or synthesized) with the correct focus, or refocused before display.
The sharpness of a refocused light field can be improved by recording multiple spectral radiance samples 128 for each direction (a, b), each with a different sampling focus 138. Sharpness improves particularly if each sampling focus 138 corresponds, either directly or via a path of transmission or reflection, to an actual object depth within the sampling beam 166.
The viewer-specific light field view image 122 of each display element 210 is obtained, for each direction, by integrating all rays passing through the object point 242 (or more properly, disc) for that direction and through the aperture of the display element. When the light field 110 is captured via the light field camera 220, this integration can be performed by focusing each camera element 230 accordingly.
In the viewer-specific focusing mode, then, the fixation point of the viewer is continuously tracked, and each display element 210 is independently controlled to emit the viewer-specific light field, focused according to the depth of the fixation point.
Multiple viewers can be supported via multiple display passes, i.e. one pass per viewer. Alternatively, the display focus can be controlled by a single user, with other users passively viewing the display at that focus, i.e. they view it in the same way as a scene-focused light field display.
In a hybrid mode, one or more display passes can be viewer-specific, and one or more additional display passes can be scene-based. For example, three display passes could be used: a viewer-specific pass, a finite-focus pass for near scene content, and an infinite-focus pass for distant scene content.
During an optimized viewer-specific display pass, output is only produced in the direction of the viewer, as discussed further below in relation to Figure 42A. This means that, depending on the implementation of the display elements 210, a viewer-specific display pass is only visible to the target viewer, and may consume only a small fraction of the frame period.
A viewer-specific display pass will typically span less than 10% of the field of view 124, and if the display elements 210 are scanning (as discussed in further detail below), then in at least one dimension the display pass will consume only a corresponding fraction of the frame period. In the following, a viewer-specific frame of reduced duration is referred to as a subframe.
Unlike a traditional head-tracked 3D display, in which the displayed content is viewer-specific, a light field display 200 operating in viewer-specific mode uses a viewer-specific focus but displays content that is independent of the viewer. If the viewer shifts their fixation point or moves relative to the display, the display focus may need to be updated, but this can happen relatively slowly, because the viewer is always immersed in a valid (if not necessarily globally optimal) reconstructed light field, and the human accommodation response is relatively slow (i.e. of the order of hundreds of milliseconds).
Viewer-specific focusing modes
Figures 10A to 10D show two strategies for displaying a viewer-specific light field.
Figure 10A shows a viewer 250 gazing at a light field display 200 emitting a light field corresponding to a virtual scene made up of a number of objects 252. A tracking system, incorporated in or associated with the display 200, tracks the face of the viewer 250, and hence the positions of the viewer's two eyes 240.
Figure 10B shows the position of one eye 240 being used to determine a viewing direction for each display element 210, and hence an intersection point 254 with a scene object 252 for each viewing direction. The focus of each display element is shown set according to the depth of the corresponding intersection point 254.
Figure 10C shows the tracking system being used to track the gaze direction of each of the viewer's two eyes 240, and hence to estimate the viewer's fixation point 256. Assuming fixation and accommodation are synchronized, as they are under normal circumstances, the viewer's focus can be estimated from the depth of the fixation point 256.
Figure 10D shows the plane of focus derived from the estimated depth of the fixation point 256 of one eye 240, and the intersection point 254 of each viewing direction with that plane of focus. The focus of each display element is again shown set according to the depth of the corresponding intersection point 254.
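The two focus-setting rules of Figures 10B and 10D can be summarized as follows. The geometry helpers and the ray-casting callback are hypothetical illustrations; a real implementation would intersect rays with the actual scene or with the (approximately spherical) plane of focus:

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def element_focus_position_based(element_pos, eye_pos, scene_depth_fn):
    """Figure 10B: set the element focus to the depth of the scene
    intersection 254 along the element's viewing direction (the ray
    from the eye through the element). scene_depth_fn is a
    hypothetical ray-cast returning that intersection depth."""
    view_dir = sub(element_pos, eye_pos)
    return scene_depth_fn(element_pos, view_dir)

def element_focus_gaze_directed(element_pos, eye_pos, fixation_depth):
    """Figure 10D: set the element focus to the intersection of its
    viewing direction with the spherical plane of focus of radius
    fixation_depth centred at the eye; measured from the element,
    this is the fixation depth minus the eye-to-element distance."""
    return fixation_depth - dist(eye_pos, element_pos)
```

In the gaze-directed case no scene model is needed, matching the text: the mode trades scene depth information for gaze estimation.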
The first viewer-specific mode, shown in Figure 10B, represents a hybrid mode: it relies on scene depth information and face detection, but does not require gaze estimation. It is referred to as the position-based viewer-specific focusing mode.
The second viewer-specific mode, shown in Figures 10C and 10D, does not rely on scene depth information, but requires gaze estimation. It is referred to as the gaze-directed viewer-specific focusing mode.
Although Figure 10D shows the output focus being set according to the position of one eye 240, for fixation depths that are large compared with the eye separation distance, the difference between the output foci of a particular display element 210 for the two eyes is sufficiently small that an average output focus can be used to serve both eyes during a single display pass. However, any display element 210 contributing to the foveal vision of one or other eye should be focused for the corresponding eye (as described later in this specification in relation to Figure 42B).
The position-based and gaze-directed focusing modes are complementary. The gaze-directed mode produces a more accurate focus, but relies on gaze estimation, which becomes progressively less tractable as the distance between the viewer and the display increases. The position-based mode relies on face detection, which remains tractable at greater distances, and the accuracy of position-based scene focusing increases with distance, because the angle subtended by a display element 210 decreases with distance. The two modes can therefore be used in tandem, with the operating mode selected for each viewer according to the distance between the display and that viewer.
The selection of focusing strategy
Which focusing strategy is acceptable depends on how the display is used, i.e. the number of viewers, their typical viewing distances, and the nature of the scenes being displayed. It also depends on the capabilities of the particular implementation of the light field display 200, particularly its focus modulation rate.
The minimum viewing object distance is the sum of the minimum display object distance and the minimum viewing distance. If the minimum viewing object distance is larger than the far-field threshold, a single fixed-focus display pass suffices.
If the minimum display object distance is larger than the far-field threshold, the far-field regime applies regardless of viewing distance, and viewers do not need to be tracked. For example, the display 200 may be simulating a window onto a distant outdoor scene.
If the minimum display object distance is smaller than the far-field threshold, the near-field regime applies wherever the minimum viewing object distance is smaller than the far-field threshold, and viewers may need to be tracked.
If the focus modulation rate of the light field display 200 matches the temporal sampling rate, a viewer-independent near-field light field can be displayed in a single pass.
If the light field display 200 is used as a near-eye display (NED), only a single eye is served. The gaze-directed viewer-specific focusing mode can then be used effectively, e.g. based on a fixation depth inferred from the vergence of the two eyes, and the focus modulation rate need only match the relatively slow human accommodation mechanism, which takes hundreds of milliseconds to refocus (i.e. below 4 Hz).
If the light field display 200 is used by multiple nearby viewers, multiple gaze-directed viewer-specific focusing passes can be used effectively.
If the display 200 supports subframes, multiple display passes can be performed within a single frame duration. If not, the number of display passes is limited by the ratio between the temporal sample period 114 and the frame duration (assuming the temporal sample period 114 is perceptually motivated and therefore cannot be compromised).
If the eye-specific subframe period is Teye, the focus switching time is Tfocus, the frame period is Tframe, the number of full-frame passes is Nfull, and the temporal sample period 114 is Ts, then the available number of eye-specific passes Neye is given by: Neye = floor((Ts - (Tframe*Nfull))/(Tfocus + Teye)).
For illustrative purposes, assume that the frame period Tframe is half the temporal sample period Ts. This allows two full-frame passes when the number of eye-specific passes Neye is zero, and, when the number of full-frame passes Nfull is one, the following number of eye-specific passes: Neye = floor(Tframe/(Tfocus + Teye)), hence Tfocus = (Tframe/Neye) - Teye.
Further assume, for illustrative purposes, that the required number of eye-specific passes Neye is 4, and that the subframe duration Teye is 10% of Tframe. The maximum allowable focus switching time Tfocus is then given by: Tfocus = Tframe*0.15.
Assuming a frame rate of 100Hz, i.e. a frame period Tframe of 10ms (corresponding to a 20ms (50Hz) temporal sample period Ts 114), this equates to a focus switching time Tfocus of 1.5ms. Assuming a frame rate of 200Hz, it equates to a focus switching time Tfocus of 750us.
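The pass-budget arithmetic above can be captured directly; the following sketch (parameter names follow the text, units are milliseconds) reproduces the two worked examples:

```python
import math

def eye_passes(Ts, Tframe, Nfull, Tfocus, Teye):
    """Available eye-specific passes after Nfull full-frame passes:
    Neye = floor((Ts - Tframe*Nfull) / (Tfocus + Teye))."""
    return math.floor((Ts - Tframe * Nfull) / (Tfocus + Teye))

def max_focus_switch(Tframe, Neye, Teye):
    """Maximum focus switching time when Nfull = 1 and Tframe = Ts/2:
    Tfocus = Tframe/Neye - Teye."""
    return Tframe / Neye - Teye

# Worked examples: Neye = 4 and Teye = 10% of Tframe, so Tfocus = 0.15*Tframe.
tfocus_100hz = max_focus_switch(10.0, 4, 1.0)   # 100 Hz frame rate: 1.5 ms
tfocus_200hz = max_focus_switch(5.0, 4, 0.5)    # 200 Hz frame rate: 0.75 ms
```

With Ts = 20 ms, Tframe = 10 ms, Nfull = 1 and Tfocus = 1.5 ms, eye_passes returns the assumed four eye-specific passes, confirming the budget is self-consistent.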
If the display elements 210 are scanning, and viewers are assumed to be distributed horizontally relative to the display 200, then it is advantageous to assign the fast scan direction to the vertical dimension of the display, to allow the focus to be varied horizontally (i.e. in the slow scan direction) during a single display pass (assuming focus switching is fast enough). This allows multiple eye-specific focus zones to be created during a single (full-frame) display pass, and provides an alternative to performing multiple viewer-specific subframe display passes.
The choice of focusing strategy during capture by the light field camera 220 follows the same principles as described above for display by the light field display 200. This includes adjusting the capture focus according to the position and/or gaze of one or more viewers of a light field display 200, i.e. when the camera 220 is capturing a light field that is being displayed in real time by the light field display 200, as discussed elsewhere in this specification.
Estimation of Depth
The optional depth 136 associated with a spectral radiance 128 records the depth of the scene along the sampling beam 166. When there is significant depth variation within the sampling beam, e.g. due to partial occlusion, transparency or reflection, the depth may represent a compromise. For example, it may represent the depth of the first sufficiently opaque surface along the nominal sampling ray 100. Alternatively, as discussed above, multiple depths 136 may be recorded for each direction (a, b).
The depth 136 can be used for a variety of purposes, including displaying the light field using scene-based focus (as described above), estimating the viewer's point of fixation (discussed below), light field compression (discussed below), and depth-based processing and interaction.
When the light field 110 is synthetic, i.e. generated from a three-dimensional model, the depth of the scene is known. When the light field 110 is captured from a real scene, the depth can be determined by ranging.
Ranging can be active, e.g. based on time-of-flight measurement [Kolb 09, Oggier 11], or passive, e.g. based on image disparity [Szeliski 99, Seitz 06, Lazaros 08] or defocus [Watanabe 96]. It can also be based on a combination of active and passive techniques [Kolb 09]. Ranging is described further below.
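As a simple illustration of the two classes of ranging mentioned above, the standard time-of-flight and stereo-disparity depth relations can be sketched as follows (these are textbook formulas, not specific to the elements described in this specification; all parameter values are hypothetical):

```python
# Active ranging: depth from the round-trip time of flight of a pulse.
# depth = c * t / 2
def depth_from_time_of_flight(round_trip_s, c=299_792_458.0):
    return c * round_trip_s / 2.0

# Passive ranging: depth from disparity between two rectified views.
# depth = focal_length(px) * baseline(m) / disparity(px)
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

print(depth_from_time_of_flight(13.34e-9))      # ~2.0 m
print(depth_from_disparity(1000.0, 0.1, 50.0))  # 2.0 m
```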
Two-way light field display
Due to the symmetry of application and operation of the two devices, it is advantageous to combine the functions of the light field display 200 and the light field camera 220 in a single device. Such a device is hereinafter referred to as a two-way light field display.
Figure 11 shows a pair of two-way light field displays 300 connected via a network 320. Each two-way light field display 300 is divided into an array of contiguous two-way light field display elements 310, each of which performs the functions of both a light field display element 210 and a light field camera element 230.
The figure shows, at top, a remote viewer 324 interacting with the remote two-way light field display 300, and, at bottom, a local viewer 328 interacting with the local two-way light field display 300. Each two-way display 300 is controlled by a respective display controller 322, described in more detail later in this specification.
The remote viewer 324 is accompanied by a remote object 332, and the local viewer 328 by a local object 336. The local viewer 328 is shown gazing at the virtual image 334 of the remote object 332, and the remote viewer 324 is shown gazing at the virtual image 338 of the local object 336. The remote display 300 also shows the virtual image 330 of the local viewer, and the local display 300 the virtual image 326 of the remote viewer 324.
Each viewer can be tracked by the display controller 322 of their own two-way display 300, using viewpoint images 122 captured via the two-way display 300 (or via separate tracking cameras, discussed below). As previously described (and as described in more detail below), the face position or gaze direction of each viewer can be used to control the capture focus of the corresponding two-way light field display 300.
Using a pair of two-way light field displays 300, rather than a traditional monitor and camera, can significantly improve communication between the remote viewer 324 and the local viewer 328 by providing a strong sense of shared presence. For example, each viewer can determine where the other viewer is looking or pointing, and an object can be held close to the surface of a two-way display 300 for close inspection by the other viewer.
Figure 11 also makes clear that if two two-way displays 300 are installed back-to-back, they act as a virtual two-way window, i.e. they (and the space between them) become effectively invisible.
Figure 12 shows a one-way configuration, consisting of a remote light field camera 220, shown at top, and a local light field display 200, shown at bottom, connected via the network 320.
The figure shows, at bottom, a local viewer 328 watching the display 200. The light field camera 220 is controlled by a camera controller 340, and the light field display 200 by a display controller 342. These controllers are described in more detail later in this specification.
The remote scene contains a remote object 332, and the local viewer 328 is shown gazing at the virtual image 334 of the remote object 332.
The viewer 328 can be tracked by the display controller 342 using images captured via two or more tracking cameras 344 connected to the controller 342. As previously described, the face position or gaze direction of the viewer can be used to control the capture focus of the light field camera 220.
In the remainder of this specification, any reference to the light field display 200 (and light field display element 210) should be taken to apply equally to the display function of the two-way light field display 300 (and two-way light field display element 310), and vice versa. Likewise, any reference to the light field camera 220 (and light field camera element 230) should be taken to apply equally to the camera function of the two-way light field display 300 (and two-way light field display element 310), and vice versa.
Face detection and gaze estimation
As described above, the light field display 200 can use knowledge of viewer positions and gaze directions to generate viewer-specific output light fields, including using viewer-specific focus and viewer-specific fields of view. Depending on the distance between a viewer and the light field display 200, the display can optionally exploit knowledge of the three-dimensional position of the viewer, the positions of the viewer's eyes, the gaze lines of the eyes, the fixation depth of the eyes, and the point of fixation of the eyes, to generate viewer-specific output. The gaze direction of a viewer can only be estimated with useful accuracy when the viewer is close to the display, while the positions of the viewer's face and eyes can be estimated with useful accuracy even when the viewer is far from the display.
Robust high-speed face detection in digital images is typically based on a cascade of classifiers trained on a database of faces [Jones 06]. Multiple face detectors can be trained and used together to cover a large range of head poses [Jones 03].
Approximate eye detection is typically built into face detection, and more accurate eye positions can be estimated following face detection [Hansen 10]. Detection is also readily extended to other useful features of the face and eyes, including the eyebrows, nose, mouth, eyelids, sclera, iris and pupil [Betke 00, Lienhart 03, Hansen 10].
Using images from two or more calibrated tracking cameras 344, or from two or more light field camera elements 230 used for tracking (e.g. located at the corners of the light field camera 220), face detection and subsequent feature detection are performed on the images from the multiple cameras to obtain estimates of feature positions in three dimensions. Using multiple tracking cameras also provides better coverage of potential viewer positions and poses. Feature positions can also be estimated from depth data obtained by active ranging (as previously discussed).
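One standard way to estimate a 3D feature position from two calibrated cameras, as described above, is to back-project a ray through the detected feature in each image and take the midpoint of the rays' closest approach. The sketch below is pure geometry, not the specification's own algorithm; the camera positions and feature point are made-up values:

```python
def midpoint_of_rays(c1, d1, c2, d2):
    """Midpoint of closest approach of two rays c + s*d, a common
    estimate of a 3D feature position seen from two viewpoints."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = tuple(ci + s * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + t * di for ci, di in zip(c2, d2))
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two tracking cameras 1m apart; both rays point at the feature.
c1, c2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
feature = (0.2, 0.1, 1.0)
d1 = tuple(f - c for f, c in zip(feature, c1))
d2 = tuple(f - c for f, c in zip(feature, c2))
print(midpoint_of_rays(c1, d1, c2, d2))  # ~(0.2, 0.1, 1.0)
```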
To support gaze estimation, the display 200 includes multiple near-infrared (NIR) light sources, allowing the gaze line of each eye to be estimated from the position of its pupil and the positions of the specular reflections (glints) of the light sources on its cornea [Shih 00, Duchowski 07, Hansen 10]. The NIR light sources can be energized only on alternate video frames, to assist detection of their reflections in the images [Amir 03]. To assist pupil detection, the display 200 can include an additional NIR light source, located on or close to the axis of a tracking camera, to produce a bright retinal reflection through the pupil of each eye. This light source can be energized on alternate video frames to the light sources used to produce glints.
The gaze line of an eye corresponds to the optical axis of the eye, while the actual line of sight is determined by the retinal position of the fovea, which lies slightly off the axis.
The line of sight can be estimated from the gaze line using an estimate of the position of the fovea. The position of the fovea can be assumed (e.g. based on population data), or can be estimated by calibration. Explicit calibration typically requires the viewer to fixate on a set of targets. Implicit calibration relies on inferring when the viewer is fixating on known scene points. Calibration can be re-performed for each viewing session, or calibration data can be stored and retrieved when the viewer interacts with the display. For example, it can be retrieved based on recognizing the viewer's face [Turk 92, Hua 11], or based on another form of identification, such as credentials provided by the viewer.
The viewer's point of fixation can be estimated from the intersection of the gaze lines of the viewer's two eyes. Under the assumption that the viewer is likely to be fixating on a visible surface point in the scene, the point of fixation can be refined using knowledge of scene depth. Alternatively, the fixation depth can be estimated from the vergence of the gaze lines of the two eyes, without estimating an explicit point of fixation.
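For symmetric fixation, the vergence-based depth estimate mentioned above reduces to simple trigonometry on the interocular distance and the angle between the two gaze lines. A minimal sketch (the 64mm interocular distance is an assumed, typical value, not taken from this specification):

```python
import math

def fixation_depth(interocular_m, vergence_deg):
    """Fixation depth from the vergence angle between the two gaze
    lines, assuming the fixation point lies on the midline:
    depth = (interocular / 2) / tan(vergence / 2)."""
    return (interocular_m / 2.0) / math.tan(math.radians(vergence_deg) / 2.0)

# A vergence angle of ~3.7 degrees with a 64mm interocular distance
# corresponds to fixation at roughly 1m.
print(fixation_depth(0.064, 3.7))
```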
As an alternative to active gaze estimation using NIR illumination, gaze estimation can be passive, i.e. based only on images of the viewer's eyes under ambient illumination [Hansen 10]. This relies on estimating the relative positions and shapes of key features such as the corners of the eyes, the eyelids, the sclera, the boundary between the sclera and the iris (the corneal limbus), and the pupil, relative to the overall pose of the head. Passive gaze estimation is generally less accurate than active gaze estimation.
For the purposes of both active and passive gaze estimation, the display 200 may include additional steerable narrow-field-of-view (FOV) tracking cameras 344, for obtaining more detailed images of the viewers' eyes. Selected camera elements 230, if scanned, can also act as steerable narrow-field-of-view tracking cameras by narrowing and steering their angular fields of view.
Two-way light field display embodiment
In a preferred embodiment, the segmented two-way light field display 300 captures and displays light field video 110, i.e. a succession of light field frames 116, and operates with a time sampling period 114 short enough to minimize or eliminate perceived flicker, i.e. ideally at a frame rate of at least 60Hz (the peak critical flicker fusion (CFF) frequency).
In the remainder of this specification, as a motivating example and for the purposes of illustration, a two-way light field display 300 with the following parameters is used: a time sampling period 114 of 10ms (i.e. a frame rate of 100Hz, assuming one frame per time sampling period); a spatial sampling period 120 of 2mm; a spatial extent 118 (i.e. the extent of the display surface) of 1000mm wide by 500mm high, hence a spatial sample count of 500 by 250; an angular sampling period 126 of 0.04 degrees; a field of view of 40 degrees by 40 degrees, hence an angular sample count of 1000 by 1000; an RGB spectral sampling basis 132; and 12-bit radiance samples 134.
This illustrative two-way display 300 configuration has a throughput, in each direction (display and capture), of 4E13 radiance samples per second.
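The throughput figure follows directly from the sampling parameters; a quick sanity check (treating each RGB component as a separate radiance sample):

```python
frame_rate = 100                 # Hz, one frame per 10ms sampling period
spatial_samples = 500 * 250      # 2mm period over 1000mm x 500mm
angular_samples = 1000 * 1000    # 0.04 deg period over 40 x 40 degrees
spectral_samples = 3             # RGB basis

throughput = frame_rate * spatial_samples * angular_samples * spectral_samples
print(throughput)  # 37500000000000, i.e. ~4E13 samples/s per direction
```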
It should be noted that many applications allow significantly lower frame rates, sample counts and sample sizes.
Display brightness and power
The luminance of the daytime sky ranges up to about 10,000cd/m^2 (candelas per square metre), corresponding to a radiance (within the visible spectrum) of about 15W/sr/m^2. Reproducing this using the illustrative display configuration corresponds to a power output of about 20uW (microwatts) per display element 310, and to a total power output of about 3W for the entire display 300. Typical indoor light sources can have luminances an order of magnitude larger, i.e. 100,000cd/m^2, corresponding to 200uW per display element and 30W for the entire display.
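These figures can be approximately reproduced from the illustrative parameters. The sketch below uses the peak photopic luminous efficacy (683 lm/W) to convert luminance to radiance, and a pyramid solid angle for the 40-degree field of view, so it recovers the stated values only to within tens of percent:

```python
import math

luminance = 10_000.0             # cd/m^2, bright daytime sky
radiance = luminance / 683.0     # ~15 W/sr/m^2 (peak photopic efficacy)

element_area = 0.002 ** 2        # 2mm x 2mm display element
half = math.radians(40.0) / 2.0
solid_angle = 4.0 * math.asin(math.sin(half) ** 2)  # 40 x 40 deg pyramid, sr

power_per_element = radiance * element_area * solid_angle  # ~2-3e-5 W
power_total = power_per_element * 500 * 250                # ~3 W
print(power_per_element, power_total)
```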
Any radiance sample 134 that exceeds the maximum radiance of the display 300 can be clamped, or all radiance values can be scaled to fit within the available range.
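The two options just described can be sketched as follows (a minimal illustration; the function name and the choice of a simple linear rescale are assumptions, not taken from this specification):

```python
def fit_to_display(samples, max_radiance, rescale=False):
    """Clamp out-of-range radiance samples, or linearly rescale all
    samples so that the peak fits the display's maximum radiance."""
    if rescale:
        peak = max(samples)
        if peak > max_radiance:
            k = max_radiance / peak
            return [s * k for s in samples]
        return list(samples)
    return [min(s, max_radiance) for s in samples]

print(fit_to_display([5.0, 25.0, 15.0], 20.0))                # [5.0, 20.0, 15.0]
print(fit_to_display([5.0, 25.0, 15.0], 20.0, rescale=True))  # [4.0, 20.0, 12.0]
```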
Array-based two-way light field display element
Figures 13A and 13B show schematic diagrams of one embodiment of the two-way light field display element 310 of the two-way light field display 300.
The two-way element 310 comprises a photosensor array 150 covered by a transparent light emitter array 180. Focusing is provided by a first focusing positive lens 400, a varifocal negative lens 406 and a second focusing positive lens 412.
The varifocal negative lens 406 can be any suitable lens with controllable focus, e.g. as described in more detail later in this specification in relation to the scanning light field display element.
The varifocal negative lens 406 shown in Figures 13A and 13B comprises a nematic liquid crystal cell sandwiched between a concave surface and a planar surface. The concave surface is formed by adjacent convex elements 404. The liquid crystal is birefringent, with light polarized parallel to the director experiencing the higher (extraordinary) refractive index (n1e), and light polarized perpendicular to the director experiencing the lower (ordinary) refractive index (n1o). For the purposes of illustration, an ordinary refractive index of 1.5 and an extraordinary refractive index of 1.8 are used; these are representative of commercially available liquid crystal materials, such as Merck E44.
The liquid crystal cell is also sandwiched between a pair of transparent electrodes 402 and 408 (e.g. ITO). As shown in Figure 13A, when no voltage is applied across the electrodes, the director (represented by the small ellipses in the figures) follows the horizontal rubbing direction. As shown in Figure 13B, when a saturation voltage is applied, the director becomes aligned with the applied field.
The refractive index (n2) of the convex elements 404 approximately matches the ordinary refractive index (n1o) of the liquid crystal. When the saturation voltage is applied, the optical power of the varifocal lens 406 is therefore close to zero, and when no voltage is applied, the (negative) optical power of the varifocal lens 406 is at its maximum, as a function of the difference between the two refractive indices (n1e and n2) and the curvature of the convex elements 404. Intermediate voltages are used to select focus values between these two extremes.
When the (negative) optical power of the varifocal lens 406 is maximal, the two-way element 310 produces the diverging beam of Figure 13A. When the lens power is minimal, the two-way element 310 produces the converging beam of Figure 13B.
The liquid crystal varifocal lens 406 operates in conjunction with a linear polarizer 410, which ensures that only light polarized parallel to the default director orientation (Figure 13A) enters or leaves the two-way display element 310, i.e. only light that is focused by the varifocal lens 406.
As an alternative to combining the linear polarizer 410 with a single liquid crystal varifocal lens 406, two liquid crystal varifocal lenses with perpendicular rubbing directions can be used to focus light of all polarizations [Berreman 80].
The combined optical power of the focusing positive lenses 400 and 412 is balanced against the optical power of the varifocal negative lens 406, to produce a focusing range from short negative to short positive, as shown in Figures 13A and 13B respectively.
During two-way use of the element 310, display and capture can be time-multiplexed, with each frame divided into a (relatively long) display interval and a (relatively short) capture interval, and with the varifocal lens 406 refocused appropriately before each interval.
As shown in Figures 14A and 14B, if the varifocal lens 406 is not fast enough to be completely refocused twice per frame, a pair of varifocal lenses 406 and 416 with perpendicular rubbing directions can be used, one dedicated to the display focus and the other to the capture focus. In this case a fast-switching polarization rotator 418 [Sharp 00] can be used to selectively rotate the polarization of the light by 0 or 90 degrees, and hence to select between the display and capture focus.
Figure 14A shows the first varifocal lens 406 actively collimating a light beam 414 used for display. Figure 14B shows the second varifocal lens 416 actively collimating a light beam 414 used for capture. For clarity, the unused varifocal lens (406 or 416) is shown in the figures as rendered inactive by an applied saturation voltage. In practice, however, the unused lens is actually rendered inactive by the polarization rotator 418, so that the voltage applied to it is irrelevant.
Each photosensor 152 of the photosensor array 150 is preferably an active pixel sensor (APS) [Fossum 04], so that the entire array can be exposed simultaneously during the capture interval and then read out.
For color applications, each light emitter 182 of the light emitter array 180 is preferably a full-color emitter, e.g. a stack of red, green and blue OLEDs [Aziz 10]; and each photosensor 152 can be a full-color sensor, e.g. a sensor stack [Merrill 05] or a sensor array with color filters. In addition, each light emitter 182 and photosensor 152 can utilize any of the implementation options discussed below in relation to the scanning light field display element.
Each light emitter 182 and/or photosensor 152 can also support time-of-flight ranging, as described below in relation to the scanning light field display element.
The varifocal lenses 406 and 416 are shown with a non-uniform cell gap, which allows simple electrodes to be used. Since the rotational speed of the liquid crystal increases as the cell gap is reduced, the rotational speed can be increased by using a uniform (thin) gap, although this necessitates the use of multiple segmented electrodes [Lin 11].
Array-based light field display elements have several drawbacks. Because each light emitter 182 is typically a diffuse emitter, only a small fraction of the light it generates is actually projected through the exit pupil of the display element. Because the size of the emitter array 180 is limited by the spatial sampling period 120 (since this limits the width of the display element), the angular sample count can be overly restrictive. And given practical limits on the complexity of the lenses used to focus the output of the display element (and the input of the two-way display element), high off-axis beam quality is also difficult to achieve. These limitations are avoided in the scanning display element 210, scanning camera element 230 and scanning two-way display element 310 described below.
Scanning light field display element
Figure 15 shows a block diagram of a scanning embodiment of the light field display element 210 of the light field display 200.
The display element 210 scans an output light beam 500 across the 2D field of view 124 in raster fashion, and for each direction (a, b) modulates the beam to produce the desired radiance 134 specified in an output light field viewpoint image 502, the output light field viewpoint image 502 being one of the viewpoint images 122 of the light field video 110.
Over the duration of a single pulse (described below), the light beam 500 corresponds to a particular output beam 212 in Figure 7B, and to the reconstruction beam 192 in Figure 5B.
The scanning display element 210 relies on the persistence of vision to induce the perception of a continuous light field across the entire field of view 124.
The light beam 500 is scanned in the line direction by a fast line scanner 504 (with an illustrative line rate of 100kHz), and in the orthogonal (frame) direction by a slow frame scanner 506 (with an illustrative frame rate of 100Hz).
The fast line scanner 504 and the slow frame scanner 506 can be separate, or can be combined in a single 2D (biaxial) scanner.
The scanners are controlled by a timing generator 510, which is itself controlled by an external frame sync signal 512 shared with the other display elements 210. The frame scanner 506 is controlled by a frame sync signal 514 derived from the external frame sync signal 512, and the line scanner 504 is controlled by a line sync signal 516.
A radiance controller 520 controls the radiance of the output beam. Under the control of a sampling clock 518 from the timing generator 510, it reads the next radiance value 134 from the output viewpoint image 502 and generates a signal to control the radiance of the output beam.
If the angular scan rate of the fast scanner 504 is angle-dependent (e.g. because the fast scanner is resonant), then the timing generator 510 adjusts the sampling clock 518 accordingly, to ensure a constant angular sampling period 126.
A beam generator 522 generates the light beam, and a radiance modulator 524 modulates the radiance of the beam, typically in response to a beam power signal from the radiance controller 520. Implementation choices for both are described below.
The pulse duration should match the angular sampling period 126, to ensure proper reconstruction. If (correspondingly higher-power) shorter pulses are used, proper reconstruction can instead be effected optically, as described below in relation to Figure 20.
As described earlier, the required beam power is obtained by multiplying the required radiance 134 by the 5D sampling period (i.e. the 1D time sampling period 114, the 2D spatial sampling period 120 and the 2D angular sampling period 126), and dividing by the pulse duration.
The pulse duration is obtained by dividing the angular sampling period 126 by the angular scan rate of the fast scanner 504. If the angular scan rate is angle-dependent (e.g. because the fast scanner is resonant), then the pulse duration is also angle-dependent.
The scanned output beam 500 can be focused according to an output focus source 526. The output focus source 526 may comprise an array of focus values, each focus value associated with a beam direction, i.e. corresponding to the sampling focus 138 associated with a spectral radiance 128. Alternatively, it may comprise a single focus value that can change from one frame to the next (or at some other rate). An output focus controller 528 retrieves the focus value (or the next focus value, under the control of the sampling clock 518 from the timing generator 510), and generates a signal to control the focus of the output beam.
An output focus modulator 530 modulates the focus of the beam according to the signal from the output focus controller 528. Implementation choices are described below. If the display 200 is only required to operate in the far-field focusing regime, the output focus modulator 530 can impart a fixed focus to the beam, i.e. it can consist of a simple fixed-focus lens.
The display element 210 optionally includes multiple beam generators 522 and radiance modulators 524, allowing multiple adjacent beams 500 to be produced simultaneously.
Light beam generator
The beam generator 522 can be monochromatic, but is more usefully polychromatic. Figure 16 shows a block diagram of a polychromatic beam generator and radiance modulator 540, which replaces the beam generator 522 and radiance modulator 524 of Figure 15.
The polychromatic beam generator and radiance modulator 540 comprises a red beam generator 542 and radiance modulator 544, a green beam generator 546 and radiance modulator 548, and a blue beam generator 550 and radiance modulator 552. Each radiance modulator responds to the corresponding signal from the radiance controller 520 of Figure 15. The modulated red and green beams are combined via a beam combiner 554. The resulting beam is combined with the modulated blue beam via a beam combiner 556. The beam combiners can be dichroic beam combiners, which combine beams of different wavelengths with high efficiency.
To maximize the reproducible color gamut, the red, green and blue beam generators 542, 546 and 550 ideally have center wavelengths close to the optimal primary wavelengths of 605nm, 540nm and 450nm respectively [Brill 98].
The beam generator 522 (or the beam generators 542, 546 and 550) can include any suitable light emitter, including a laser [Svelto 10], a laser diode, a light-emitting diode (LED), a fluorescent lamp, or an incandescent lamp. Unless the emitter is itself narrowband (e.g. the emitter is a laser, laser diode or LED), the beam generator may include a color filter (not shown). Unless the emitted light is collimated with sufficiently even power across the full width of the beam, the beam generator may include conventional collimating optics, beam-expanding optics and/or beam-shaping optics (not shown).
The radiance modulator 524 can be an intrinsic feature of the beam generator 522 (or of the beam generators 542, 546 and 550). For example, the beam generator can be a semiconductor laser, which allows its power and pulse duration to be modulated directly by modulating its drive current.
If the radiance modulator 524 is distinct from the beam generator, then the beam generator (or its light emitter) can be shared between several display elements 310. For example, several display elements can share a single lamp, or a single laser source can be shared via a holographic beam expander [Shechter 02, Simmonds 11].
Each color light emitter can be particularly effectively realized using a semiconductor laser, such as a vertical-cavity surface-emitting laser (VCSEL) [Lu 09, Higuchi 10, Kasahara 11]. A VCSEL produces a circular beam of low divergence, which requires only minimal beam expansion.
Frequency doubling via second-harmonic generation (SHG) [Svelto 10] provides an alternative way of producing laser output at a target wavelength.
Radiance modulator
If the radiance modulator 524 is distinct from the beam generator, it can consist of any suitable high-speed light valve or modulator, including an acousto-optic modulator [Chang 96, Saleh 07] or an electro-optic modulator [Maserjian 89, Saleh 07]. In the latter case it can exploit the Franz-Keldysh effect or the quantum-confined Stark effect to modulate absorption, or the Pockels effect or the Kerr effect to modulate refraction and thereby deflection. The radiance modulator can include optical elements (not shown) to manipulate the beam before and/or after modulation, e.g. to optimize the coupling of the beam to the modulator (for example, if there is a mismatch between the effective aperture of the modulator and the width of the beam).
If the modulation is binary, intermediate radiances can be selected by dithering the beam in time, i.e. by pseudo-randomly switching the light valve on and off over the course of the nominal pulse duration, with a duty cycle proportional to the required power. Dithering reduces artifacts in the reconstructed light field.
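A minimal sketch of such temporal dithering (the slot count and the fixed RNG seed are arbitrary choices for illustration, not values from this specification):

```python
import random

def dither_pattern(duty_cycle, slots, rng=None):
    """Pseudo-random on/off pattern across the sub-slots of a nominal
    pulse, with total on-time proportional to the required power."""
    rng = rng or random.Random(42)
    n_on = round(duty_cycle * slots)
    pattern = [1] * n_on + [0] * (slots - n_on)
    rng.shuffle(pattern)
    return pattern

p = dither_pattern(0.25, 16)
print(sum(p), len(p))  # 4 16 -> effective radiance is 25% of maximum
```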
For the illustrative display configuration, the required radiance modulation rate is 100MHz (or more than an order of magnitude higher if the modulation is binary). Both acousto-optic and electro-optic modulators support this rate, as do the intrinsic modulators of beam generators.
Focus modulator
The output focus modulator 530 can utilize any suitable varifocal lens, including a liquid crystal lens [Berreman 80, Kowel 86, Naumov 99, Lin 11], a liquid lens [Berge 07], a deformable-membrane mirror [Nishio 09], a deformable-membrane liquid-filled lens [Fang 08], an addressable lens stack [Love 09], or an electro-optic lens (e.g. exploiting the Pockels effect or the Kerr effect to modulate refraction) [Shibaguchi 92, Saleh 07, Jacob 07, Imai 11].
An addressable lens stack [Love 09] consists of a stack of N birefringent lenses, each with a different optical power (e.g. half the optical power of its predecessor), and each preceded by a fast polarization rotator (e.g. [Sharp 00]). The 2^N possible settings of the binary rotators produce a corresponding number of focus settings. For example, 10 lenses yield 1024 focus settings.
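The 2^N focus settings of such a binary-weighted stack can be enumerated directly; the sketch below (with arbitrary units of optical power) confirms that 10 lenses yield 1024 distinct settings:

```python
def stack_focus_settings(powers):
    """Net optical powers reachable by an addressable lens stack: each
    rotator selects whether its lens contributes its power or not."""
    settings = {0.0}
    for p in powers:
        settings |= {s + p for s in settings}
    return sorted(settings)

# Ten binary-weighted lenses, each half the power of its predecessor.
powers = [1.0 / 2 ** i for i in range(10)]
print(len(stack_focus_settings(powers)))  # 1024
```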
Fast polarization rotators can also be used to select among a small number of varifocal lenses (as described in relation to Figures 14A and 14B). Such a lens stack consists of N varifocal birefringent lenses, each preceded by a fast polarization rotator. At any given time, a pair of rotators is activated to select the varifocal lens bracketed by that pair (the first rotator selects the lens; the second rotator deselects the subsequent lenses). This allows fast switching between focus settings even when the varifocal lenses themselves are relatively slow. Each varifocal lens in the stack can then be dedicated to one display channel (which may be viewer-specific or scene-specific), with the rotators used to rapidly select the appropriate lens for each display channel in turn. The stack optionally includes an additional rotator after the last lens, e.g. so that the final polarization of the beam is constant in the case where the optical path includes polarization-sensitive downstream components.
For the illustrative display configuration, the required focus modulation rate is 100MHz to support per-sample focus, a modest multiple of 100Hz to support multiple single-focus display channels (e.g. for multiple viewers), and about 4Hz to support single-viewer gaze-directed focus. All of the varifocal lens technologies above support the 4Hz focus modulation rate. Lens stacks with polarization rotators support modulation rates in excess of 1kHz. Electro-optic lenses support modulation rates in excess of 100MHz.
Line and frame scanners
The fast line scanner 504 and the slow frame scanner 506 can each utilize any suitable scanning or beam-steering mechanism, including (micro)electromechanical scanning mirrors [Neukermans 97, Gerhard 00, Bernstein 02, Yan 06], addressable deflector stacks ("digital light deflectors") [Titus 99], acousto-optic scanners [Vallese 70, Kobayashi 91, Saleh 07], and electro-optic scanners [Saleh 07, Naganuma 09, Nakamura 10].
Most of scanner technologies can support 100Hz example frame speed.Fast scanner technology, it is such as micro electronmechanical
Resonance scanner and electropical scanning instrument, can support 100kHz exemplary row speed.
If quick line scanner 504 is resonance, it can monitor (or otherwise determining) angle of its own
Position is spent, and the information of angle position is provided to timing generator 510, during assisting timing generator generation accurately to sample
Clock 518.
Micro electromechanical scanning instrument provides the particularly preferred combination of scan frequency and visual field, and more detailed in this manual later
Carefully describe.
Scanning light-field camera element
Figure 17 shows a block diagram of a scanning embodiment of the light-field camera element 230 of the light-field camera 220.
The camera element 230 scans the input beam 600 of light across the 2D field of view 124 in a two-dimensional raster pattern, and for each direction (a,b) samples the beam to produce the desired radiance 134 in the input light-field view image 602, which is one view image 122 of the light-field video 110.
For the duration of a single exposure (discussed below), the beam 600 corresponds to a particular input beam 232 in Figure 8B, and to the sample beam 166 in Figure 3B.
The beam 600 is scanned in the line direction (at the exemplary line rate) by the fast line scanner 504, and in the orthogonal (frame) direction (at the 100Hz exemplary frame rate) by the slow frame scanner 506.
Implementation choices for the scanners are described above for the scanning display element 210.
The scanners are controlled by the timing generator 510, which is itself controlled by an external frame sync signal 512 shared with the other camera elements 230. The frame scanner 506 is controlled by a frame sync signal 514 derived from the external frame sync signal 512, and the line scanner 504 is controlled by a line sync signal 516.
The radiance sensor 604 senses the radiance of the beam, or, more typically, senses a quantity representative of radiance, such as beam energy (i.e. beam power integrated over time). Implementation choices are described below.
The radiance sampler 606, controlled by the sampling clock 518 from the timing generator 510, samples the radiance-representative value (such as beam energy) from the radiance sensor 604 and converts it into a linear or non-linear (e.g. logarithmic) radiance value 134, which is written to the input view image 602. Implementation choices are described below.
As described earlier, the radiance 134 can be obtained by dividing the sampled beam energy value by the 5D sample size (i.e. the 1D exposure duration, the 2D spatial sample size and the 2D angular sample size).
The nominal maximum sample exposure duration is obtained by dividing the angular sampling period 126 by the angular scan rate of the fast scanner 504. If the angular scan rate is angle-dependent (e.g. because the fast scanner is resonant), then the exposure duration is also angle-dependent.
To improve the signal-to-noise ratio of the captured radiance 134, the effective exposure duration can be increased beyond the nominal maximum exposure duration by using a sensor array, as described below in relation to Figures 23A and 23B.
To ensure proper band-limiting, the radiance sensor 604 nominally has an active spatial extent matching the angular sampling period 126. However, when coupled with the maximum sample exposure duration, this produces blur in the fast scan direction. To avoid this blur, either the exposure duration or the spatial extent of the sensor 604 in the fast scan direction needs to be reduced. The latter approach can be realized by implementing the sensor 604 using a linear array of narrow photodetectors, as described in relation to Figures 23A and 23B.
The scanned input beam 600 can be focused according to an input focus source 608. The input focus source 608 may comprise an array of focus values, each associated with a beam direction, i.e. corresponding to the sampling focus 138 associated with the spectral radiance 128. Alternatively, it may comprise a single focus value, which may differ from one frame to the next (or change at some other rate). The input focus controller 610 retrieves the focus value (or the next focus value, under the control of the sampling clock 518 from the timing generator 510), and generates a signal to control the focus of the input beam.
The input focus modulator 612 modulates the focus of the beam according to the signal from the input focus controller 610. Implementation choices for the input focus modulator 612 are the same as those described above for the output focus modulator 530. If the camera 220 only needs to operate as a fixed-focus or far-field system, then the input focus modulator 612 can impart a fixed focus to the beams, i.e. it can consist of a simple fixed-focus lens.
Radiance sensor
The radiance sensor 604 can be monochromatic, but is more usefully polychromatic. If polychromatic, it can utilize a stacked color sensor [Merrill 05], or an array of sensors with color filters.
The sensor 604 can comprise any suitable photodetector or photodetectors, including photodiodes operating in photoconductive or photovoltaic mode, phototransistors, and photoresistors.
The sensor 604 can incorporate analog storage and exposure control circuitry [Fossum 04].
Radiance sampler
The radiance sampler 606 can comprise any analog-to-digital converter (ADC) with a suitable sampling rate and precision, typically with a pipelined architecture [Levinson 96, Bright 00, Xiaobo 10]. For the exemplary display configuration, the sampling rate is 100Msamples/s and the precision is 12 bits. The sampler 606 can comprise multiple ADCs to convert multiple color channels in parallel, or it can time-multiplex the conversion of multiple color channels through a single ADC. It can also utilize multiple ADCs to support a particular sampling rate.
The sampler 606 may incorporate a programmable-gain amplifier (PGA), to allow the sensed value to be offset and scaled before conversion.
Conversion of the sensed value to the radiance 134 can be performed before or after analog-to-digital conversion.
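A non-linear (e.g. logarithmic) encoding of the sensed value can be sketched as follows (purely illustrative; the 12-bit code width and the value bounds are assumptions, not part of this specification):

```python
import math

def encode_log(value, v_min, v_max, bits=12):
    """Map a sensed radiance-representative value to a logarithmic n-bit code."""
    frac = (math.log(value) - math.log(v_min)) / (math.log(v_max) - math.log(v_min))
    return round(frac * (2 ** bits - 1))

def decode_log(code, v_min, v_max, bits=12):
    """Recover an approximate linear value from the logarithmic code."""
    frac = code / (2 ** bits - 1)
    return math.exp(math.log(v_min) + frac * (math.log(v_max) - math.log(v_min)))
```

A logarithmic encoding of this kind spends its fixed number of codes on relative rather than absolute precision, which suits the wide dynamic range of captured radiance.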
Time-of-flight ranging
The light-field camera 220 is optionally configured to perform time-of-flight (ToF) ranging [Kolb 09]. The camera then includes one or more light emitters for illuminating the scene with ToF-encoded light. The ToF-encoded light is reflected by the scene, and during each sampling period is detected by each camera element 230 and converted to a depth.
The radiance sensor 604 and the radiance sampler 606 can be configured to perform ToF ranging by measuring the phase difference between the encoding of the emitted light and the encoding of the incident light [Kolb 09, Oggier 11].
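The standard phase-to-depth relationship for phase-based ToF can be sketched as follows (illustrative only; the 20MHz modulation frequency in the example is an assumption, not part of this specification):

```python
import math

def tof_depth(phase, mod_freq, c=299_792_458.0):
    """Phase-based ToF: a phase shift of 2*pi corresponds to a round trip of one
    modulation wavelength, so depth = c * phase / (4 * pi * mod_freq)."""
    return c * phase / (4 * math.pi * mod_freq)

def unambiguous_range(mod_freq, c=299_792_458.0):
    """Maximum depth before the measured phase wraps: c / (2 * mod_freq)."""
    return c / (2 * mod_freq)
```

Higher modulation frequencies improve depth resolution at the cost of a shorter unambiguous range.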
When configured to perform ToF ranging, the sampler 606 writes an estimated depth 136 to the input view image 602 each sampling period.
The ToF-encoded light is ideally invisible, e.g. near-infrared (NIR). The sensor 604 can use the same photodetectors used for sensing visible light, or it can include dedicated photodetectors for the ToF-encoded light.
As an alternative to the camera providing one or more ToF-encoding light emitters, each camera element 230, if it is also configured as a display element 210 (see below), can emit its own ToF-encoded light. The beam generator 522 can then include a light emitter for ToF-encoded light, such as an NIR emitter.
If necessary, face detection can be used to disable ToF ranging for any sample (x, y, a, b) that would emit ToF light into an eye.
Scanning two-way light-field display element
Figure 18 shows a block diagram of the scanning two-way light-field display element 310 of the two-way light-field display 300. It combines the functions of the light-field display element 210 and the light-field camera element 230 shown in Figures 15 and 17 respectively.
In the scanning two-way light-field display element 310, the line scanner 504, the frame scanner 506 and the timing generator 510 are shared between the display and camera functions of the element.
A beam splitter 614 is used to separate the output and input light paths. It can be any suitable beam splitter, including a polarizing beam splitter (discussed further below) and a half-silvered (or patterned) mirror.
In the scanning two-way light-field display element 310, display and capture occur simultaneously, except when the field of view 124 may vary significantly between display and capture on the basis of visibility (as discussed later in this specification).
Optical design of the scanning two-way light-field display element
Figure 19A shows a plan view of the optical design of the scanning two-way light-field display element 310. The traced rays show the element operating via its output light path, i.e. producing an output beam 500. Figure 19B shows the corresponding front view.
The height of the two-way element 310 equals the spatial sampling period 120. The width of the two-way element 310 is approximately twice the spatial sampling period 120.
Although the optical design is shown with specific component choices, it should be noted that it can be realized using other equivalent components, such as those described in previous sections. This includes using reflective components in place of transmissive components, and vice versa.
The design goal for the output light path is to produce an output beam 500 such that, for a given direction (a, b), it correctly reconstructs the corresponding 4D slice of the (band-limited) continuous light field being displayed.
The laser 700 is used to produce a collimated output beam 500 with a width as close as possible to the spatial sampling period 120. The beam may be expanded and/or shaped (by additional components, not shown) after being generated by the laser 700. The laser 700 realizes the beam generator 522 described in previous sections.
The angular reconstruction filter 702 is used to induce a spread equal to the angular sampling period 126 in the output beam. The angular reconstruction filter 702 is discussed in more detail below in relation to Figure 20.
The zoom lens 704 is used to control the focus of the output beam. It realizes the output focus modulator 530.
The beam splitter 706 is used to split the output and input light paths. It realizes the beam splitter 614.
A fixed mirror 708 deflects the output beam onto the biaxial scanning mirror 710 described in the next section. The scanning mirror 710 scans the output beam 500 across the field of view 124. It simultaneously realizes the line scanner 504 and the frame scanner 506.
Alternatively, the biaxial scan function can be realized using two separate uniaxial scanning mirrors. In this configuration, the fixed mirror 708 is replaced by a fast uniaxial scanning mirror (realizing the line scanner 504), and the biaxial scanning mirror 710 is replaced by a relatively slow uniaxial scanning mirror (realizing the frame scanner 506).
Figure 19A shows the biaxial scanning mirror 710, and hence the output beam 500, at three different angles, corresponding to the center and the two extremes of the field of view 124.
The angular reconstruction filter 702 can be realized using a (possibly elliptical) diffuser [Qi 05], or, as shown in Figure 20, using an array of lenslets 730. The purpose of the angular reconstruction filter is to induce a spread equal to the angular sampling period 126 in the output beam, and the use of lenslets 730 allows the spread angle to be controlled precisely. Each lenslet 730 acts on part of the input beam 732 to produce a focused output beamlet 734. Because the input beam 732 is collimated, the induced spread angle is the angle subtended by the diameter of the lenslet 730 at its focal point. To decouple the induced spread from the beam focus induced by the downstream zoom lens 704, the focal points of the lenslets 730 are ideally placed at the (at least approximate) first principal plane of the zoom lens 704.
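The geometric relationship between lenslet diameter, focal length and induced spread angle can be sketched as follows (illustrative only; the function names and example dimensions are assumptions, not part of this specification):

```python
import math

def lenslet_spread(diameter, focal_length):
    """Spread angle induced by a lenslet acting on a collimated beam:
    the angle subtended by the lenslet diameter at its focal point."""
    return 2 * math.atan(diameter / (2 * focal_length))

def lenslet_focal_length(diameter, spread):
    """Focal length needed for a lenslet of the given diameter to induce the
    desired spread angle (e.g. the angular sampling period 126)."""
    return diameter / (2 * math.tan(spread / 2))
```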
The greater the number of lenslets 730, the more uniform the overall output beam formed as the sum of the individual beamlets 734. The smaller the diameter of each lenslet 730, the shorter the focal length it needs to induce the same spread angle, and hence the smaller the required gap between the angular reconstruction filter 702 and the zoom lens 704. In practice, the array of lenslets 730 can be molded into the surface of the zoom lens 704.
If the output pulse duration matches the angular sampling period 126 (and the scan in the fast scan direction is continuous rather than discrete), then the spread angle of the output beam is already correct in the fast scan direction, and spread only needs to be induced in the slow scan direction. In this case, each lenslet 730 can be a cylindrical lens oriented perpendicular to the slow scan direction.
Figure 21A shows a plan view of the optical design of the two-way light-field display element 310. The traced rays show the element operating via its input light path, i.e. sampling an input beam 600. Figure 21B shows the corresponding front view.
The design goal for the input light path is to sample the input beam 600 such that, for a given direction (a, b), it correctly filters the corresponding 4D slice of the continuous light field.
The biaxial scanning mirror 710 (or the pair of uniaxial scanning mirrors) scans the input beam 600 across the field of view 124, as described above for the output light path.
The fixed mirror 708 and the beam splitter 706 deflect the incident beam onto a fixed mirror 712, which deflects it through the zoom lens 714.
The zoom lens 714 is used to control the focus of the input beam. It realizes the input focus modulator 612.
The zoom lens 714 is followed by a fixed-focus lens 716, which focuses the (nominally collimated) input beam through an aperture 718 onto a photodetector 720. The photodetector 720 realizes the radiance sensor 604.
For color detection, the photodetector 720 can consist of a stack of photodetectors [Merrill 05], or an array of photodetectors with color filters.
The laser 700 can produce a substantially polarized beam (e.g. because it incorporates a polarizing Brewster window as its output mirror), in which case it is particularly efficient to make the beam splitter 706 polarizing, i.e. to separate the output and input beams on the basis of polarization [von Gunten 97].
In addition, if the zoom lenses 704 and 714 are birefringent (e.g. they are liquid-crystal lenses), then they each only need to act on the polarization of their respective beam, and are therefore simplified. Even if the laser 700 does not produce a substantially polarized beam, it can incorporate, or be followed by, a polarizer (not shown) for this purpose.
Biaxial scanning mirror
A uniaxial microelectromechanical (MEMS) scanner typically consists of a mirror hinged to a frame by a pair of resilient torsional hinges, and rotated about the hinges via electrostatic, magnetic or capacitive coupling between the mirror and a driver. In a biaxial MEMS scanner [Neukermans 97], the inner frame holding the mirror is coupled to a fixed outer frame via a further pair of hinges perpendicular to the mirror hinges, allowing the inner frame to be driven to rotate perpendicular to the mirror. The mirror is typically driven at resonance while the inner frame is not.
The inner and outer frames surround the mirror, so the area of the mirror in a typical biaxial MEMS scanner is a small fraction of the footprint of the device. This makes such devices suboptimal for light-field displays, where the relative aperture of the scanner is important. This can be improved by elevating the mirror above the scanning mechanism, as is done in digital micromirror devices (DMDs) [Hornbeck 96, DiCarlo 06].
Figure 22A shows a plan view of an exemplary biaxial MEMS scanner 710, which has an elevated mirror but is otherwise of conventional design [Neukermans 97, Gerhard 00, Bernstein 02, Yan 06]. A central platform 740 is connected by torsional hinges 742 to an inner frame 744. The inner frame 744 is connected by perpendicularly-arranged torsional hinges 746 to a fixed outer frame 748. The central platform 740 is driven to rotate about the hinges 742, and the inner frame 744 is driven to rotate in the perpendicular direction about the hinges 746. A post 750 on the platform 740 holds the mirror 752 (shown in outline) elevated above the scanning mechanism.
Figure 22B shows a cross-sectional front view of the biaxial MEMS scanner 710, showing the mirror 752 elevated above the scanning mechanism by the post 750. The height of the mirror 752 above the scanning mechanism is chosen to accommodate the maximum scan angle.
Figure 22B does not show the drive mechanism, which can be of any conventional design as described above. For example, the central platform 740 can incorporate a coil conducting an alternating current, thus producing a time-varying magnetic field which interacts with the magnetic field of a permanent magnet below the platform (not shown) to produce the required time-varying torque. Likewise, the inner frame 744 can incorporate a coil whose field interacts with that of a permanent magnet.
For the purposes of the present invention, to support the exemplary line rate, the central platform 740 is driven at resonance [Turner 05] and realizes the fast line scanner 504, while the inner frame 744 is driven directly and realizes the slow frame scanner 506.
As previously discussed, control logic associated with the scanner 710 can monitor (or otherwise determine) the angular position of the central platform 740 in the resonant scan direction [Melville 97, Champion 12], to assist the timing generator 510 in generating an accurate sampling clock 518.
Extending the exposure duration using a photodetector array
The nominal exposure duration of a single light-field sample is limited by the angular sampling period 126 during scanning, and may therefore be very short. However, a linear photodetector array can be deployed parallel to the fast scan direction, in place of the single photodetector 720, to extend the exposure duration.
As described above, Figure 21A shows the scanning mirror 710 scanning a moving input beam 600 across the field of view 124. Equivalently, in a simplified configuration omitting unnecessary optical components, Figure 23A shows the scanning mirror 710 scanning a stationary beam 760, corresponding to a fixed point source 224, across the photodetector, which is here replaced by a linear array 762 of M photodetectors.
If samples are obtained from the linear photodetector array 762 at exactly the rate at which the stationary beam 760 is scanned across it, then the M temporally-successive samples from the M photodetectors can be summed to obtain a sampled value with an effective exposure duration M times longer than the nominal exposure duration.
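The summation of M successive samples via a shift register can be sketched as follows (an illustrative simulation of an idealized, noiseless array; the model of which detector sees which scene sample at which clock is an assumption for illustration):

```python
def scan_and_accumulate(scene, M):
    """Simulate an M-detector linear array with an M-stage analog shift
    register: the register shifts up once per sampling clock, each detector
    adds its reading into the stage passing it, and each value read out of the
    top stage has integrated M clock periods of the same scene sample.
    Detector i is assumed to see scene sample (t - i) at clock t, as the beam
    sweeps along the array."""
    stages = [0.0] * M   # analog shift register stages
    out = []             # digitized beam-energy samples
    for t in range(len(scene) + M):
        out.append(stages[-1])            # read out the top stage
        stages = [0.0] + stages[:-1]      # shift up; zero enters at the bottom
        for i in range(M):
            s = t - i                     # scene sample imaged onto detector i
            if 0 <= s < len(scene):
                stages[i] += scene[s]
    return out

# Each scene sample emerges summed M times: effective exposure is M x nominal.
result = scan_and_accumulate([1.0, 2.0], 3)   # -> [..., 3.0, 6.0]
```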
As shown in Figure 23A, the linear photodetector array 762 covers M samples of the field of view, representing M successive periods of the sampling clock 518. At a given time t, these samples correspond to times ranging from t minus M/2 to t plus M/2, and at any given time M successive samples are being accumulated in parallel.
To avoid vignetting when using the linear photodetector array 762, the field of view 124 must be reduced by M times the angular sampling period 126.
Although sample readout and summation could be performed using digital logic, the relatively high sampling clock rate 518 (e.g. 100MHz for the exemplary configuration) motivates an analog design.
Accordingly, Figure 23B shows a photodetector array 762 comprising an array 764 of analog photodetectors coupled with an analog shift register 768. During each period of the input sampling clock 518, the shift register 768 is shifted up, and the value from each photodetector 766 is added to the corresponding shift register stage 770. The value shifted into the first (bottom) shift register stage 770 is zero. The value shifted out of the last (top) shift register stage 770 is converted to a digital beam-energy sample value 774 via an analog-to-digital converter (ADC) 772. This is in turn converted to a radiance 134 as described earlier. The ADC 772 can be any suitable ADC as previously described.
Although the analog photodetector array 764 and the analog shift register 768 can be distinct, in some practical embodiments they may be tightly integrated. For example, if a bucket-brigade device (BBD) [Sangster 77, Patel 78] is used as the analog shift register 768, then the photodiodes 766 can be integrated directly into its storage nodes 770. And if a linear charge-coupled device (CCD) [Tompsett 78] is used as the analog photodetector array 764, then it can intrinsically also operate as the analog shift register 768.
The analog photodetector array 764 can be implemented separately from the analog shift register 768, e.g. as a standard array of active pixel sensors (APS) [Fossum 04], with the analog shift register implemented, for example, as a standard bucket-brigade device (BBD) augmented with a third clock signal to control the transfer of charge from the photodetector array 764.
The effective exposure duration can be further increased by accumulating samples in the slow scan direction. This is realized by deploying an array of M' linear photodetector arrays 762, to capture samples of M' adjacent lines simultaneously. During capture, M' sample values 774 are then produced during each period of the sampling clock 518, rather than just one, and each such sample 774 (once converted to a radiance) is added to the corresponding radiance 134 in the input view image 602. The total radiance 134 is scaled to the longer exposure duration by dividing it by M'.
For the exemplary display configuration, values of M = M' = 100 (i.e. each one tenth of the field of view 124) yield an exposure duration of 100us.
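The stated figure follows from the sampling clock and the two accumulation factors (illustrative arithmetic; the parameter names are assumptions, not part of this specification):

```python
def effective_exposure(sample_clock_hz, m_fast, m_slow):
    """Effective exposure = one sampling-clock period (the nominal per-sample
    exposure) multiplied by the fast- and slow-direction accumulation factors
    M and M'."""
    return m_fast * m_slow / sample_clock_hz

# Exemplary configuration: 100 MHz sampling clock, M = M' = 100 -> 100 us.
exposure = effective_exposure(100e6, 100, 100)
```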
In addition to increasing the effective exposure duration, the linear photodetector array 762 can also be used to capture sharper samples, by providing K narrower photodetectors 766 (and shift register stages 770) per angular sampling period 126 and clocking the entire device at K times the sampling clock 518. An additional analog storage node, inserted between the last shift register stage 770 and the ADC 772, is then used to accumulate K successive analog samples, with the combined value digitized and read out according to the sampling clock 518.
Just as the radiance sensor 604 (and the photodetector 720) can be configured for ToF ranging, so can the photodetector array 762. For example, if ToF ranging is based on phase measurement [Kolb 09, Oggier 11], then the photodetector array 762 can be configured to accumulate phase samples in parallel.
Arrays of two-way light-field display elements
Figure 24 shows a simplified block diagram of an array of two-way light-field display elements 310 operating in display mode. The 2D scanner 508 represents the 1D line scanner 504 and the 1D frame scanner 506.
Figure 25A shows a plan view of the optical design of one row of a two-way light-field display 300 operating in display mode. The display consists of an array of two-way light-field display elements 310, each producing an output beam 500. The array is shown at a single instant in time, with each beam pointing in the same direction. Each beam has the same slightly diverging focus.
Figure 25B shows the corresponding front view of the display 300. Successive display elements 310 are rotated 180 degrees to improve the uniformity of the output.
Figure 25C shows a front view with the elements rotated 90 degrees.
For clarity, Figures 25A, 25B and 25C show only a small number of two-way display elements 310. In practice the two-way light-field display 300 can contain any number of elements 310, e.g. numbering in the thousands or millions. For the exemplary configuration, it contains 125,000 display elements.
Figure 26 shows a plan view of one row of the display 300, rotated as shown in Figure 25B, with each element 310 therefore producing, at a different time during its scan period, a beam 500 corresponding to a single point source behind the display. The gaps in the output are due to the display elements 310 being double the width of the spatial sampling period 120.
Figure 27 shows a plan view of one row of the display 300, rotated as shown in Figure 25C, with each element 310 producing, at a different time during its scan period, a beam 500 corresponding to a single point source behind the display. The gaps in the output shown in Figure 26 are now substantially eliminated, because the display elements 310 are rotated so that their widths match the spatial sampling period 120.
Figure 28 shows a plan view of one row of the display 300, rotated as shown in Figure 25B, with each element 310 producing, at a different time during its scan period, a beam 500 corresponding to a point source 204 in front of the display. The gaps in the output are again due to the display elements 310 being double the width of the spatial sampling period 120.
Figure 29 shows a plan view of one row of the display 300, rotated as shown in Figure 25C, with each element 310 therefore producing, at a different time during its scan period, a beam 500 corresponding to a point source 204 in front of the display. The gaps in the output shown in Figure 28 are now substantially eliminated, because the display elements 310 are rotated so that their widths match the spatial sampling period 120.
Figure 30 shows a simplified block diagram of an array of two-way light-field display elements 310 operating in camera mode.
Figure 31A shows a plan view of the optical design of one row of the two-way light-field display 300 operating in camera mode. The display consists of an array of two-way light-field display elements 310, each capturing an input beam 600. The array is shown at a single instant in time, with each beam pointing in the same direction. Each beam has the same slightly converging focus.
Figure 31B shows the corresponding front view of the display 300. Successive display elements 310 are rotated 180 degrees to improve the uniformity of the input.
Figure 31C shows a front view with the elements rotated 90 degrees.
Figure 32 shows a plan view of one row of the display 300, rotated as shown in Figure 31B, with each element 310 therefore capturing, at a different time during its scan period, a beam 600 corresponding to a point source 224 in front of the display. The gaps in the input are due to the display elements 310 being double the width of the spatial sampling period 120.
Figure 33 shows a plan view of one row of the display 300, rotated as shown in Figure 31C, with each element 310 therefore capturing, at a different time during its scan period, a beam 600 corresponding to a point source 224 in front of the display. The gaps in the input shown in Figure 32 are now substantially eliminated, because the display elements 310 are rotated so that their widths match the spatial sampling period 120.
Vibrating the display
As described in relation to Figures 26, 28 and 32, the gaps in the output and input are due to the display elements 310 being double the width of the spatial sampling period 120. This can be remedied by vibrating the array of two-way display elements 310 between two positions separated by the spatial sampling period 120, and displaying and/or capturing half of the light-field frame 116 at each position.
More generally, rather than displaying (or capturing) one half of a frame at each of two positions, one Nth of a frame can be displayed (or captured) at each of N positions in one or two spatial dimensions.
In general, the field of view 124 of a display element 310 is limited by the ratio of the beam width to the element width. Reducing the beam width relative to the element width allows a larger field of view 124, but requires a higher value of N.
Figure 34A shows a cross-sectional side view of a two-way light-field display 300 adapted to vertically vibrate its array of two-way display elements 310.
The display 300 comprises a display panel 800 movably coupled to a chassis 802. The display panel 800 contains the array of two-way display elements 310. A frame 804 is attached to the chassis 802; it surrounds the panel 800 and holds a transparent cover glass 806 which protects the panel 800.
The display panel 800 is movably coupled to the chassis 802 by set springs 808, each of which is attached to a bracket 810 on the back of the panel 800 and a matching bracket 812 on the chassis 802.
The display panel 800 is moved vertically by an actuator 814 driving a rod 816. The rod 816 is attached to a bracket 818 on the back of the panel 800, and the actuator is attached to a matching bracket 820 on the chassis 802.
The actuator 814 can be any actuator suitable for moving the weight of the panel 800 a desired distance (e.g. 2mm) at a desired rate (e.g. 100Hz). For example, it can consist of a voice coil acting on a magnet embedded in the rod 816 [Petersen 82, Hirabayashi 95].
Figure 34B shows the same cross-sectional side view of the two-way light-field display 300, but containing two vertically-adjacent display panels 800 rather than just one.
Figures 34C and 34D show cross-sectional rear views corresponding to Figures 34A and 34B respectively. Figure 34D shows the display 300 equipped with four adjacent display panels 800, two in each dimension. This illustrates how a larger display 300 can be constructed in a modular fashion from multiple smaller panels 800.
The vibrating display 300 is designed to swing its panel(s) 800, within one frame period (i.e. one temporal sampling period 114), between two vertical positions separated by one spatial sampling period 120.
In a kind of operator scheme, actuator 814 is used directly to determine the vertical shift of panel 800.Panel 800 is subsequent
Express delivery is moved to another from from an extreme vertical shift position as far as possible, and shown once panel 800 is in place (or catch
Obtain) under field.Display dutycycle is then the function of the speed of actuator.Actuator is faster, and dutycycle is higher.Pass through in Figure 35 A
Vertical shift time history plot has released this pattern.
In another mode of operation, the spring constant of the springs 808 is chosen so that, together with the panel 800, they form an oscillator with the desired frequency. The actuator 814 is then used to drive the oscillator at that frequency. Compared with direct drive, this requires a less powerful actuator and consumes less power during operation.
The drawback of harmonic oscillation is that the panel 800 follows the sinusoidal path shown in Figure 35B, and is therefore only momentarily stationary at the extreme vertical displacement positions. A trade-off is therefore required between duty cycle and vertical motion blur: the lower the duty cycle, the lower the blur, although, advantageously, the blur decreases faster than the duty cycle because of the sinusoid. For example, Figure 35B shows a 67% duty cycle corresponding to 50% of the vertical motion, that is, a motion blur diameter of 25%.
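The duty-cycle figures above can be checked with a short calculation (an illustrative Python sketch, not part of the embodiment; it assumes each field's display window is centred on an extreme of the sinusoidal motion):

```python
import math

def motion_blur_fraction(duty_cycle):
    """Fraction of the full peak-to-peak travel 2A swept while a field is
    displayed, for a panel following x(t) = A * cos(2*pi*t/T).
    Each field occupies a window of width duty_cycle * (T/2), centred on
    an extreme of the motion (assumption)."""
    phi = duty_cycle * math.pi / 2       # phase half-width of the window
    # Travel from x = A down to x = A*cos(phi), relative to the full 2A.
    return (1 - math.cos(phi)) / 2

# A 67% duty cycle smears roughly 25% of the full travel (i.e. 50% of the
# amplitude), consistent with the Figure 35B example.
print(round(motion_blur_fraction(0.67), 2))  # 0.25
```

Because the cosine is flat near its extremes, the blur falls off faster than the duty cycle, which is the advantage noted above.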
If the oscillation is harmonic and the display elements 310 are scanning, then the fast scan direction is ideally aligned with the axis of oscillation, to minimize interaction between the oscillation and the scanning.
The frequency of the harmonic oscillator is proportional to the square root of the ratio of the spring constant of the springs 808 to the mass of the panel 800. Because both the spring constant and the mass are additive, the frequency is independent of the number of panels 800 making up the display 300.
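The independence of the oscillation frequency from the panel count can be sketched as follows (illustrative Python; the spring constant and mass values are invented for the example):

```python
import math

def oscillator_frequency(k, m):
    # Natural frequency of a mass-spring oscillator: f = sqrt(k/m) / (2*pi).
    return math.sqrt(k / m) / (2 * math.pi)

# One panel versus four ganged panels: spring constants and masses both
# add, so the ratio k/m -- and hence the frequency -- is unchanged.
f1 = oscillator_frequency(k=3.95e5, m=1.0)          # single panel, ~100Hz
f4 = oscillator_frequency(k=4 * 3.95e5, m=4 * 1.0)  # four panels
print(abs(f1 - f4) < 1e-9)  # True
```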
As an alternative to using oscillation to merge the two fields of a light field produced by a single display device, the light fields produced by two displays can be combined via a beam combiner (e.g. a half-silvered glass plate).
Real-time light field capture and display
In one important use-case, illustrated in Figures 11 and 12 and described above, the light field display 200 receives and displays, in real time, a light field from a (possibly remote) light-field camera 220.
As described above, how capture focus is best managed depends on the available focus modulation rate.
Figure 36 shows an activity diagram for the display controller 342 and the camera controller 340, collaboratively controlling focus based on the position of the viewer (and optionally the viewer's gaze direction).
The display controller 342 periodically detects the face and eyes of the viewer (or of each of several viewers) (at step 900), optionally estimates the gaze direction of the viewer (at step 902), and transmits the eye positions (and optionally the gaze direction) to the camera controller 340 (at step 904).
The camera controller 340 receives the eye positions (and optionally the gaze direction), and autofocuses accordingly (at step 906). Autofocus may rely on explicit focus setting based on depth obtained by ranging (as described above), or on traditional autofocus techniques such as phase detection between images from adjacent camera elements 230, or on a combination of both, with autofocus in the desired direction adaptively adjusting focus to maximize image sharpness.
If the camera controller 340 receives only eye positions, it can estimate a pair of likely gaze directions for each camera element 230 based on those positions. This realizes the position-based viewer-specific focus mode described earlier in relation to Figure 10B. If the camera controller 340 receives an estimate of the gaze direction, it can use the estimate directly. This realizes the gaze-directed viewer-specific focus mode described earlier in relation to Figures 10C and 10D.
If the camera supports per-sample autofocus, this is best based on the per-sample depth 136, and neither eye positions nor estimated gaze directions are needed. If the camera supports per-frame (or per-subframe) focus modulation, autofocus can be based on the estimated or inferred gaze direction.
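The fallback between these focus modes might be summarized as follows (an illustrative Python sketch; the function, its name, and the priority ordering are an interpretation of the text, not part of the specification):

```python
def choose_focus(per_sample_depths=None, gaze_direction=None, eye_positions=None):
    # Prefer per-sample autofocus from depth 136 when the camera supports it;
    # otherwise fall back to per-frame focus from the estimated gaze, or from
    # gaze directions inferred per camera element from the eye positions.
    if per_sample_depths is not None:
        return ("per-sample", per_sample_depths)
    if gaze_direction is not None:
        return ("per-frame-gaze", gaze_direction)
    if eye_positions is not None:
        return ("per-frame-inferred", eye_positions)
    return ("fixed", None)

print(choose_focus(per_sample_depths=[0.5, 1.2])[0])  # per-sample
```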
As discussed previously, if eye positions are used to infer the likely gaze directions of each camera element 230, then a separate display channel (and hence capture channel) is ideally used for each eye.
In general, because autofocus may span multiple frames, when there are multiple capture channels (e.g. corresponding to multiple viewers or eyes), an autofocus context must be retained across frames for each channel.
Figure 37 shows an activity diagram for the display controller 342 and the camera controller 340, collaboratively controlling focus based on the fixation point (or fixation depth) of the viewer. This again realizes the gaze-directed viewer-specific focus mode described earlier in relation to Figures 10C and 10D.
The display controller 342 periodically detects the face and eyes of the viewer (or of each of several viewers) (at step 900), estimates the fixation point (or depth) of the viewer (at step 908), and transmits the eye positions and the fixation point (or depth) to the camera controller 340 (at step 910). The display controller 342 may estimate the fixation point (or depth) based on the viewer's gaze direction in conjunction with the sample depths 136 in the incident light field video 110, or based on the vergence of the viewer's eyes, or based on a combination of both.
Figure 38 shows an activity diagram for the camera controller 340 and the display controller 342, collaboratively capturing and displaying a sequence of light field frames 116 in real time.
The camera controller 340 periodically captures a light field frame (at step 920) and transmits it to the display controller 342 (at step 922). The display controller 342 receives and optionally resamples the light field frame (at step 924), and finally displays it (at step 926). Resampling is discussed further below.
The resampling step 924 optionally uses a locally captured light field frame to virtually illuminate the scene represented by the remotely captured light field frame. If the remotely captured light field frame 116 includes depth 136, this is carried out directly by ray tracing (discussed below).
Display of previously captured light field video
In another important use-case, the two-way light field display 300 displays previously captured light field video.
Figure 39 shows an activity diagram for the bi-directional display controller 322 displaying light field video 110. Two parallel activities are shown: a face detection activity on the left, and a display activity on the right.
The face detection activity periodically detects the face and eyes of the viewer (or of each of several viewers) (at step 900), stores the eye positions in data store 930, estimates the fixation point (or depth) of the viewer (at step 908), and stores the fixation point (or depth) in data store 932. The controller estimates the fixation point (or depth) based on the viewer's gaze direction in conjunction with the sample depths 136 of the source light field video 110 (held in data store 934), or based on the vergence of the viewer's eyes, or based on a combination of both.
The display activity periodically displays (at step 926) the next light field frame 116 of the light field video 110. Before display it optionally resamples the light field (at step 936), in particular to match the focus of the estimated fixation plane. This again realizes the gaze-directed viewer-specific focus mode described in relation to Figures 10C and 10D.
The display activity optionally captures a light field frame 116 (at step 920), allowing the subsequent resampling step (at step 936) to use the captured light field frame to virtually illuminate the scene represented by the light field video. If the light field video 110 includes depth 136, this is carried out directly by ray tracing. It allows actual ambient illumination incident on the display 300 to illuminate the scene in the video, and allows real objects visible to the two-way display (including the viewers) to be reflected by virtual objects in the virtual scene.
The two parallel activities are asynchronous, and in general have different periods. For example, the face detection activity may run at 10Hz while the display activity runs at 100Hz. The two activities communicate through the shared data stores.
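The asynchronous activity pair above can be sketched with two threads sharing a data store (a minimal Python illustration; the placeholder strings and periods stand in for the real detection and display work):

```python
import threading
import time

store = {"eye_positions": None, "fixation": None}  # shared data stores 930/932
lock = threading.Lock()
stop = threading.Event()

def face_detection_activity(period=0.1):           # ~10Hz
    while not stop.is_set():
        with lock:                                 # steps 900 and 908
            store["eye_positions"] = "eye positions"
            store["fixation"] = "fixation estimate"
        time.sleep(period)

def display_activity(period=0.01):                 # ~100Hz
    while not stop.is_set():
        with lock:
            fixation = store["fixation"]           # latest estimate, if any
        # ... resample (step 936) and display (step 926) a frame here ...
        time.sleep(period)

threads = [threading.Thread(target=face_detection_activity),
           threading.Thread(target=display_activity)]
for t in threads:
    t.start()
time.sleep(0.25)
stop.set()
for t in threads:
    t.join()
print(store["fixation"])  # fixation estimate
```

Because the activities communicate only through the shared store, neither period constrains the other, matching the 10Hz/100Hz example above.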
Display of light field video based on a 3D animation model
In another important use-case, the two-way light field display 300 generates and displays light field video based on a 3D animation model.
Figure 40 shows an activity diagram for the bi-directional display controller 322 generating and displaying light field video 110 based on a 3D animation model. Two parallel activities are shown: a face detection activity on the left, and a display activity on the right.
The face detection activity periodically detects the face and eyes of the viewer (or of each of several viewers) (at step 900), stores the eye positions in data store 930, estimates the fixation point (or depth) of the viewer (at step 908), and stores the fixation point (or depth) in data store 932. The controller estimates the fixation point (or depth) based on the viewer's gaze direction in conjunction with depth information determined from the 3D animation model (held in data store 938), or based on the vergence of the viewer's eyes, or based on a combination of both.
The display activity periodically renders (at step 940), according to the 3D animation model, and displays (at step 926) the next light field frame 116. During rendering it matches the focus of the estimated fixation plane. This again realizes the gaze-directed viewer-specific focus mode described in relation to Figures 10C and 10D.
Rendering the light field frame 116 is carried out directly by ray tracing [Levoy 96, Levoy 00]. As shown in Figure 3B, each spectral radiance 128 can be determined by generating a set of rays that sample the beam 166, tracing each ray from the corresponding (now virtual) photosensor 152, and determining its interaction with the 3D model [Glassner 89]. The rays are preferably chosen to sample the 4D sample beam 166 stochastically, to avoid the low-frequency artifacts associated with periodic sampling. The ray density can also be adapted to match scene complexity, to reduce aliasing.
The two parallel activities are asynchronous, and in general have different periods. For example, the face detection activity may run at 10Hz while the display activity runs at 100Hz. The two activities communicate through the shared data stores. Although the rendering step 940 is shown as being performed by the bi-directional display controller 322, it can also be performed by a separate computing device in communication with the bi-directional display controller 322.
The display activity optionally captures (at step 920) a light field frame 116, allowing the subsequent rendering step (at step 940) to use the captured light field frame to virtually illuminate the scene represented by the 3D animation model. This is again carried out directly during ray tracing. It allows actual ambient illumination incident on the display 300 to light the virtual scene, and allows real objects visible to the two-way display (including the viewers) to be reflected by virtual objects in the virtual scene.
The gaze of the viewer can be reflected at each virtual surface it strikes to obtain the actual fixation point 262 (as shown in Figure 10C). The fixation point may be virtual or real, i.e. behind or in front of the display respectively. If the fixation point is virtual, its depth is determined by tracing the ray back to the element 310 via any further reflections. If the fixation point is virtual, a diverging beam is captured; if the fixation point is real, a converging beam is captured. This allows the viewer to fixate on a real object via its reflection in a virtual object.
In addition to including light field video 110 captured by the two-way display 300 itself, the 3D animation model can include light field video captured, or streamed live, from other sources. This includes light field video 110 from another two-way light field display 300 mounted back-to-back with this one, which enables virtual objects to be superimposed on the real objects visible through the effectively transparent (and reflective) back of the two-way display 300.
Distribution of functions
The functions of the display controller 342 can be performed by a dedicated controller associated with or built into the display 200, or by one or more separate devices in communication with the display 200.
Similarly, the functions of the camera controller 340 can be performed by a dedicated controller associated with or built into the camera 220, or by one or more separate devices in communication with the camera 220.
Light field resampling
Before display, it may be necessary to resample the light field 110. Resampling is necessary if the temporal sampling period 114, spatial sampling period 120 or angular sampling period 126 of the target display 200 differs from the corresponding sampling period of the source light field 110; if their spectral sampling bases 132 differ; if their respective sampling foci 138 differ; or if their respective light field boundaries 102 differ, e.g. one is rotated or translated relative to the other, or they have different curved shapes.
Translation can include translation in the z direction, e.g. so that virtual objects appear in front of the display.
In addition to spectral resampling, spectral remapping can be used to map non-visible wavelengths (such as ultraviolet and near infrared) to visible wavelengths.
If the captured (or synthetic) light field 110 being displayed matches the characteristics of the target light field display 200, no resampling is needed. For example, when a pair of identical two-way displays 300 are used together, as shown in Figure 11, with each display showing the light field 110 captured by the other, no resampling is needed by default. However, resampling to translate the light field boundary of the light field video 110, to compensate for the spatial separation of a pair of back-to-back displays 300, can be used to make the region between the two displays effectively invisible.
Light field resampling involves generating a resampled output light field video 110 from an input light field video 110. If the temporal sampling regime is unchanged, it involves generating a resampled output light field frame 116, i.e. a set of output light field viewpoint images 122, from an input light field frame 116, where each output light field viewpoint image corresponds to a position (x, y) on the spatial sampling grid of the output light field frame 116. One of the most common uses of a light field is to produce novel 2D views [Levoy 96, Levoy 00, Isaksen 00, Ng 05a]. Resampling a light field is equivalent to generating a set of novel 2D views.
As shown in Figure 3B, each spectral radiance 128 has a corresponding (virtual) photosensor 152 and sample beam 166. Computing a resampled output spectral radiance 128 involves identifying all sample beams 166 associated with the input light field frame 116 that impinge on the photosensor 152 corresponding to the output spectral radiance, and computing a weighted sum of the corresponding input spectral radiances 128. Each weight is chosen to be proportional to the overlap between the beam and the photosensor 152.
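The weighted sum just described can be sketched as follows (illustrative Python; normalizing the weights by the total overlap, so the result is an overlap-weighted average, is an assumption about how the proportional weights are applied):

```python
def resample_radiance(input_beams):
    """Compute one output spectral radiance 128 from the input beams 166
    impinging on the corresponding photosensor 152.
    input_beams: list of (input_radiance, overlap_with_sensor) pairs.
    Weights are proportional to the overlap, normalized here so the
    result is an overlap-weighted average (assumption)."""
    total = sum(overlap for _, overlap in input_beams)
    if total == 0.0:
        return 0.0  # no input beam strikes this photosensor
    return sum(r * overlap for r, overlap in input_beams) / total

# Two input beams covering 25% and 75% of the output photosensor.
print(resample_radiance([(10.0, 0.25), (20.0, 0.75)]))  # 17.5
```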
Additional display modes
The primary display mode of the light field display 200 is to reconstruct a continuous light field from a discrete light field 110 representing a scene containing objects at arbitrary depths.
In addition to this primary display mode, it is also useful to support a display mode in which the display 200 emulates a conventional 2D display. Given a 2D image, this can be achieved in two ways. In the first approach, the 2D source image is simply embedded at a convenient virtual location in 3D, and the corresponding discrete light field is rendered and displayed. In this case the 2D image is constrained to lie in front of or behind the display 200, limited by the minimum (negative or positive) focal length of the display elements 210 and by the field of view 124. The sample count of the 2D source image is then limited by the angular sample count of the display 200.
In the second approach, the entire light field viewpoint image 122 of each display element 210 is set to a constant value, equal to the value of the spatially corresponding pixel in the 2D source image, and the display element focus is set to its minimum (negative or positive) value. The sample count of the 2D source image is then limited by the spatial sample count of the display 200.
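The second approach can be sketched as follows (a minimal Python illustration; the function name, the dictionary keyed by element position, and the square angular grid are assumptions made for the example):

```python
def make_2d_mode_viewpoint_images(image_2d, angular_samples):
    """Second 2D display method: each display element 210 gets a constant
    viewpoint image equal to the spatially corresponding source pixel.
    (Setting each element's focus to its minimum is a separate step.)"""
    return {(x, y): [[pix] * angular_samples for _ in range(angular_samples)]
            for y, row in enumerate(image_2d)
            for x, pix in enumerate(row)}

# A 1x2-pixel source image mapped onto elements with a 2x2 angular grid.
vps = make_2d_mode_viewpoint_images([[0.2, 0.8]], angular_samples=2)
print(vps[(1, 0)])  # [[0.8, 0.8], [0.8, 0.8]]
```

Because every angular sample of an element carries the same value, the displayed resolution is set by the element (spatial) grid, as stated above.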
It is also useful to support a display mode in which the scene lies at infinity. In this case the display 200 emits collimated output, the viewpoint image 122 displayed by each display element 210 is identical, and the output focus is set to infinity. The required sample count of the collimated source image equals the angular sample count of the display 200.
The collimated source image can be captured using the light-field camera 220 by focusing its camera elements 230 at infinity and either selecting a single viewpoint image 122 as the collimated image or, for a superior image, averaging several viewpoint images 122 from multiple camera elements 230 (in the limiting case, from all camera elements 230). The averaged image is superior because it has a better signal-to-noise ratio and because it better suppresses scene content not located at infinity. This averaging approach represents a specific instance of the more general synthetic aperture approach.
Synthetic aperture
During capture, the light field viewpoint images 122 captured by any number of adjacent camera elements 230 can be averaged to simulate the effect of a larger camera aperture [Wilburn 05]. In this process, the spectral radiances 128 corresponding to the same virtual point source 224 (as shown in Figure 8B) are averaged. This may require resampling the viewpoint images to ensure alignment with the 4D sampling grid of the combined viewpoint image.
Using a synthetic aperture results in a larger effective exposure, and hence an improved signal-to-noise ratio, but a shallower depth of field.
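The averaging at the heart of the synthetic aperture can be sketched as follows (a minimal Python illustration, assuming the viewpoint images have already been resampled onto a common 4D grid):

```python
def synthetic_aperture(viewpoint_images):
    """Average spectral radiances at matching positions across the
    viewpoint images 122 of adjacent camera elements 230, simulating
    a larger camera aperture. Images are nested lists of radiances,
    assumed pre-aligned on a common sampling grid."""
    n = len(viewpoint_images)
    h = len(viewpoint_images[0])
    w = len(viewpoint_images[0][0])
    return [[sum(img[y][x] for img in viewpoint_images) / n
             for x in range(w)] for y in range(h)]

# Two 1x2 viewpoint images from adjacent elements.
print(synthetic_aperture([[[1.0, 2.0]], [[3.0, 4.0]]]))  # [[2.0, 3.0]]
```

Averaging N images increases the effective exposure N-fold, which is the source of the signal-to-noise improvement (and the shallower depth of field) noted above.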
Staggered element timing
During capture (and during subsequent display), the timing of the frame sync signals used by different camera elements 230 (and display elements 210) can be randomly staggered to provide more even sampling in the time domain [Wilburn 11]. This yields a smoother perception of motion when the light field video 110 is displayed, but leads to increased motion blur if a synthetic aperture is subsequently used.
Mirror mode
The two-way light field display 300 can also be configured to act as a mirror, i.e. with the captured light field redisplayed in real time. Capture and display focus are managed as described above.
In the simplest mirror mode, each bilateral element redisplays its own captured viewpoint image. This can operate via a sample buffer, a line buffer, or a full viewpoint image buffer per element.
Image processing can also be applied to the light field between capture and redisplay, for example image enhancement, relighting, and spectral remapping.
Audio
The light field display 200 can be configured to reproduce multi-channel digital audio associated with the light field video 110 by including digital-to-analog converters (DACs), amplifiers and electro-acoustic transducers (speakers) mounted at the periphery of the display (or otherwise in its vicinity).
The light-field camera 220 can be configured to capture multi-channel digital audio as part of the light field video 110 by including a set of acoustic sensors (microphones) and analog-to-digital converters (ADCs) mounted at the periphery of the display (or otherwise in its vicinity). Microphones can also be incorporated in the individual camera elements 230.
Each audio channel can be tagged with the physical offset of the microphone used to capture it, to allow phased-array audio processing [VanVeen 88, Tashev 08], for example to reduce ambient noise or to isolate individual remote speakers [Anguera 07] (e.g. after selection via gaze).
Phased-array techniques can also be used to focus the reproduction of a selected audio source (such as a remote speaker) at the local viewer who selected the source [Mizoguchi 04] (e.g. after selection via gaze). This allows multiple viewers to attend to different audio sources with reduced interference.
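The basic phased-array operation referred to above is delay-and-sum beamforming, which might be sketched as follows (illustrative Python; the linear array geometry, sample-granularity delays, and parameter names are assumptions made for the example):

```python
import math

def delay_and_sum(signals, mic_offsets, direction, fs, c=343.0):
    """Steer a linear microphone array toward `direction` (radians from
    broadside) by delaying each channel according to its tagged physical
    offset, then averaging. signals: per-microphone sample lists;
    mic_offsets: positions in metres along the array; fs: sample rate."""
    n = min(len(s) for s in signals)
    out = [0.0] * n
    for sig, x in zip(signals, mic_offsets):
        d = int(round(x * math.sin(direction) / c * fs))  # delay in samples
        for i in range(n):
            j = i - d
            if 0 <= j < len(sig):
                out[i] += sig[j]
    return [v / len(signals) for v in out]

# Two co-located microphones, broadside steering: output is the channel mean.
print(delay_and_sum([[1.0, 2.0], [3.0, 4.0]], [0.0, 0.0], 0.0, 48000))
# [2.0, 3.0]
```

Signals arriving from the steered direction add coherently while off-axis signals do not, which is what allows a selected speaker to be isolated or ambient noise reduced.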
A sufficiently dense speaker array (e.g. with a 5cm or smaller period) can be used to reproduce an acoustic wave field [deVries 99, Spors 08, Vetterli 09], allowing audio to be virtually localized to its individual sources independently of the positions of the viewers (i.e. listeners). This ensures that the auditory perception of the displayed scene is consistent with its visual perception. A correspondingly dense microphone array can be used to capture an actual acoustic wave field, and an acoustic wave field can readily be synthesized from a 3D animation model containing audio sources.
The light field video 110 is therefore extensible to include a time-varying discrete acoustic wave field, i.e. consisting of a dense array of audio channels.
A one-dimensional speaker array can be used to reproduce an acoustic wave field in one dimension, e.g. corresponding to the horizontal plane occupied by the viewers of the display 200. A two-dimensional speaker array can be used to reproduce an acoustic wave field in two dimensions.
Bi-directional display controller architecture
Figure 41 shows a block diagram of the bi-directional display controller 322 described earlier in relation to Figures 11 and 12. The display controller 342 should be regarded as equivalent to the bi-directional display controller 322 operating in display mode, and vice versa. The camera controller 340 should be regarded as equivalent to the bi-directional display controller operating in camera mode, and vice versa.
The bi-directional display controller 322 includes a two-way panel controller 950, which coordinates the display and capture functions of a single bi-directional display panel 800. When the two-way display 300 comprises multiple panels 800, they can be controlled modularly by multiple panel controllers 950.
The display and capture functions of each individual bi-directional display element 310 are controlled by a corresponding bilateral element controller 952. The element controller 952 utilizes a viewpoint image data store 954, which holds the output viewpoint image 502 for display and the captured input viewpoint image 602 (as described earlier in relation to Figures 15, 17 and 18).
During display, the display element 310 reads successive radiance samples 134 from the output viewpoint image 502, while at the same time the panel controller 950 writes new radiance samples 134 to the output viewpoint image 502. The viewpoint image data store 954 only needs to hold part of the output viewpoint image 502 if the reads and writes are well synchronized.
During capture, the panel controller 950 reads successive radiance samples 134 from the input viewpoint image 602, while at the same time the display element 310 writes new radiance samples 134 to the input viewpoint image 602. The viewpoint image data store 954 only needs to hold part of the input viewpoint image 602 if the reads and writes are well synchronized.
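Why well-synchronized reads and writes allow only part of a viewpoint image to be buffered can be illustrated with a small ring of lines (an illustrative Python sketch, not the controller's actual implementation; the class and its parameters are invented for the example):

```python
class RingLineBuffer:
    """Holds only n_lines of a viewpoint image: as long as the writer
    stays at most n_lines ahead of the reader, line indices can wrap
    around the ring without overwriting unread data."""
    def __init__(self, n_lines, line_width):
        self.lines = [[0.0] * line_width for _ in range(n_lines)]
        self.n = n_lines

    def write(self, line_index, samples):
        self.lines[line_index % self.n] = samples

    def read(self, line_index):
        return self.lines[line_index % self.n]

# A 4-line ring standing in for a partial viewpoint image store 954.
buf = RingLineBuffer(n_lines=4, line_width=3)
buf.write(10, [1.0, 2.0, 3.0])   # image line 10 lands in ring slot 2
print(buf.read(10))              # [1.0, 2.0, 3.0]
```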
For the exemplary display configuration, assuming full (rather than partial) viewpoint images, the display has a total memory requirement of 6E11 bytes (600GB) each for display and for capture.
The element controller 952 supports two display modes: standard light field display (from the output viewpoint image 502 in the viewpoint image data store 954) and constant color display (from a constant color register).
A bi-directional display element controller block 956, consisting of a bilateral element controller 952 and its viewpoint image data store 954, is repeated for each bi-directional display element 310.
The panel controller 950 and/or the element controllers 952 can be configured to perform light field decompression before or during display, and light field compression during or after capture. Light field interchange formats and compression are described further below.
Each of the panel controller 950 and the element controllers 952 can include one or more general-purpose programmable processing units with associated instruction and data memory, one or more graphics processing units with associated instruction and data memory [Moreton 05], and special-purpose logic such as audio processing, image/video processing and compression/decompression logic [Hamadani 98], all with sufficient processing power and throughput to support the particular bi-directional display configuration.
Although Figure 41 shows one element controller 952 per display element 310, an element controller 952 can also be configured to control multiple display elements 310.
The panel controller 950 holds a 2D image for display using a 2D image data store 958. As previously described, a 2D image can be displayed by configuring each display element 310 to display a constant color. In this mode, the panel controller 950 writes each pixel of the 2D image to the constant color register of the corresponding element controller 952. Alternatively, the 2D image can be displayed by synthesizing a light field frame 116. In this mode, the panel controller 950 synthesizes the light field frame 116 according to the 3D position assigned to the 2D image, and writes each generated output viewpoint image 122 to its corresponding viewpoint image data store 954.
When operating in collimated mode, the panel controller 950 utilizes a collimated viewpoint image data store 960, which holds a collimated output viewpoint image and a collimated input viewpoint image. As described above, in the collimated display mode every display element 310 displays the same output viewpoint image 122. The panel controller 950 can broadcast the collimated output viewpoint image to the element controllers 952 during display, or the collimated output viewpoint image can be written to each viewpoint image data store 954 before display.
As also described earlier, in the collimated capture mode the collimated viewpoint image can be obtained by averaging multiple input viewpoint images 602. The panel controller 950 can perform this averaging during or after capture.
A network interface 962 allows the panel controller 950 to exchange configuration data and light field video 110 with external devices, and can include a variety of conventional network interfaces to provide the throughput necessary to support light field video 110. For example, it can include multiple 10Gbps or 100Gbps Gigabit Ethernet (GbE) interfaces coupled to optical fiber or electrical wiring.
An input video interface 964 allows an external device to write standard-format video, via the 2D image data store 958, to a display 300 used for 2D display, allowing the display 300 to be used as a conventional 2D display.
When the display 300 operates in the collimated display mode, the input video interface 964 also allows an external device to write collimated light field video 110 to the display as standard-format video, via the collimated viewpoint image data store 960.
When the display 300 operates in the collimated capture mode, an output video interface 966 allows other devices to read collimated light field video 110 from the display as standard-format video. This allows collimated light field video 110 to be easily exchanged between a pair of two-way light field displays 300 using a pair of conventional video interconnects.
A display timing generator 968 generates the global frame sync signals 512 used to control display and capture (as described in relation to Figures 15 and 17 respectively).
If the display is designed to oscillate as described in relation to Figures 34A-34D, a panel motion controller 970 drives the actuator 814 and monitors the position of the rod 816.
The various components of the bi-directional display controller 322 communicate via a high-speed data bus 972. Although the various data transfers described above are performed by the panel controller 950, in practice they can be initiated by the panel controller (or other components) but performed by DMA logic (not shown). The data bus 972 can comprise multiple buses.
Although the various data stores are shown as distinct, they may be implemented as fixed-size or variable-size regions of one or more memory arrays.
Light field interchange format and compression
Although light field video 110 can be exchanged in uncompressed form between compatible devices (including light-field cameras 220, light field displays 200 and other devices), the throughput (and memory) demands of light field video generally motivate the use of compression. The exemplary display configuration has a throughput of 4E13 samples/s (5E14 bits/s; 5,000 × 100GbE links), and requires 6E11 bytes (600GB) of frame memory.
Compression can exploit the full 5D redundancy within a time interval of the light field video 110 (i.e. including redundancy between viewpoints [Chang 06]), or the 4D redundancy within a light field frame 116 [Levoy 96, Levoy 00, Girod 03]. Traditional image and video compression techniques, such as those embodied in the various JPEG and MPEG standards, can also be applied to the individual (time-varying) light field viewpoint images 122. Compression ratios of 100:1 based on 4D redundancy are typical [Levoy 96, Levoy 00].
Stereo and multi-view video, as used in 3D television and video (3DV) systems, contains a small number of sparse viewpoints, and H.264/MPEG-4 supports 5D compression (via its multi-view video coding (MVC) profile) by augmenting the usual spatial and temporal prediction of traditional single-view video with inter-view prediction [Vetro 11]. MVC 5D compression can be applied to dense light field video 110.
When the optional light field depth 136 is available, depth-based compression techniques can be used. Depth-based representations used in 3DV systems include multi-view video plus depth (MVD), surface-based geometric representations (such as textured meshes), and volumetric representations (e.g. point clouds) [Alatan 07, Muller 11].
Using MVD, depth information allows effective inter-view prediction from a sparser set of viewpoints than standard inter-view prediction (i.e. MVC without depth). MVD therefore allows a dense set of viewpoints to be synthesized more effectively from a sparse set of viewpoints, at least partially decoupling the viewpoint density of the interchange format from the viewpoint density of the display [Muller 11].
By supporting 3DV formats, the display 300 also becomes able to exchange 3D video streams with other 3DV devices and systems.
Gaze-directed bi-directional display controller architecture
As shown in Figure 42A, each bi-directional display element 310 has a field of view 980 (corresponding to the light field field of view 124), of which only a small visible subfield 982 is seen by the eyes 240 of a viewer.
It is therefore effective to only capture, transmit, resample, render and display the subset 982 of each element's field of view (suitably expanded to allow for eye movement between frames), since this reduces the required communication and processing bandwidth, as well as the required power. Such selective capture, processing and display relies on face detection.
If the bi-directional display element 310 is a scanning element, then the scan time in one or both scan directions can be reduced by limiting the scan to the visible subfield 982.
Assuming a minimum viewing distance of 200mm at the eye and a visible subfield 982 that is 10mm wide, the (per-viewer) throughput of the exemplary (one-way) display configuration is reduced by two orders of magnitude to 4E11 samples/s (5E12 bits/s; 46 100GbE links uncompressed; one 100GbE link with 46:1 compression), and the memory requirement is reduced to 6E9 bytes (6GB).
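The geometry behind this saving can be sketched directly from the two quoted values:

```python
import math

# Angular width of the 10mm-wide visible subfield 982 at the 200mm minimum
# viewing distance (both values quoted in the text).
visible_mm = 10.0
min_distance_mm = 200.0
visible_deg = 2 * math.degrees(math.atan((visible_mm / 2) / min_distance_mm))
print(round(visible_deg, 2))  # -> 2.86
```

A subfield a few degrees wide, squared against an element field of view of several tens of degrees, is broadly consistent with the quoted two-orders-of-magnitude saving once headroom for two eyes and inter-frame eye movement is included (the element field of view is not quoted here, so this is an order-of-magnitude check only).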
As further shown in Figure 42B, only a small number of display elements 310 intersect the projection 984 of the foveal region of the retina of the eye 240. The light field may therefore be effectively captured, transmitted, resampled, rendered and displayed using a reduced angular sampling rate outside that region (suitably expanded to allow for eye movement between frames). Such selective capture, processing and display relies on gaze estimation.
Figure 43 shows a block diagram of the bi-directional display controller 322 optimized for gaze-directed display and capture.
Each full-field viewpoint image 122 (stored in the viewpoint image datastore 954 of Figure 41) is replaced by a smaller partial viewpoint image 122 (stored in the partial viewpoint image datastore 986 of Figure 43). Each partial viewpoint image covers only the eye-specific visible subfield 982 of its corresponding element (as shown in Figure 42A).
The maximum required size of a partial viewpoint image is a function of the minimum supported viewing distance.
If the display 300 supports multiple viewers (e.g. via multiple display channels), then the capacity of the viewer-specific partial viewpoint image datastore 986 can be increased accordingly. At a minimum, to support a single viewer during display and a single viewer during capture, the partial viewpoint image datastore 986 can accommodate four partial viewpoint images, i.e. one per viewer eye 240.
In addition, as described above in relation to Figure 42B, each partial viewpoint image can be subsampled, to be replaced by a non-subsampled partial viewpoint image whenever its corresponding display element 310 falls within the projection of the fovea. This can yield a further order-of-magnitude reduction in the size of each partial viewpoint image. In this approach a number of non-subsampled foveal viewpoint images are stored in the foveal viewpoint image datastore 988, and each display element 310 that lies within the projection of the fovea is configured to use a specified foveal viewpoint image (in datastore 988) in place of its own subsampled partial viewpoint image (in datastore 986).
The maximum required number of foveal viewpoint images is a function of the maximum viewing distance at which foveal display is supported.
Assuming a maximum foveal viewing distance of 5000mm and a 2-degree fovea 984, the (per-viewer) throughput of the exemplary (one-way) display configuration is further reduced by a factor of six to 7E10 samples/s (8E11 bits/s; 8 100GbE links uncompressed; one 100GbE link with 8:1 compression; one 10GbE link with 80:1 compression), and the memory requirement is reduced to 1E9 bytes (1GB).
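The two quoted foveal parameters fix the size of the display patch that needs full angular sampling, and the quoted reduction factor fixes the resulting rate:

```python
import math

# Diameter of the 2-degree foveal projection 984 on the display at the
# 5000mm maximum supported foveal viewing distance (values from the text).
fovea_deg = 2.0
max_distance_mm = 5000.0
fovea_mm = 2 * max_distance_mm * math.tan(math.radians(fovea_deg / 2))

# The quoted further factor-of-six reduction in per-viewer throughput:
foveated_rate = 4e11 / 6    # ~7E10 samples/s, matching the text
print(round(fovea_mm))      # -> 175
```

Only a patch roughly 175mm across thus ever requires full angular sampling for a given viewer, however large the display.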
When the foveal regions of multiple viewers do not overlap, viewer-specific foveation can be supported for each of those viewers during a single display scan.
Gaze-directed capture works in the same way as gaze-directed display, except that whereas gaze-directed display responds to the positions or gaze of one or more viewers local to the display, gaze-directed capture responds to the positions or gaze of one or more viewers viewing the captured light field on a remote display.
To exploit gaze-directed subsampling, the element controller 952 supports two additional display modes: display using interpolation of radiance samples 134 (from a subsampled output viewpoint image in the partial viewpoint image datastore 986), and foveal display (from a specified foveal output viewpoint image in the foveal viewpoint image datastore 988).
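The first of these modes can be sketched as linear reconstruction from a subsampled scanline of radiance samples; the function name and one-dimensional treatment are illustrative, not taken from the patent:

```python
import numpy as np

def interpolate_radiance(subsampled, factor):
    """Linearly interpolate a subsampled 1-D scanline of radiance samples
    back up to display resolution (illustrative only)."""
    xs = np.arange(subsampled.shape[0]) * factor   # stored sample positions
    xq = np.arange(xs[-1] + 1)                     # positions to display
    return np.interp(xq, xs, subsampled)

# A 2x-subsampled linear ramp reconstructs exactly under linear interpolation:
full = interpolate_radiance(np.array([0.0, 2.0, 4.0]), 2)
```

In practice the element controller would perform the equivalent reconstruction in two dimensions, and only for elements outside the foveal projection.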
Conclusion
Although the present invention has been described with reference to a number of embodiments, those of ordinary skill in the art will appreciate that the invention admits of many alternative embodiments, and that the scope of the invention is limited only by the claims.
References
The contents of the following publications, referred to within this specification, are hereby incorporated herein by reference.
[Anguera 07] X.Anguera et al, "Acoustic Beamforming for Speaker Diarization of Meetings", IEEE Transactions on Audio, Speech, and Language Processing, 15(7), September 2007.
[Alatan 07] A.A.Alatan et al, "Scene Representation Technologies for 3DTV - A Survey", IEEE Transactions on Circuits and Systems for Video Technology, 17(11), November 2007.
[Amir 03] A.Amir et al, "Calibration-Free Eye Gaze Tracking", US Patent No. 6578962, June 17, 2003.
[Aziz 10] H.Aziz and J.A.Coggan, "Stacked OLED Structure", US Patent No. 7750561, July 6, 2010.
[Adelson 91] E.H.Adelson and J.R.Bergen, "The Plenoptic Function and the Elements of Early Vision", Computational Models of Visual Processing, M.Landy and J.A.Movshon (eds), MIT Press, 1991, pp.3-20.
[Balogh 06] T.Balogh, "Method and Apparatus for Displaying 3D Images", US Patent No. 6999071, February 14, 2006.
[Barabas 11] J.Barabas et al, "Diffraction Specific Coherent Panoramagrams of Real Scenes", Proceedings of SPIE, Vol. 7957, 2011.
[Benzie 07] P.Benzie et al, "A Survey of 3DTV Displays: Techniques and Technologies", IEEE Transactions on Circuits and Systems for Video Technology, 17(11), November 2007.
[Berge 07] B.Berge and J.Peseux, "Lens with Variable Focus", US Patent No. RE39874, October 9, 2007.
[Bernstein 02] J.Bernstein, "Multi-Axis Magnetically Actuated Device", US Patent No. 6388789, May 14, 2002.
[Berreman 80] D.W.Berreman, "Variable Focus Liquid Crystal Lens System", US Patent No. 4190330, February 26, 1980.
[Betke 00] M.Betke et al, "Active Detection of Eye Scleras in Real Time", IEEE CVPR Workshop on Human Modeling, Analysis and Synthesis, 2000.
[Brill 98] M.H.Brill et al, "Prime Colors and Color Imaging", The Sixth Color Imaging Conference: Color Science, Systems and Applications, IS&T, November 17-20, 1998.
[Bright 00] W.J.Bright, "Pipeline Analog-to-Digital Conversion System Using Double Sampling and Method of Operation", US Patent No. 6166675, December 26, 2000.
[Champion 12] M.Champion et al, "Scanning Mirror Position Determination", US Patent No. 8173947, May 8, 2012.
[Chang 96] I-C.Chang, "Acousto-Optic Bragg Cell", US Patent No. 5576880, November 19, 1996.
[Chang 06] C-L.Chang et al, "Light Field Compression Using Disparity-Compensated Lifting and Shape Adaptation", IEEE Transactions on Image Processing, 15(4), April 2006.
[Connor 11] R.A.Connor, "Holovision (TM) 3D Imaging with Rotating Light-Emitting Members", US Patent No. 7978407, July 12, 2011.
[deVries 99] D.de Vries and M.M.Boone, "Wave Field Synthesis and Analysis Using Array Technology", Proceedings of the 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, October 17-20, 1999.
[DiCarlo 06] A.DiCarlo et al, "Yokeless Hidden Hinge Digital Micromirror Device", US Patent No. 7011415, March 14, 2006.
[Duchowski 07] A.Duchowski, Eye Tracking Methodology - Theory and Practice, Second Edition, Springer, 2007.
[Fossum 04] E.R.Fossum et al, "Active Pixel Sensor with Intra-Pixel Charge Transfer", US Patent No. 6744068, June 1, 2004.
[Fang 08] J.Fang et al, "Wide-Angle Variable Focal Length Lens System", US Patent No. 7359124, April 15, 2008.
[Georgiev 06] T.Georgiev et al, "Spatio-Angular Resolution Tradeoff in Integral Photography", Eurographics Symposium on Rendering, 2006.
[Georgiev 09] T.Georgiev et al, "Plenoptic Camera", US Patent No. 7620309, November 17, 2009.
[Gerhard 00] G.J.Gerhard et al, "Scanned Display with Pinch, Timing and Distortion Correction", US Patent No. 6140979, October 31, 2000.
[Girod 03] B.Girod et al, "Light Field Compression Using Disparity-Compensated Lifting", 2003 IEEE International Conference on Multimedia and Expo, July 2003.
[Glassner 89] A.S.Glassner (ed.), An Introduction to Ray Tracing, Academic Press, 1989.
[Hamadani 98] M.Hamadani and R-S.Kao, "MPEG Encoding and Decoding System for Multimedia Applications", US Patent No. 5845083, December 1, 1998.
[Hansen 10] D.W.Hansen and Q.Ji, "In the Eye of the Beholder: A Survey of Models for Eyes and Gaze", IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3), March 2010.
[Hartridge 22] H.Hartridge, "Visual Acuity and the Resolving Power of the Eye", The Journal of Physiology, 57(1-2), December 22, 1922.
[Higuchi 10] Y.Higuchi and K.Omae, "Method of Manufacturing Nitride Semiconductor Light Emitting Element and Nitride Semiconductor Light Emitting Element", US Patent Application Publication No. 2010/0098127, April 22, 2010.
[Hirabayashi 95] Y.Hirabayashi et al, "Moving Magnet-Type Actuator", US Patent No. 5434549, July 18, 1995.
[Hoffman 08] D.M.Hoffman et al, "Vergence-Accommodation Conflicts Hinder Visual Performance and Cause Visual Fatigue", Journal of Vision, 8(3):33, March 28, 2008.
[Hornbeck 96] L.J.Hornbeck, "Active Yoke Hidden Hinge Digital Micromirror Device", US Patent No. 5535047, July 9, 1996.
[Hua 11] G.Hua et al, "Face Recognition Using Discriminatively Trained Orthogonal Tensor Projections", US Patent No. 7936906, May 3, 2011.
[Imai 11] T.Imai et al, "Variable-Focal Length Lens", US Patent No. 8014061, September 6, 2011.
[Isaksen 00] A.Isaksen et al, "Dynamically Reparameterized Light Fields", Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ACM, 2000.
[Jacob 07] S.A.Jacob, "Variable Focal Length Electro-Optic Lens", US Patent Application Publication No. 2007/0070491, March 29, 2007.
[Jones 03] M.J.Jones and P.Viola, "Fast Multi-View Face Detection", Mitsubishi Electric Research Laboratories TR2003-96, August 2003.
[Jones 06] M.J.Jones and P.Viola, "Method and System for Object Detection in Digital Images", US Patent No. 7099510, August 29, 2006.
[Kasahara 11] D.Kasahara et al, "Demonstration of Blue and Green GaN-Based Vertical-Cavity Surface-Emitting Lasers by Current Injection at Room Temperature", Applied Physics Express, 4(7), July 2011.
[Kolb 09] A.Kolb et al, "Time-of-Flight Sensors in Computer Graphics", Eurographics Conference, 2009.
[Kobayashi 91] K.Kobayashi, "Laser Beam Scanning System", US Patent No. 4992858, February 12, 1991.
[Koike 08] T.Koike et al, "Three-Dimensional Display Device", US Patent Application Publication No. 2008/0036759, February 14, 2008.
[Kowel 86] G.T.Kowel et al, "Adaptive Liquid Crystal Lens", US Patent No. 4572616, February 25, 1986.
[Lazaros 08] N.Lazaros et al, "Review of Stereo Vision Algorithms: from Software to Hardware", International Journal of Optomechatronics, 2, 2008.
[Levinson 96] R.A.Levinson and S.Ngo, "Pipelined Analog to Digital Converter", US Patent No. 5572212, November 5, 1996.
[Levoy 96] M.Levoy and P.Hanrahan, "Light Field Rendering", Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ACM, 1996.
[Levoy 00] M.Levoy and P.Hanrahan, "Method and System for Light Field Rendering", US Patent No. 6097394, August 1, 2000.
[Lin 11] H-C.Lin et al, "A Review of Electrically Tunable Focusing Liquid Crystal Lenses", Transactions on Electrical and Electronic Materials, 12(6), December 25, 2011.
[Lienhart 03] R.Lienhart et al, "A Detector Tree of Boosted Classifiers for Real-Time Object Detection and Tracking", Proceedings of the 2003 International Conference on Multimedia and Expo, Volume 1, 2003.
[Love 09] G.D.Love et al, "High-Speed Switchable Lens Enables the Development of a Volumetric Stereoscopic Display", Optics Express, 17(18), August 31, 2009.
[Lu 09] T-C.Lu et al, "Development of GaN-Based Vertical Cavity Surface-Emitting Lasers", IEEE Journal of Selected Topics in Quantum Electronics, 15(3), May/June 2009.
[Maserjian 89] J.Maserjian, "Multiple Quantum Well Optical Modulator", US Patent No. 4818079, April 4, 1989.
[Melville 97] C.D.Melville, "Position Detection of Mechanical Resonant Scanner Mirror", US Patent No. 5694237, 1997.
[Merrill 05] R.B.Merrill and R.A.Martin, "Vertical Color Filter Sensor Group and Semiconductor Integrated Circuit Fabrication Method for Fabricating Same", US Patent No. 6894265, May 17, 2005.
[Mizoguchi 04] H.Mizoguchi et al, "Visually Steerable Sound Beam Forming System Based On Face Tracking and Speaker Array", Proceedings of the 17th International Conference on Pattern Recognition, August 23-26, 2004.
[Moreton 05] H.P.Moreton et al, "User Programmable Geometry Engine", US Patent No. 6900810, May 31, 2005.
[Muller 11] K.Muller et al, "3-D Video Representation Using Depth Maps", Proceedings of the IEEE, 99(4), April 2011.
[Naganuma 09] K.Naganuma et al, "High-Resolution KTN Optical Beam Scanner", NTT Technical Review, 7(12), December 12, 2009.
[Nakamura 10] K.Nakamura et al, "Electrooptic Device", US Patent No. 7764302, July 27, 2010.
[Naumov 99] A.F.Naumov et al, "Control Optimization of Spherical Modal Liquid Crystal Lenses", Optics Express, 4(9), April 1999.
[Neukermans 97] A.P.Neukermans and T.G.Slater, "Micromachined Torsional Scanner", US Patent No. 5629790, May 13, 1997.
[Ng 05a] R.Ng et al, "Light Field Photography with a Hand-held Plenoptic Camera", Stanford Tech Report CTSR 2005-02, 2005.
[Ng 05b] R.Ng, "Fourier Slice Photography", ACM Transactions on Graphics, 24(3), July 2005.
[Nishio 09] M.Nishio and K.Murakami, "Deformable Mirror", US Patent No. 7474455, January 6, 2009.
[Oggier 11] T.Oggier et al, "On-Chip Time-Based Digital Conversion of Pixel Outputs", US Patent No. 7889257, February 15, 2011.
[Ong 93] E.Ong et al, "Static Accommodation in Congenital Nystagmus", Investigative Ophthalmology & Visual Science, 34(1), January 1993.
[Palmer 99] S.E.Palmer, Vision Science - Photons to Phenomenology, MIT Press, 1999.
[Patel 78] M.P.Patel, "Bucket Brigade Circuit", US Patent No. 4130766, December 19, 1978.
[Petersen 82] C.C.Petersen, "Linear Motor", US Patent No. 4363980, December 14, 1982.
[Plainis 05] S.Plainis et al, "The Effect of Ocular Aberrations on Steady-State Errors of Accommodative Response", Journal of Vision, 5(5):7, May 23, 2005.
[Qi 05] J.Qi et al, "Tailored Elliptical Holographic Diffusers for LCD Applications", Journal of the SID, 13(9), 2005.
[Saleh 07] B.E.A.Saleh and M.C.Teich, Fundamentals of Photonics (Second Edition), Wiley, 2007.
[Sangster 77] F.L.J.Sangster, "Charge Transfer Device", US Patent No. 4001862, January 4, 1977.
[Schwerdtner 06] A.Schwerdtner et al, "Device for Holographic Reconstruction of Three-Dimensional Scenes", US Patent Application Publication No. 2006/0250671, November 9, 2006.
[Seitz 06] S.M.Seitz et al, "A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 17-22, 2006.
[Shechter 02] R.Shechter et al, "Compact Beam Expander with Linear Gratings", Applied Optics, 41(7), March 1, 2002.
[Sharp 00] G.D.Sharp, "Switchable Achromatic Polarization Rotator", US Patent No. 6097461, August 1, 2000.
[Shibaguchi 92] T.Shibaguchi, "Electrooptic Device", US Patent No. 5140454, August 18, 1992.
[Shih 00] S-W.Shih and J.Liu, "A Calibration-Free Gaze Tracking Technique", Proceedings of the 15th International Conference on Pattern Recognition, September 3-7, 2000.
[Simmonds 11] M.D.Simmonds and R.K.Howard, "Projection Display", US Patent No. 7907342, March 15, 2011.
[Spors 08] S.Spors et al, "The Theory of Wave Field Synthesis Revisited", 124th Convention of the Audio Engineering Society, May 17-20, 2008.
[Svelto 10] O.Svelto, Principles of Lasers (Fifth Edition), Springer, 2010.
[Szeliski 99] R.S.Szeliski and P.Golland, "Method for Performing Stereo Matching to Recover Depths, Colors and Opacities of Surface Elements", US Patent No. 5917937, June 29, 1999.
[Tashev 08] I.Tashev and H.Malvar, "System and Method for Beamforming Using a Microphone Array", US Patent No. 7415117, August 19, 2008.
[Titus 99] C.M.Titus et al, "Efficient, Accurate Liquid Crystal Digital Light Deflector", Proc. SPIE Vol. 3633, Diffractive and Holographic Technologies, Systems, and Spatial Light Modulators VI, June 1999.
[Tompsett 78] M.F.Tompsett, "Charge Transfer Imaging Devices", US Patent No. 4085456, April 18, 1978.
[Turk 92] M.Turk and A.P.Pentland, "Face Recognition System", US Patent No. 5164992, November 17, 1992.
[Turner 05] A.M.Turner et al, "Pulse Drive of Resonant MEMS Devices", US Patent No. 6965177, November 15, 2005.
[Urey 11] H.Urey and M.Sayinta, "Apparatus for Displaying 3D Images", US Patent Application Publication No. 2011/0001804, January 6, 2011.
[Vallese 70] L.M.Vallese, "Light Scanning Device", US Patent No. 3502879, March 24, 1970.
[VanVeen 88] B.D.Van Veen and K.M.Buckley, "Beamforming: A Versatile Approach to Spatial Filtering", IEEE ASSP Magazine, April 1988.
[Vetro 11] A.Vetro et al, "Overview of the Stereo and Multiview Video Coding Extensions of the H.264/MPEG-4 AVC Standard", Proceedings of the IEEE, 99(4), April 2011.
[Vetterli 09] M.Vetterli and F.P.C.Pinto, "Audio Wave Field Encoding", US Patent Application Publication No. 2009/0248425, October 1, 2009.
[vonGunten 97] M.K.von Gunten and P.Bevis, "Broad Band Polarizing Beam Splitter", US Patent No. 5625491, April 29, 1997.
[Watanabe 96] M.Watanabe, "Minimal Operator Set for Passive Depth from Defocus", Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 18-20, 1996.
[Wilburn 05] B.Wilburn et al, "High Performance Imaging Using Large Camera Arrays", ACM Transactions on Graphics - Proceedings of ACM SIGGRAPH 2005, 2005.
[Wilburn 11] B.Wilburn et al, "Apparatus and Method for Capturing a Scene Using Staggered Triggering of Dense Camera Arrays", US Patent No. 8027531, September 27, 2011.
[Xiaobo 10] C.Xiaobo et al, "A 12bit 100MS/s Pipelined Analog to Digital Converter Without Calibration", Journal of Semiconductors, 31(11), November 2010.
[Yan 06] J.Yan et al, "MEMS Scanner With Dual Magnetic and Capacitive Drive", US Patent No. 7071594, July 4, 2006.
[Yaras 10] F.Yaras et al, "State of the Art in Holographic Displays", Journal of Display Technology, 6(10), October 2010.
Claims (20)
1. A two-way light field display device, comprising an array of two-way light field display elements, each two-way light field display element comprising:
(a) a scanner for scanning an input beam and an output beam across a two-dimensional field of view;
(b) an input focus modulator for modulating the focus of the input beam over time;
(c) a radiance sensor for sensing the radiance of the input beam over time;
(d) a radiance sampler for sampling the radiance of the input beam at discrete times;
(e) a beam generator for generating the output beam;
(f) a radiance modulator for modulating the radiance of the output beam over time; and
(g) an output focus modulator for modulating the focus of the output beam over time.
2. The device according to claim 1, wherein the scanner is selected from the group comprising: an electromechanical scanning mirror, an addressable deflector stack, an acousto-optic scanner and an electro-optic scanner.
3. The device according to claim 1, wherein the scanner comprises a biaxial electromechanical scanning mirror having at least one drive mechanism selected from the group comprising: an electrostatic drive mechanism, a magnetic drive mechanism and a capacitive drive mechanism.
4. The device according to claim 1, wherein the scanner comprises a biaxial electromechanical scanning mirror, the biaxial electromechanical scanning mirror comprising a mirror, a platform, an inner frame and an outer frame, the mirror attached to the platform via a post, the platform attached to the inner frame via a first pair of hinges, and the inner frame attached to the outer frame via a second pair of hinges, the first pair of hinges arranged substantially perpendicular to the second pair of hinges, thereby allowing biaxial movement of the mirror.
5. The device according to claim 1, wherein the scanner comprises: a first scanner for scanning the input beam and the output beam in a first direction; and a second scanner for simultaneously scanning the input beam and the output beam in a second direction, the second direction substantially perpendicular to the first direction.
6. The device according to claim 5, wherein the first and second scanners are selected from the group comprising: electromechanical scanning mirrors, addressable deflector stacks, acousto-optic scanners and electro-optic scanners.
7. The device according to claim 1, wherein the input focus modulator is selected from the group comprising: a liquid crystal lens, a liquid lens, a deformable membrane mirror, a deformable-membrane liquid-filled lens, an addressable lens stack and an electro-optic lens.
8. The device according to claim 1, wherein the radiance sensor is selected from the group comprising: a monochromatic radiance sensor and a polychromatic radiance sensor.
9. The device according to claim 1, wherein the radiance sensor comprises at least one photodetector selected from the group comprising: a photodiode operated in photoconductive mode, a photodiode operated in photovoltaic mode, a phototransistor and a photoresistor.
10. The device according to claim 1, wherein the radiance sensor comprises multiple photodetectors, each photodetector adapted to a different spectral response.
11. The device according to claim 1, wherein the radiance sampler comprises at least one analog-to-digital converter (ADC).
12. The device according to claim 1, wherein the beam generator comprises at least one light emitter selected from the group comprising: a laser, a laser diode, a light-emitting diode, a fluorescent lamp and an incandescent lamp.
13. The device according to claim 1, wherein the beam generator comprises multiple light emitters, each light emitter having a different emission spectrum, and each light emitter selected from the group comprising: a laser, a laser diode, a light-emitting diode, a fluorescent lamp and an incandescent lamp.
14. The device according to claim 1, wherein the radiance modulator is intrinsic to the beam generator.
15. The device according to claim 1, wherein the radiance modulator is selected from the group comprising: an acousto-optic modulator, an absorptive electro-optic modulator and a refractive electro-optic modulator.
16. The device according to claim 1, further comprising an angular reconstruction filter acting on the output beam.
17. The device according to claim 1, wherein the output focus modulator is selected from the group comprising: a liquid crystal lens, a liquid lens, a deformable membrane mirror, a deformable-membrane liquid-filled lens, an addressable lens stack and an electro-optic lens.
18. The device according to claim 1, further comprising at least one actuator for oscillating the array between at least two positions.
19. A method for capturing an input light field and displaying an output light field, the method comprising, for each of an array of positions, the steps of:
(a) scanning an input beam and an output beam across a two-dimensional field of view;
(b) modulating the focus of the input beam over time;
(c) sensing the radiance of the input beam over time;
(d) sampling the radiance of the input beam at discrete times;
(e) generating the output beam;
(f) modulating the radiance of the output beam over time; and
(g) modulating the focus of the output beam over time.
20. The method according to claim 19, wherein step (b) comprises modulating the focus of the input beam according to a specified input depth value corresponding to the position and instantaneous direction of the input beam as it is scanned across the field of view, and step (g) comprises modulating the focus of the output beam according to a specified output depth value corresponding to the position and instantaneous direction of the output beam as it is scanned across the field of view.
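As a rough illustration only, the per-position loop of claims 19 and 20 can be sketched in Python. Everything below is an assumption introduced for illustration — the `SimulatedElement` interface, the focus-as-reciprocal-of-depth (diopter) rule, and all names are hypothetical and are not part of the patented implementation.

```python
class SimulatedElement:
    """Hypothetical stand-in for one two-way light field display element."""

    def __init__(self, scene_radiance):
        self.scene = scene_radiance   # radiance the sensor would see per scan direction
        self.direction = (0, 0)
        self.input_focus = None       # diopters (assumed 1/depth rule)
        self.output_focus = None
        self.output_radiance = None

    def point_scanner(self, row, col):
        # Step (a): scan both the input and output beams to this direction.
        self.direction = (row, col)

    def set_input_focus(self, depth):
        # Step (b): modulate input focus from the specified input depth (claim 20).
        self.input_focus = 1.0 / depth

    def sample_radiance(self):
        # Steps (c)-(d): sense the input beam radiance and sample it at this discrete time.
        r, c = self.direction
        return self.scene[r][c]

    def emit(self, radiance, depth):
        # Steps (e)-(g): generate the output beam, modulate its radiance and focus.
        self.output_radiance = radiance
        self.output_focus = 1.0 / depth


def scan_frame(element, in_depths, out_depths, out_radiances):
    """One frame of the claim-19 method: capture the input light field while
    displaying the output light field over the same 2D field of view."""
    captured = []
    rows, cols = len(in_depths), len(in_depths[0])
    for r in range(rows):
        for c in range(cols):
            element.point_scanner(r, c)
            element.set_input_focus(in_depths[r][c])
            captured.append(element.sample_radiance())
            element.emit(out_radiances[r][c], out_depths[r][c])
    return captured


scene = [[0.1, 0.2], [0.3, 0.4]]
elem = SimulatedElement(scene)
frame = scan_frame(elem,
                   in_depths=[[2.0, 2.0], [4.0, 4.0]],
                   out_depths=[[1.0, 1.0], [1.0, 1.0]],
                   out_radiances=[[0.5, 0.5], [0.5, 0.5]])
```

The sketch serializes capture and display within a single raster scan, mirroring how one scanner handles both the input and output beams in claim 1.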
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/567,010 US8754829B2 (en) | 2012-08-04 | 2012-08-04 | Scanning light field camera and display |
US13/567,010 | 2012-08-04 | ||
PCT/IB2013/056412 WO2014024121A1 (en) | 2012-08-04 | 2013-08-05 | Scanning two-way light field camera and display |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104704821A CN104704821A (en) | 2015-06-10 |
CN104704821B true CN104704821B (en) | 2017-10-24 |
Family
ID=50025045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201380052016.0A Active CN104704821B (en) | 2012-08-04 | 2013-08-05 | Scan two-way light-field camera and display |
Country Status (5)
Country | Link |
---|---|
US (12) | US8754829B2 (en) |
EP (1) | EP2880864A4 (en) |
CN (1) | CN104704821B (en) |
AU (1) | AU2013301200B2 (en) |
WO (1) | WO2014024121A1 (en) |
Families Citing this family (222)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
WO2009094399A1 (en) | 2008-01-22 | 2009-07-30 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted projection display using reflective microdisplays |
US20110075257A1 (en) | 2009-09-14 | 2011-03-31 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | 3-Dimensional electro-optical see-through displays |
EP2403234A1 (en) * | 2010-06-29 | 2012-01-04 | Koninklijke Philips Electronics N.V. | Method and system for constructing a compound image from data obtained by an array of image capturing devices |
US10156722B2 (en) | 2010-12-24 | 2018-12-18 | Magic Leap, Inc. | Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality |
US8885882B1 (en) * | 2011-07-14 | 2014-11-11 | The Research Foundation For The State University Of New York | Real time eye tracking for human computer interaction |
EP3761072A1 (en) | 2012-01-24 | 2021-01-06 | Augmented Vision Inc. | Compact eye-tracked head-mounted display |
US8995785B2 (en) * | 2012-02-28 | 2015-03-31 | Lytro, Inc. | Light-field processing and analysis, camera control, and user interfaces and interaction on light-field capture devices |
US9858649B2 (en) | 2015-09-30 | 2018-01-02 | Lytro, Inc. | Depth-based image blurring |
WO2014039555A1 (en) * | 2012-09-04 | 2014-03-13 | SoliDDD Corp. | Switchable lenticular array for autostereoscopic video displays |
CN104104934B (en) * | 2012-10-04 | 2019-02-19 | 陈笛 | The component and method of the more spectators' Three-dimensional Displays of glasses-free |
EP2910022B1 (en) | 2012-10-18 | 2023-07-12 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Stereoscopic displays with addressable focus cues |
TWI508524B (en) * | 2012-10-26 | 2015-11-11 | Quanta Comp Inc | Method and system for automatic adjusting auto-stereoscopic 3d display device |
US9001226B1 (en) * | 2012-12-04 | 2015-04-07 | Lytro, Inc. | Capturing and relighting images using multiple devices |
US9699433B2 (en) | 2013-01-24 | 2017-07-04 | Yuchen Zhou | Method and apparatus to produce re-focusable vision with detecting re-focusing event from human eye |
US9225969B2 (en) * | 2013-02-11 | 2015-12-29 | EchoPixel, Inc. | Graphical system with enhanced stereopsis |
US9477087B2 (en) * | 2013-03-12 | 2016-10-25 | 3DIcon Corporation | Holoform 3D projection display |
WO2014144989A1 (en) * | 2013-03-15 | 2014-09-18 | Ostendo Technologies, Inc. | 3d light field displays and methods with improved viewing angle depth and resolution |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
US9519144B2 (en) | 2013-05-17 | 2016-12-13 | Nvidia Corporation | System, method, and computer program product to produce images for a near-eye light field display having a defect |
US9582922B2 (en) | 2013-05-17 | 2017-02-28 | Nvidia Corporation | System, method, and computer program product to produce images for a near-eye light field display |
KR102093341B1 (en) * | 2013-06-24 | 2020-03-25 | 삼성전자주식회사 | OASLM based holographic display |
US10203399B2 (en) | 2013-11-12 | 2019-02-12 | Big Sky Financial Corporation | Methods and apparatus for array based LiDAR systems with reduced interference |
US11402629B2 (en) | 2013-11-27 | 2022-08-02 | Magic Leap, Inc. | Separated pupil optical systems for virtual and augmented reality and methods for displaying images using same |
US9594247B2 (en) * | 2013-12-19 | 2017-03-14 | Nvidia Corporation | System, method, and computer program product for a pinlight see-through near-eye display |
US10244223B2 (en) * | 2014-01-10 | 2019-03-26 | Ostendo Technologies, Inc. | Methods for full parallax compressed light field 3D imaging systems |
US9841599B2 (en) * | 2014-06-05 | 2017-12-12 | Osterhout Group, Inc. | Optical configurations for head-worn see-through displays |
EP4071537A1 (en) | 2014-01-31 | 2022-10-12 | Magic Leap, Inc. | Multi-focal display system |
EP3100098B8 (en) * | 2014-01-31 | 2022-10-05 | Magic Leap, Inc. | Multi-focal display system and method |
EP3114526B1 (en) | 2014-03-05 | 2021-10-20 | Arizona Board of Regents on Behalf of the University of Arizona | Wearable 3d augmented reality display |
US10417824B2 (en) | 2014-03-25 | 2019-09-17 | Apple Inc. | Method and system for representing a virtual object in a view of a real environment |
US9360554B2 (en) | 2014-04-11 | 2016-06-07 | Facet Technology Corp. | Methods and apparatus for object detection and identification in a multiple detector lidar array |
US9414087B2 (en) * | 2014-04-24 | 2016-08-09 | Lytro, Inc. | Compression of light field images |
US9712820B2 (en) * | 2014-04-24 | 2017-07-18 | Lytro, Inc. | Predictive light field compression |
US9672387B2 (en) * | 2014-04-28 | 2017-06-06 | Sony Corporation | Operating a display of a user equipment |
CN103990998B (en) | 2014-05-20 | 2017-01-25 | 广东工业大学 | Stiffness frequency adjustable two-dimensional micro-motion platform based on stress stiffening principle |
AU2015266670B2 (en) | 2014-05-30 | 2019-05-09 | Magic Leap, Inc. | Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality |
EP4235252A1 (en) | 2014-05-30 | 2023-08-30 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual augmented reality |
US10618131B2 (en) | 2014-06-05 | 2020-04-14 | Nlight, Inc. | Laser patterning skew correction |
TWI556624B (en) * | 2014-07-18 | 2016-11-01 | 友達光電股份有限公司 | Image displaying method and image dispaly device |
KR101638412B1 (en) * | 2014-08-19 | 2016-07-21 | 한국과학기술연구원 | A three-dimensional image display apparatus using projection optical system of laser beam scanning type |
KR102281690B1 (en) * | 2014-12-01 | 2021-07-26 | 삼성전자주식회사 | Method and apparatus for generating 3 dimension image |
KR102270055B1 (en) | 2014-12-29 | 2021-06-25 | 매직 립, 인코포레이티드 | Light projector using an acousto-optical control device |
US9883007B2 (en) * | 2015-01-20 | 2018-01-30 | Microsoft Technology Licensing, Llc | Downloading an application to an apparatus |
US10531071B2 (en) * | 2015-01-21 | 2020-01-07 | Nextvr Inc. | Methods and apparatus for environmental measurements and/or stereoscopic image capture |
US10176961B2 (en) | 2015-02-09 | 2019-01-08 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Small portable night vision system |
US11468639B2 (en) * | 2015-02-20 | 2022-10-11 | Microsoft Technology Licensing, Llc | Selective occlusion system for augmented reality devices |
US10036801B2 (en) | 2015-03-05 | 2018-07-31 | Big Sky Financial Corporation | Methods and apparatus for increased precision and improved range in a multiple detector LiDAR array |
CN106034232B (en) * | 2015-03-12 | 2018-03-23 | 北京智谷睿拓技术服务有限公司 | display control method and device |
EP3070942B1 (en) * | 2015-03-17 | 2023-11-22 | InterDigital CE Patent Holdings | Method and apparatus for displaying light field video data |
WO2016154026A2 (en) | 2015-03-20 | 2016-09-29 | Castar, Inc. | Retroreflective light field display |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
JP2018524952A (en) * | 2015-04-21 | 2018-08-30 | ユニバーシティー オブ ロチェスター | Cloaking system and method |
WO2016172385A1 (en) | 2015-04-23 | 2016-10-27 | Ostendo Technologies, Inc. | Methods for full parallax compressed light field synthesis utilizing depth information |
JP6866299B2 (en) | 2015-04-23 | 2021-04-28 | オステンド・テクノロジーズ・インコーポレーテッド | Methods and equipment for omnidirectional parallax light field display systems |
EP4236310A3 (en) | 2015-04-30 | 2023-09-20 | Google LLC | Virtual eyeglass set for viewing actual scene that corrects for different location of lenses than eyes |
CN107533166B (en) * | 2015-05-04 | 2021-03-16 | 奇跃公司 | Split pupil optical system for virtual and augmented reality and method for displaying image using the same |
US20160343173A1 (en) * | 2015-05-20 | 2016-11-24 | Daqri, Llc | Acousto-optical display for augmented reality |
EP3304172A4 (en) * | 2015-05-28 | 2019-01-02 | North Inc. | Systems, devices, and methods that integrate eye tracking and scanning laser projection in wearable heads-up displays |
EP3099076B1 (en) | 2015-05-29 | 2019-08-07 | InterDigital CE Patent Holdings | Method for displaying a content from 4d light field data |
EP3099079A1 (en) | 2015-05-29 | 2016-11-30 | Thomson Licensing | Method for displaying, in a vehicle, a content from 4d light field data associated with a scene |
EP3099077B1 (en) | 2015-05-29 | 2020-07-15 | InterDigital CE Patent Holdings | Method for displaying a content from 4d light field data |
JP6558088B2 (en) * | 2015-06-12 | 2019-08-14 | リコーイメージング株式会社 | Imaging apparatus, imaging control apparatus, and imaging control method |
US9979909B2 (en) | 2015-07-24 | 2018-05-22 | Lytro, Inc. | Automatic lens flare detection and correction for light-field images |
US10761599B2 (en) * | 2015-08-04 | 2020-09-01 | Artilux, Inc. | Eye gesture tracking |
CA2995978A1 (en) * | 2015-08-18 | 2017-02-23 | Magic Leap, Inc. | Virtual and augmented reality systems and methods |
JP6663193B2 (en) * | 2015-09-10 | 2020-03-11 | キヤノン株式会社 | Imaging device and control method thereof |
BR112018005399A2 (en) * | 2015-09-17 | 2018-10-09 | Thomson Licensing | light field data representation |
EP3144885A1 (en) | 2015-09-17 | 2017-03-22 | Thomson Licensing | Light field data representation |
EP3144879A1 (en) * | 2015-09-17 | 2017-03-22 | Thomson Licensing | A method and an apparatus for generating data representative of a light field |
EP3151534A1 (en) | 2015-09-29 | 2017-04-05 | Thomson Licensing | Method of refocusing images captured by a plenoptic camera and audio based refocusing image system |
US9787963B2 (en) * | 2015-10-08 | 2017-10-10 | Soraa Laser Diode, Inc. | Laser lighting having selective resolution |
US11609427B2 (en) | 2015-10-16 | 2023-03-21 | Ostendo Technologies, Inc. | Dual-mode augmented/virtual reality (AR/VR) near-eye wearable displays |
US11106273B2 (en) | 2015-10-30 | 2021-08-31 | Ostendo Technologies, Inc. | System and methods for on-body gestural interfaces and projection displays |
NZ742518A (en) | 2015-11-04 | 2019-08-30 | Magic Leap Inc | Dynamic display calibration based on eye-tracking |
US10448030B2 (en) * | 2015-11-16 | 2019-10-15 | Ostendo Technologies, Inc. | Content adaptive light field compression |
US11179807B2 (en) | 2015-11-23 | 2021-11-23 | Nlight, Inc. | Fine-scale temporal control for laser material processing |
EP3978184A1 (en) | 2015-11-23 | 2022-04-06 | NLIGHT, Inc. | Method and apparatus for fine-scale temporal control for laser beam material processing |
WO2017100485A1 (en) * | 2015-12-08 | 2017-06-15 | The Regents Of The University Of Michigan | 3d mems scanner for real-time cross-sectional endomicroscopy |
US10345594B2 (en) | 2015-12-18 | 2019-07-09 | Ostendo Technologies, Inc. | Systems and methods for augmented near-eye wearable displays |
US10440307B2 (en) * | 2015-12-22 | 2019-10-08 | Casio Computer Co., Ltd. | Image processing device, image processing method and medium |
US10578882B2 (en) | 2015-12-28 | 2020-03-03 | Ostendo Technologies, Inc. | Non-telecentric emissive micro-pixel array light modulators and methods of fabrication thereof |
CN106254857B (en) * | 2015-12-31 | 2018-05-04 | 北京智谷睿拓技术服务有限公司 | Light field display control method and device, light field display device |
CN106254804B (en) * | 2016-01-18 | 2019-03-29 | 北京智谷睿拓技术服务有限公司 | Light field display control method and device, light field show equipment |
CN109074674B (en) * | 2016-02-26 | 2023-10-24 | 南加州大学 | Optimized volumetric imaging with selective volumetric illumination and light field detection |
US9866816B2 (en) | 2016-03-03 | 2018-01-09 | 4D Intellectual Properties, Llc | Methods and apparatus for an active pulsed 4D camera for image acquisition and analysis |
US10353203B2 (en) | 2016-04-05 | 2019-07-16 | Ostendo Technologies, Inc. | Augmented/virtual reality near-eye displays with edge imaging lens comprising a plurality of display devices |
CN113014906B (en) | 2016-04-12 | 2023-06-30 | 奎蒂安特有限公司 | 3D scene reconstruction method, system and computer program storage medium |
US9726896B2 (en) * | 2016-04-21 | 2017-08-08 | Maximilian Ralph Peter von und zu Liechtenstein | Virtual monitor display technique for augmented reality environments |
CN109313509B (en) * | 2016-04-21 | 2022-01-07 | 奇跃公司 | Visual halo around the field of vision |
US10453431B2 (en) * | 2016-04-28 | 2019-10-22 | Ostendo Technologies, Inc. | Integrated near-far light field display systems |
US10522106B2 (en) | 2016-05-05 | 2019-12-31 | Ostendo Technologies, Inc. | Methods and apparatus for active transparency modulation |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
EP3264759A1 (en) | 2016-06-30 | 2018-01-03 | Thomson Licensing | An apparatus and a method for generating data representative of a pixel beam |
EP3264755A1 (en) * | 2016-06-30 | 2018-01-03 | Thomson Licensing | Plenoptic sub aperture view shuffling for a richer color sampling |
KR101894375B1 (en) * | 2016-07-13 | 2018-09-04 | 이화여자대학교 산학협력단 | Scanning micromirror |
CA3030848A1 (en) | 2016-07-15 | 2018-01-18 | Light Field Lab, Inc. | Energy propagation and transverse anderson localization with two-dimensional, light field and holographic relays |
US10215983B2 (en) | 2016-07-19 | 2019-02-26 | The Board Of Trustees Of The University Of Illinois | Method and system for near-eye three dimensional display |
EP3493538B1 (en) * | 2016-07-27 | 2023-05-03 | Toppan Printing Co., Ltd. | Color calibration device, color calibration system, color calibration hologram, color calibration method, and program |
EP3494457A1 (en) | 2016-08-05 | 2019-06-12 | University of Rochester | Virtual window |
US9766060B1 (en) * | 2016-08-12 | 2017-09-19 | Microvision, Inc. | Devices and methods for adjustable resolution depth mapping |
US10496238B2 (en) | 2016-08-22 | 2019-12-03 | University Of Rochester | 3D display ray principles and methods, zooming, and real-time demonstration |
EP3519871A1 (en) | 2016-09-29 | 2019-08-07 | NLIGHT, Inc. | Adjustable beam characteristics |
CN116699634A (en) | 2016-11-10 | 2023-09-05 | 莱卡地球系统公开股份有限公司 | Laser scanner |
WO2018098436A1 (en) | 2016-11-28 | 2018-05-31 | Spy Eye, Llc | Unobtrusive eye mounted display |
US10091496B2 (en) * | 2016-11-28 | 2018-10-02 | X Development Llc | Systems, devices, and methods for calibrating a light field projection system |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
CN108206902B (en) * | 2016-12-16 | 2020-09-22 | 深圳超多维科技有限公司 | Light field camera |
US10088686B2 (en) | 2016-12-16 | 2018-10-02 | Microsoft Technology Licensing, Llc | MEMS laser scanner having enlarged FOV |
CN108206901B (en) * | 2016-12-16 | 2020-09-22 | 深圳超多维科技有限公司 | Light field camera |
US10685492B2 (en) | 2016-12-22 | 2020-06-16 | Choi Enterprise, LLC | Switchable virtual reality and augmented/mixed reality display device, and light field methods |
US10448824B2 (en) * | 2016-12-29 | 2019-10-22 | Intel Corporation | Focus adjustment method and apparatus |
US10904514B2 (en) | 2017-02-09 | 2021-01-26 | Facebook Technologies, Llc | Polarization illumination using acousto-optic structured light in 3D depth sensing |
US10848721B2 (en) * | 2017-03-07 | 2020-11-24 | Goertek Inc. | Laser projection device and laser projection system |
IL269042B1 (en) * | 2017-03-09 | 2024-02-01 | Univ Arizona | Head-Mounted Light Field Display with Integral Imaging and Relay Optics |
CA3055545A1 (en) * | 2017-03-09 | 2018-09-13 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Head-mounted light field display with integral imaging and waveguide prism |
US10593718B2 (en) | 2017-03-28 | 2020-03-17 | Mitutoyo Corporation | Surface profiling and imaging system including optical channels providing distance-dependent image offsets |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
KR102611837B1 (en) * | 2017-04-04 | 2023-12-07 | 엔라이트 인크. | Generating optical references for calibrating galvanometer scanners |
EP3389265A1 (en) * | 2017-04-13 | 2018-10-17 | Ultra-D Coöperatief U.A. | Efficient implementation of joint bilateral filter |
EP4220258A1 (en) | 2017-04-19 | 2023-08-02 | Magic Leap, Inc. | Multimodal task execution and text editing for a wearable system |
US9958684B1 (en) | 2017-04-28 | 2018-05-01 | Microsoft Technology Licensing, Llc | Compact display engine with MEMS scanners |
US9971150B1 (en) | 2017-04-28 | 2018-05-15 | Microsoft Technology Licensing, Llc | Compact display engine with MEMS scanners |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
JP6952123B2 (en) * | 2017-05-26 | 2021-10-20 | グーグル エルエルシーGoogle LLC | Near-eye display with extended adjustment range adjustment |
EP3566093A1 (en) * | 2017-05-26 | 2019-11-13 | Google LLC | Near-eye display with extended accommodation range adjustment |
US10613413B1 (en) | 2017-05-31 | 2020-04-07 | Facebook Technologies, Llc | Ultra-wide field-of-view scanning devices for depth sensing |
EP3416371A1 (en) | 2017-06-12 | 2018-12-19 | Thomson Licensing | Method for displaying, on a 2d display device, a content derived from light field data |
EP3416381A1 (en) | 2017-06-12 | 2018-12-19 | Thomson Licensing | Method and apparatus for providing information to a user observing a multi view content |
US10181200B1 (en) | 2017-06-28 | 2019-01-15 | Facebook Technologies, Llc | Circularly polarized illumination and detection for depth sensing |
US10489383B1 (en) * | 2017-06-30 | 2019-11-26 | Quantcast Corporation | Identifying variance in distributed systems |
US10388026B1 (en) * | 2017-07-07 | 2019-08-20 | Facebook Technologies, Llc | Fast scanning large field-of-view devices for depth sensing |
DE102017211932A1 (en) * | 2017-07-12 | 2019-01-17 | Robert Bosch Gmbh | Projection device for data glasses, data glasses and methods for operating a projection device |
US10574973B2 (en) | 2017-09-06 | 2020-02-25 | Facebook Technologies, Llc | Non-mechanical beam steering for depth sensing |
KR102447101B1 (en) * | 2017-09-12 | 2022-09-26 | 삼성전자주식회사 | Image processing method and apparatus for autostereoscopic three dimensional display |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
US10930709B2 (en) | 2017-10-03 | 2021-02-23 | Lockheed Martin Corporation | Stacked transparent pixel structures for image sensors |
US10776995B2 (en) | 2017-10-17 | 2020-09-15 | Nvidia Corporation | Light fields as better backgrounds in rendering |
KR102395792B1 (en) * | 2017-10-18 | 2022-05-11 | 삼성디스플레이 주식회사 | Display device and driving method thereof |
US10510812B2 (en) | 2017-11-09 | 2019-12-17 | Lockheed Martin Corporation | Display-integrated infrared emitter and sensor structures |
US10616562B2 (en) * | 2017-12-21 | 2020-04-07 | X Development Llc | Directional light emitters and electronic displays featuring the same |
JP7311097B2 (en) | 2018-01-14 | 2023-07-19 | ライト フィールド ラボ、インコーポレイテッド | 4D energy field package assembly |
EP3737980A4 (en) | 2018-01-14 | 2021-11-10 | Light Field Lab, Inc. | Systems and methods for transverse energy localization in energy relays using ordered structures |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
US10673414B2 (en) | 2018-02-05 | 2020-06-02 | Tectus Corporation | Adaptive tuning of a contact lens |
US10838250B2 (en) | 2018-02-07 | 2020-11-17 | Lockheed Martin Corporation | Display assemblies with electronically emulated transparency |
US10652529B2 (en) | 2018-02-07 | 2020-05-12 | Lockheed Martin Corporation | In-layer Signal processing |
US11616941B2 (en) | 2018-02-07 | 2023-03-28 | Lockheed Martin Corporation | Direct camera-to-display system |
US10979699B2 (en) | 2018-02-07 | 2021-04-13 | Lockheed Martin Corporation | Plenoptic cellular imaging system |
US10594951B2 (en) | 2018-02-07 | 2020-03-17 | Lockheed Martin Corporation | Distributed multi-aperture camera array |
US10690910B2 (en) | 2018-02-07 | 2020-06-23 | Lockheed Martin Corporation | Plenoptic cellular vision correction |
US10129984B1 (en) | 2018-02-07 | 2018-11-13 | Lockheed Martin Corporation | Three-dimensional electronics distribution by geodesic faceting |
US10951883B2 (en) | 2018-02-07 | 2021-03-16 | Lockheed Martin Corporation | Distributed multi-screen array for high density display |
US10580349B2 (en) | 2018-02-09 | 2020-03-03 | Tectus Corporation | Backplane for eye-mounted display |
KR102561101B1 (en) | 2018-02-19 | 2023-07-28 | 삼성전자주식회사 | Holographic display apparatus providing expanded viewing window |
CN108508281B (en) * | 2018-03-19 | 2020-04-17 | 江苏伏波海洋探测科技有限公司 | Depth conversion method of ship electrostatic field based on point power supply frequency domain method |
JP7185331B2 (en) | 2018-03-22 | 2022-12-07 | アリゾナ ボード オブ リージェンツ オン ビハーフ オブ ザ ユニバーシティ オブ アリゾナ | How to render light field images for integral imaging light field displays |
US10505394B2 (en) | 2018-04-21 | 2019-12-10 | Tectus Corporation | Power generation necklaces that mitigate energy absorption in the human body |
US10838239B2 (en) | 2018-04-30 | 2020-11-17 | Tectus Corporation | Multi-coil field generation in an electronic contact lens system |
US10895762B2 (en) | 2018-04-30 | 2021-01-19 | Tectus Corporation | Multi-coil field generation in an electronic contact lens system |
AU2019262147A1 (en) | 2018-05-02 | 2020-12-10 | Quidient, Llc | A codec for processing scenes of almost unlimited detail |
US10790700B2 (en) | 2018-05-18 | 2020-09-29 | Tectus Corporation | Power generation necklaces with field shaping systems |
US10996769B2 (en) | 2018-06-11 | 2021-05-04 | Tectus Corporation | Contact lens-based eye tracking |
US10867198B2 (en) * | 2018-07-03 | 2020-12-15 | Gingy Technology Inc. | Image capturing device and image capturing module |
TWM570473U (en) * | 2018-07-03 | 2018-11-21 | 金佶科技股份有限公司 | Image capturing module |
CN108921908B (en) * | 2018-07-03 | 2020-07-28 | 百度在线网络技术(北京)有限公司 | Surface light field acquisition method and device and electronic equipment |
US20200018962A1 (en) * | 2018-07-11 | 2020-01-16 | Facebook Technologies, Llc | Adaptive lenses for near-eye displays |
US11137622B2 (en) | 2018-07-15 | 2021-10-05 | Tectus Corporation | Eye-mounted displays including embedded conductive coils |
EP3598730A1 (en) * | 2018-07-19 | 2020-01-22 | Thomson Licensing | Light field capturing system and corresponding calibration device, method and computer program product |
KR20210034585A (en) | 2018-07-25 | 2021-03-30 | 라이트 필드 랩 인코포레이티드 | Amusement park equipment based on light field display system |
US10529107B1 (en) | 2018-09-11 | 2020-01-07 | Tectus Corporation | Projector alignment in a contact lens |
US10979681B2 (en) * | 2018-09-13 | 2021-04-13 | Varjo Technologies Oy | Display apparatus and method of displaying using light source and beam scanning arrangement |
WO2020061714A1 (en) * | 2018-09-28 | 2020-04-02 | Peckham Jordan | Direct projection light field display |
CN109188700B (en) * | 2018-10-30 | 2021-05-11 | BOE Technology Group Co., Ltd. | Optical display system and AR/VR display device |
CN109361794B (en) * | 2018-11-19 | 2021-04-20 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Zoom control method and device of mobile terminal, storage medium and mobile terminal |
US10838232B2 (en) | 2018-11-26 | 2020-11-17 | Tectus Corporation | Eye-mounted displays including embedded solenoids |
CN109212771A (en) * | 2018-11-27 | 2019-01-15 | Shanghai Tianma Micro-Electronics Co., Ltd. | Three-dimensional display device and display method |
US10866413B2 (en) | 2018-12-03 | 2020-12-15 | Lockheed Martin Corporation | Eccentric incident luminance pupil tracking |
US10644543B1 (en) | 2018-12-20 | 2020-05-05 | Tectus Corporation | Eye-mounted display system including a head wearable object |
RU191844U1 (en) * | 2019-01-24 | 2019-08-23 | Andrei Igorevich Bursov | Fusion simulator |
WO2020155092A1 (en) * | 2019-02-01 | 2020-08-06 | BOE Technology Group Co., Ltd. | Apparatus integrated with display panel for tof 3d spatial positioning |
JP7201486B2 (en) * | 2019-03-13 | 2023-01-10 | Hitachi Construction Machinery Co., Ltd. | Working machine |
CN113826402A (en) * | 2019-03-26 | 2021-12-21 | PCMS Holdings, Inc. | System and method for multiplexed rendering of light fields |
US10698201B1 (en) | 2019-04-02 | 2020-06-30 | Lockheed Martin Corporation | Plenoptic cellular axis redirection |
US11428933B2 (en) * | 2019-05-13 | 2022-08-30 | Light Field Lab, Inc. | Light field display system for performance events |
CN110208267B (en) * | 2019-06-21 | 2020-06-02 | Ocean University of China | Marine organism bidirectional optical field in-situ observation method suitable for targets with different transmittances |
CN110275288B (en) * | 2019-06-21 | 2021-06-22 | Wuxi Xunjie Guangyuan Technology Co., Ltd. | Multidirectional MEMS scanning device |
EP3990966A1 (en) * | 2019-07-01 | 2022-05-04 | PCMS Holdings, Inc. | Method and system for continuous calibration of a 3d display based on beam steering |
US10944290B2 (en) | 2019-08-02 | 2021-03-09 | Tectus Corporation | Headgear providing inductive coupling to a contact lens |
EP4010756A4 (en) | 2019-08-09 | 2023-09-20 | Light Field Lab, Inc. | Light field display system based digital signage system |
US11493836B2 (en) | 2019-08-14 | 2022-11-08 | Avalon Holographics Inc. | Light field projector device |
AU2019464886A1 (en) | 2019-09-03 | 2022-03-24 | Light Field Lab, Inc. | Light field display system for gaming environments |
CN113225471B (en) * | 2019-10-14 | 2023-01-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Camera module and terminal equipment |
WO2021076424A1 (en) | 2019-10-15 | 2021-04-22 | PCMS Holdings, Inc. | Method for projecting an expanded virtual image with a small light field display |
CN114761095A (en) | 2019-12-03 | 2022-07-15 | 光场实验室公司 | Light field display system for video games and electronic competitions |
KR20210096449A (en) * | 2020-01-28 | 2021-08-05 | Samsung Electronics Co., Ltd. | Method of displaying an image on a HUD system, and HUD system |
US11579520B2 (en) | 2020-05-05 | 2023-02-14 | The University Of British Columbia | Light field display |
CN111610634B (en) * | 2020-06-23 | 2022-05-27 | BOE Technology Group Co., Ltd. | Display system based on four-dimensional light field and display method thereof |
CN111736420A (en) * | 2020-06-28 | 2020-10-02 | AAC Communication Technologies (Changzhou) Co., Ltd. | Three-dimensional imaging module and manufacturing method thereof |
US11900521B2 (en) | 2020-08-17 | 2024-02-13 | LiquidView Corp | Virtual window apparatus and system |
CN111781734B (en) * | 2020-08-30 | 2023-08-15 | Chengdu Aeronautic Polytechnic | Dual-view 3D display device and method based on dual display screens |
CN114167605A (en) | 2020-09-11 | 2022-03-11 | Coretronic Corporation | Near-to-eye light field display device |
WO2022063728A1 (en) * | 2020-09-25 | 2022-03-31 | Interdigital Ce Patent Holdings, Sas | Light field display |
CN112445006A (en) * | 2020-09-28 | 2021-03-05 | Nanjing Institute of Technology | Irregular diamond lens array capable of realizing 2D/3D on-screen display and preparation method |
CN112967369A (en) * | 2021-04-20 | 2021-06-15 | Beijing SkyGuard Network Security Technology Co., Ltd. | Light ray display method and device |
US11961240B2 (en) | 2021-06-11 | 2024-04-16 | Mechanical Solutions Inc. | Systems and methods for improved observation and detection using time video synchronization and synchronous time averaging |
CN216990332U (en) * | 2021-06-15 | 2022-07-19 | Suzhou Chuangxin Laser Technology Co., Ltd. | Laser processing device |
US11967335B2 (en) | 2021-09-03 | 2024-04-23 | Google Llc | Foveated beamforming for augmented reality devices and wearables |
CN114078153B (en) * | 2021-11-18 | 2022-06-14 | Tsinghua University | Light field coding camera shooting method and device for scattering scene |
KR102421512B1 (en) * | 2022-03-14 | 2022-07-15 | Jung Hyun In | Portable stereoscopic cameras and systems |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1269103A (en) * | 1997-06-28 | 2000-10-04 | UK Ministry of Defence | Autostereoscopic display |
CN101518096A (en) * | 2006-09-20 | 2009-08-26 | Apple Inc. | Three-dimensional display system |
Family Cites Families (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5488862A (en) * | 1993-10-18 | 1996-02-06 | Armand P. Neukermans | Monolithic silicon rate-gyro with integrated sensors |
US5629790A (en) * | 1993-10-18 | 1997-05-13 | Neukermans; Armand P. | Micromachined torsional scanner |
US5535047A (en) | 1995-04-18 | 1996-07-09 | Texas Instruments Incorporated | Active yoke hidden hinge digital micromirror device |
JP3787939B2 (en) * | 1997-02-27 | 2006-06-21 | Konica Minolta Holdings, Inc. | 3D image display device |
US6201629B1 (en) | 1997-08-27 | 2001-03-13 | Microoptical Corporation | Torsional micro-mechanical mirror system |
US6140979A (en) | 1998-08-05 | 2000-10-31 | Microvision, Inc. | Scanned display with pinch, timing, and distortion correction |
US6317169B1 (en) * | 1999-04-28 | 2001-11-13 | Intel Corporation | Mechanically oscillated projection display |
US6525310B2 (en) * | 1999-08-05 | 2003-02-25 | Microvision, Inc. | Frequency tunable resonant scanner |
CN1214268C (en) | 2000-05-19 | 2005-08-10 | Tibor Balogh | Method and apparatus for displaying 3D images |
US7072086B2 (en) * | 2001-10-19 | 2006-07-04 | Batchko Robert G | Digital focus lens system |
US7170480B2 (en) * | 2000-11-01 | 2007-01-30 | Visioneered Image Systems, Inc. | Video display apparatus |
US20020117605A1 (en) * | 2001-01-08 | 2002-08-29 | Alden Ray M. | Three-dimensional receiving and displaying process and apparatus with military application |
WO2002073955A1 (en) * | 2001-03-13 | 2002-09-19 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, studio apparatus, storage medium, and program |
US7034939B2 (en) * | 2002-03-11 | 2006-04-25 | Spectir Corporation | Calibration system and method for calibration of various types of polarimeters |
US6803938B2 (en) * | 2002-05-07 | 2004-10-12 | Texas Instruments Incorporated | Dynamic laser printer scanning alignment using a torsional hinge mirror |
US7071594B1 (en) * | 2002-11-04 | 2006-07-04 | Microvision, Inc. | MEMS scanner with dual magnetic and capacitive drive |
JP4003129B2 (en) | 2003-01-22 | 2007-11-07 | Sony Corporation | Stereoscopic imaging device, stereoscopic display device, stereoscopic imaging display device, and information recording method |
US20060192869A1 (en) | 2005-02-28 | 2006-08-31 | Kazutora Yoshino | Multi-dimensional input, transfer, optional memory, and output method |
US7408145B2 (en) * | 2003-09-23 | 2008-08-05 | Kyle Holland | Light sensing instrument with modulated polychromatic source |
SG155167A1 (en) * | 2004-08-03 | 2009-09-30 | Silverbrook Res Pty Ltd | Walk-up printing |
EP1784988A1 (en) * | 2004-08-06 | 2007-05-16 | University of Washington | Variable fixation viewing distance scanned light displays |
US7535607B2 (en) | 2005-05-06 | 2009-05-19 | Seereal Technologies S.A. | Device for holographic reconstruction of three-dimensional scenes |
JP2007086145A (en) * | 2005-09-20 | 2007-04-05 | Sony Corp | Three-dimensional display |
US7312917B2 (en) * | 2005-09-23 | 2007-12-25 | Hewlett-Packard Development Company, L.P. | Variable focal length electro-optic lens |
US7916934B2 (en) * | 2006-04-04 | 2011-03-29 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for acquiring, encoding, decoding and displaying 3D light fields |
US7620309B2 (en) | 2006-04-04 | 2009-11-17 | Adobe Systems, Incorporated | Plenoptic camera |
US7513623B2 (en) * | 2006-04-17 | 2009-04-07 | Third Dimension Ip Llc | System and methods for angular slice true 3-D display |
US20110149018A1 (en) * | 2006-10-26 | 2011-06-23 | Seereal Technologies S.A. | Holographic display device comprising magneto-optical spatial light modulator |
US20080239061A1 (en) * | 2007-03-30 | 2008-10-02 | Cok Ronald S | First portable communication device |
US20090015956A1 (en) * | 2007-07-10 | 2009-01-15 | Motorola, Inc. | Method and Apparatus for Coupling a Microelectromechanical System Scanner Mirror to a Frame |
GB0716776D0 (en) * | 2007-08-29 | 2007-10-10 | Setred As | Rendering improvement for 3D display |
JP5216761B2 (en) * | 2007-09-26 | 2013-06-19 | Panasonic Corporation | Beam scanning display device |
US7962033B2 (en) * | 2008-01-23 | 2011-06-14 | Adobe Systems Incorporated | Methods and apparatus for full-resolution light-field capture and rendering |
US20110001804A1 (en) | 2008-05-06 | 2011-01-06 | Microvision, Inc. | Apparatus for Displaying 3D Images |
US20100098323A1 (en) * | 2008-07-18 | 2010-04-22 | Agrawal Amit K | Method and Apparatus for Determining 3D Shapes of Objects |
US8704822B2 (en) * | 2008-12-17 | 2014-04-22 | Microsoft Corporation | Volumetric display system enabling user interaction |
US8416289B2 (en) * | 2009-04-28 | 2013-04-09 | Microsoft Corporation | Light-field display |
US7978407B1 (en) | 2009-06-27 | 2011-07-12 | Holovisions LLC | Holovision (TM) 3D imaging with rotating light-emitting members |
US8218014B2 (en) * | 2009-07-30 | 2012-07-10 | Microvision, Inc. | Electromagnetic scanner having variable coil width |
EP2470965A4 (en) * | 2009-08-25 | 2013-02-27 | LG Electronics Inc. | An apparatus and a method for reconstructing a hologram |
WO2011071958A2 (en) * | 2009-12-07 | 2011-06-16 | William Marsh Rice University | Apparatus and method for compressive imaging and sensing through multiplexed modulation |
EP2398235A3 (en) | 2010-02-08 | 2013-08-21 | Bronstein Bronstein Kimmel Technologies Ltd. | Imaging and projection devices and methods |
US8072664B1 (en) | 2010-05-26 | 2011-12-06 | Hong Kong Applied Science & Technology Research Institute, Ltd. | Biaxial scanning mirror having resonant frequency adjustment |
US9007444B2 (en) * | 2010-10-07 | 2015-04-14 | Massachusetts Institute Of Technology | Array directed light-field display for autostereoscopic viewing |
KR101670927B1 (en) * | 2010-11-05 | 2016-11-01 | Samsung Electronics Co., Ltd. | Display apparatus and method |
US8628193B2 (en) * | 2010-11-20 | 2014-01-14 | Yibin TIAN | Automatic accommodative spectacles using a scene analyzer and focusing elements |
KR101675961B1 (en) * | 2010-12-29 | 2016-11-14 | Samsung Electronics Co., Ltd. | Apparatus and Method for Rendering Subpixel Adaptively |
KR101807691B1 (en) * | 2011-01-11 | 2017-12-12 | Samsung Electronics Co., Ltd. | Three-dimensional image display apparatus |
US9641823B2 (en) * | 2011-01-31 | 2017-05-02 | Hewlett-Packard Development Company, L.P. | Embedded light field display architecture to process and display three-dimensional light field data |
US20120200829A1 (en) * | 2011-02-09 | 2012-08-09 | Alexander Bronstein | Imaging and projecting devices and methods |
US8817379B2 (en) * | 2011-07-12 | 2014-08-26 | Google Inc. | Whole image scanning mirror display system |
US8651678B2 (en) * | 2011-11-29 | 2014-02-18 | Massachusetts Institute Of Technology | Polarization fields for dynamic light field display |
US8848006B2 (en) * | 2012-01-25 | 2014-09-30 | Massachusetts Institute Of Technology | Tensor displays |
US8690321B2 (en) * | 2012-04-21 | 2014-04-08 | Paul Lapstun | Fixation-based control of electroactive spectacles |
US9179126B2 (en) * | 2012-06-01 | 2015-11-03 | Ostendo Technologies, Inc. | Spatio-temporal light field cameras |
US8581929B1 (en) * | 2012-06-05 | 2013-11-12 | Francis J. Maguire, Jr. | Display of light field image data using a spatial light modulator at a focal length corresponding to a selected focus depth |
2012
- 2012-08-04 US US13/567,010 patent/US8754829B2/en active Active

2013
- 2013-08-05 WO PCT/IB2013/056412 patent/WO2014024121A1/en active Application Filing
- 2013-08-05 CN CN201380052016.0A patent/CN104704821B/en active Active
- 2013-08-05 EP EP13827401.4A patent/EP2880864A4/en not_active Withdrawn
- 2013-08-05 AU AU2013301200A patent/AU2013301200B2/en active Active

2014
- 2014-05-02 US US14/269,071 patent/US9456116B2/en active Active
- 2014-05-24 US US14/286,986 patent/US8933862B2/en active Active
- 2014-06-16 US US14/305,044 patent/US9965982B2/en active Active
- 2014-11-25 US US14/553,148 patent/US20150116528A1/en not_active Abandoned
- 2014-11-25 US US14/553,372 patent/US20150319344A1/en not_active Abandoned

2015
- 2015-01-25 US US14/604,750 patent/US10311768B2/en active Active
- 2015-02-14 US US14/622,828 patent/US20150319355A1/en not_active Abandoned
- 2015-02-14 US US14/622,829 patent/US20150319430A1/en not_active Abandoned

2016
- 2016-09-17 US US15/268,539 patent/US10008141B2/en active Active

2019
- 2019-05-05 US US16/403,586 patent/US10909892B2/en active Active
- 2019-05-05 US US16/403,590 patent/US10733924B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US9456116B2 (en) | 2016-09-27 |
US8933862B2 (en) | 2015-01-13 |
US10733924B2 (en) | 2020-08-04 |
US8754829B2 (en) | 2014-06-17 |
EP2880864A4 (en) | 2016-06-08 |
US20150319429A1 (en) | 2015-11-05 |
US20150319430A1 (en) | 2015-11-05 |
US20150319344A1 (en) | 2015-11-05 |
US20150116528A1 (en) | 2015-04-30 |
US20190259319A1 (en) | 2019-08-22 |
US20150319355A1 (en) | 2015-11-05 |
AU2013301200A1 (en) | 2015-03-19 |
US10311768B2 (en) | 2019-06-04 |
US20140292620A1 (en) | 2014-10-02 |
US20190259320A1 (en) | 2019-08-22 |
US10909892B2 (en) | 2021-02-02 |
US20140240809A1 (en) | 2014-08-28 |
CN104704821A (en) | 2015-06-10 |
EP2880864A1 (en) | 2015-06-10 |
AU2013301200B2 (en) | 2017-09-14 |
WO2014024121A1 (en) | 2014-02-13 |
US20170004750A1 (en) | 2017-01-05 |
US9965982B2 (en) | 2018-05-08 |
US20140253993A1 (en) | 2014-09-11 |
US20140035959A1 (en) | 2014-02-06 |
US10008141B2 (en) | 2018-06-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104704821B (en) | Scan two-way light-field camera and display | |
US11700364B2 (en) | Light field display | |
US10948712B2 (en) | Augmented reality light field display | |
US10432919B2 (en) | Shuttered waveguide light field display | |
US9841563B2 (en) | Shuttered waveguide light field display | |
US9860522B2 (en) | Head-mounted light field display | |
CN105934902A (en) | Virtual and augmented reality systems and methods | |
US20230045982A1 (en) | Shuttered Light Field Display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20210519
Address after: New South Wales, Australia
Patentee after: Convergence Research Pte. Ltd.
Address before: New South Wales, Australia
Patentee before: Paul Lapstun