WO2015100301A1 - 3-d light field camera and photography method - Google Patents

3-D light field camera and photography method

Info

Publication number
WO2015100301A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
lens array
cylindrical lens
cylindrical
Prior art date
Application number
PCT/US2014/072099
Other languages
French (fr)
Inventor
Jingyi Yu
Xinqing Guo
Zhan Yu
Original Assignee
Jingyi Yu
Xinqing Guo
Zhan Yu
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingyi Yu, Xinqing Guo, Zhan Yu
Priority to US15/107,661 (US10397545B2)
Priority to CN201480073939.9A (CN106170822B)
Priority to EP14874923.7A (EP3087554A4)
Priority to AU2014370001A (AU2014370001A1)
Priority to KR1020167019217A (KR102303389B1)
Priority to JP2016543029A (JP2017510831A)
Publication of WO2015100301A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/232Image signal generators using stereoscopic image cameras using a single 2D image sensor using fly-eye lenses, e.g. arrangements of circular lenses
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B3/00Simple or compound lenses
    • G02B3/0006Arrays
    • G02B3/0037Arrays characterized by the distribution or form of lenses
    • G02B3/005Arrays characterized by the distribution or form of lenses arranged along a single direction only, e.g. lenticular sheets
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/21Indexing scheme for image data processing or generation, in general involving computational photography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/28Indexing scheme for image data processing or generation, in general involving image processing hardware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Definitions

  • FIG. 4A is an example diagram illustrating focusing of cone of rays 404 by cylindrical lens 212; FIG. 4B is a diagram illustrating optical sorting of sheet (i.e., fan) of rays 222 onto imaging sensor 214.
  • In a conventional camera, each pixel integrates many rays across the aperture, which results in high spatial resolution but very low angular resolution.
  • In contrast, 3D LF camera 102 is capable of resolving ray directions in one dimension, while maintaining high spatial resolution in the other. Specifically, cone of rays 220 emitted from object 402 is converged and partially blocked by slit mask 204 on main lens 208, becoming sheet of rays 222. Rays 222 may be optically sorted by direction via cylindrical lens array 210, to form sorted rays 412. Sorted rays 412 from cylindrical lens array 210 are then directed onto pixels (not shown) of imaging sensor 214.
  • Cylindrical lens 212 is configured such that rays 406 converge in one direction, leaving the other direction unaltered. Therefore, incoming light 404 from object 402 is focused by cylindrical lens 212 into line 408 (a compact paraxial statement of this behavior is given below).
  • Cylindrical lens array 210 provides angular information in one direction, by converging rays in that direction, while keeping high spatial resolution along the other direction (i.e., along the direction of cylindrical axis 410). If cylindrical lens array 210 were replaced with a microlens array, however, slit aperture 206 would result in either overlapping lens images or wasted resolution.
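In paraxial terms (standard ray-transfer-matrix optics, not taken from this application), a thin cylindrical lens of focal length $f$ acts only on the $x$-component of a ray and leaves the $y$-component unchanged:

$\begin{pmatrix} x' \\ \theta_x' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix} \begin{pmatrix} x \\ \theta_x \end{pmatrix}, \qquad \begin{pmatrix} y' \\ \theta_y' \end{pmatrix} = \begin{pmatrix} y \\ \theta_y \end{pmatrix}$

so a point source is focused to a line parallel to cylindrical axis 410 (line 408 in FIG. 4A), rather than to a point.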
  • Referring to FIG. 6A, an example method for rendering a refocused image from 3D LF image 128 (FIG. 1) is shown. Some of the steps illustrated in FIG. 6A may be performed by rendering module 106 (FIG. 1) from 3D LF image 128 captured by 3D LF camera 102. The steps illustrated in FIG. 6A represent an example embodiment of the present invention. It is understood that certain steps may be performed in an order different from what is shown. Although FIG. 6A illustrates rendering a single refocused image, the method shown in FIG. 6A may also be applied to a plurality of captured 3D LF images of the scene.
  • In step 600, a 3D LF image of a reference scene is captured, for example, via 3D LF camera 102 (FIG. 1). In step 602, a lens center of each cylindrical lens 212 of cylindrical lens array 210 is located, based on the captured image of the reference scene (step 600). The located lens centers may be stored in storage 108 (FIG. 1).
  • As described above, 3D LF camera 102 may generate images 128 with parallax. However, the exact placement of cylindrical lens array 210 is unknown, and the baseline between cylindrical lenses may be a non-integer multiple of the pixel pitch. Therefore, to locate the image centers of lenses 212, an image of a white scene is captured in step 600, and, in step 602, the brightest line along each lenslet image is taken to approximate the center of each cylindrical lens. Here, a lenslet image refers to the image formed by the pixels lying directly beneath a cylindrical lenslet 212 (FIG. 2B). A minimal sketch of this calibration is given below.
  • Steps 600-602 may be performed to calibrate 3D LF camera 102, for example, prior to its first use in system 100. Thus, in some examples, steps 600-602 may be performed once and the results stored in storage 108 (FIG. 1). Steps 604-612 may then be performed using the stored lens centers, without repeating steps 600-602 after calibration of 3D LF camera 102.
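The following is a minimal calibration sketch (an illustration only; the function name, the grayscale/array conventions and the pitch parameter are assumptions, not from this application). It averages the white-scene capture over rows and takes the brightest column within each lenslet-pitch window:

```python
import numpy as np

def locate_lens_centers(white_img, approx_pitch_px):
    """Approximate each cylindrical lenslet's center column (steps 600-602).
    white_img: 2D grayscale array of a white scene; approx_pitch_px: rough
    lenslet pitch in pixels (may be a non-integer)."""
    profile = white_img.mean(axis=0)      # average over rows; lenslets are vertical stripes
    centers = []
    start = 0.0
    while int(start + approx_pitch_px) <= profile.size:
        lo, hi = int(start), int(start + approx_pitch_px)
        centers.append(lo + int(np.argmax(profile[lo:hi])))   # brightest line ~ lens center
        start += approx_pitch_px          # float step handles non-integer pitch
    return np.asarray(centers)
```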
  • In step 604, a 3D LF image 128 of a desired scene is captured, for example, via 3D LF camera 102 (FIG. 1). The captured 3D LF image 128 may be stored in storage 108 (FIG. 1).
  • In step 606, the captured 3D LF image (step 604) is reassembled into a set of sub-aperture images (e.g., the vertical segments of the images shown in FIGS. 5A and 5B). First, the LF image is split (i.e., separated) into lenslet images. Then, pixels in the lenslet images are reassembled into sub-aperture images: an identical column (e.g., column 5) may be selected in all lenslet images and stitched together to form a sub-aperture image. Different choices of columns correspond to different sub-aperture images. If all lenslet images are captured by cylindrical lenses 212 (FIG. 2B) of identical width, they have the same number of columns; if each lenslet image has 8 columns, 8 different sub-aperture images may be synthesized. A minimal sketch of this reassembly is given below.
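A minimal reassembly sketch (illustrative only; it assumes the calibrated centers are integer column indices away from the image border, and keeps `half_width` columns on each side of a center):

```python
import numpy as np

def to_subaperture_images(raw_lf, centers, half_width):
    """Reassemble a raw 3D LF image into sub-aperture images (step 606): the
    k-th column under every lenslet, stitched across lenslets, forms the k-th
    sub-aperture image."""
    subs = []
    for k in range(-half_width, half_width + 1):
        cols = [raw_lf[:, int(c) + k] for c in centers]   # same column offset in each lenslet image
        subs.append(np.stack(cols, axis=1))               # stitch columns into one view
    return subs
```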
  • In step 608, a focus depth is selected, for example, via user interface 112 (FIG. 1).
  • In step 610, each sub-aperture image is shifted to the selected focus depth, via rendering module 106, based on the located lens centers (step 602), according to a ray-tracing algorithm. Referring to FIG. 7, let $F$ be the separation between lens 208 (FIG. 2B) and the film (i.e., imaging sensor 214); let $E_F(x,y)$ be the irradiance at position $(x,y)$ on the film; let $L_F$ be the light field parameterized by lens plane $uv$ and film plane $xy$; and let $\theta$ be the angle between ray $L_F(x,y,u,v)$ and the image plane normal. The irradiance on the film is

    $E_F(x,y) = \frac{1}{F^2} \iint L_F(x,y,u,v)\,\cos^4\theta\, du\, dv$ (1)

    Absorbing the falloff term into the light field gives

    $\bar{L}_F(x,y,u,v) = L_F(x,y,u,v)\,\cos^4\theta$ (2)

    and refocusing at a new film depth $F' = \alpha F$ gives

    $E_{\alpha F}(x,y) = \frac{1}{\alpha^2 F^2} \iint \bar{L}_F\left(u + \frac{x-u}{\alpha},\, v + \frac{y-v}{\alpha},\, u,\, v\right) du\, dv$ (3)

  • Because each object emits sheet of rays 222 (FIG. 2C) after being filtered by slit mask 204 on main lens 208, the following approximation may be used: $\bar{L}_F(x,y,u,v) \approx \bar{L}_F(x,y,u,0)$, i.e., the aperture reduces to the line $v = 0$. With this approximation, equation (3) may be re-written as the one-dimensional integral

    $E_{\alpha F}(x,y) \approx \frac{1}{\alpha^2 F^2} \int \bar{L}_F\left(u + \frac{x-u}{\alpha},\, \frac{y}{\alpha},\, u,\, 0\right) du$ (4)

  • Rays may be traced through the center of each lens (located in step 602) and used to render the refocused image. In equation (4), $\bar{L}_F$ corresponds to the sub-aperture images, and the integral can be interpreted as adding transformed sub-aperture images.
  • In step 612, the shifted sub-aperture images are combined to form refocused (rendered) image 130 (FIG. 1), via rendering module 106.
  • Steps 610 and 612 may be performed via a shift-and-add algorithm: a specific shift amount (corresponding to $\alpha$ in equation (4)) is selected; each sub-aperture image is horizontally shifted according to its position; and all resulting shifted images are blended together with normalized coefficients (as in equation (4)). The result corresponds to a pseudo 2D refocused image. A minimal sketch of this shift-and-add step is given below.
  • In some examples, a non-transitory computer readable medium may store computer readable instructions for machine execution of steps 602 and 606-612.
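A minimal shift-and-add sketch (illustrative only; `shift_px` plays the role of $\alpha$ in equation (4), and `np.roll` wraps at the border, so a real implementation would crop the wrapped margin):

```python
import numpy as np

def refocus(subs, shift_px):
    """Shift-and-add refocusing (steps 610-612): shift each sub-aperture image
    horizontally in proportion to its view index, then blend with normalized
    (uniform) weights to form a pseudo 2D refocused image."""
    n = len(subs)
    acc = np.zeros(subs[0].shape, dtype=np.float64)
    for i, view in enumerate(subs):
        acc += np.roll(view, int(round((i - n // 2) * shift_px)), axis=1)
    return acc / n
```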
  • Referring to FIG. 6B, an example method for rendering an image from raw 3D LF image 128 (FIG. 1) is shown. Some of the steps illustrated in FIG. 6B may be performed by rendering module 106 (FIG. 1) from 3D LF image 128 captured by 3D LF camera 102. The steps illustrated in FIG. 6B represent an example embodiment of the present invention. It is understood that certain steps may be performed in an order different from what is shown. Although FIG. 6B illustrates rendering a single image, the method shown in FIG. 6B may also be applied to a plurality of captured 3D LF images of the scene.
  • First, steps 604-606 (FIG. 6A) are repeated to form a set of sub-aperture images, and a viewpoint for the image is selected, for example, via user interface 112 (FIG. 1).
  • In steps 624-626, different weights are assigned to different sub-aperture images, for example, via rendering module 106: higher weight(s) may be assigned to sub-aperture image(s) closer to the selected (synthetic) viewpoint, and lower weight(s) may be assigned to sub-aperture images in the set that are farther away from the selected viewpoint.
  • In step 628, rendering module 106 may apply a shift-and-add algorithm to the weighted sub-aperture images (steps 624-626) to form perspective (rendered) image 130 (FIG. 1). The same shift-and-add algorithm described above in steps 610 and 612 (FIG. 6A) may be applied to generate a synthetically defocused image, except that, in step 628, a different weight scheme is used when adding (i.e., combining) all of the views.
  • In step 630, rendering module 106 may generate a stereoscopic view image from the perspective image (step 628), or from raw 3D LF image 128 or the refocused image of step 612 (FIG. 6A), for example, via a red-cyan anaglyph. The stereoscopic (rendered) image may include two images superimposed in different colors (such as red and cyan or other chromatically opposite colors), producing a stereo effect when the image is viewed through correspondingly colored filters. Anaglyph images contain two differently filtered colored images, one for each eye; when viewed through color-coded anaglyph glasses, each of the two images reaches the respective eye, revealing an integrated stereoscopic image, and the visual cortex of the brain fuses this into the perception of a three-dimensional scene or composition. A minimal sketch of the weighted blend and anaglyph steps is given below.
  • In some examples, a non-transitory computer readable medium may store computer readable instructions for machine execution of steps 624-630.
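A minimal sketch of the weighted blend and anaglyph steps (the Gaussian weighting profile and all names are assumptions; the source only specifies higher weights for views near the selected viewpoint):

```python
import numpy as np

def perspective_view(subs, viewpoint, sigma, shift_px):
    """Weighted shift-and-add (steps 624-628): views near `viewpoint` (a
    fractional index into subs) receive higher weight."""
    n = len(subs)
    w = np.exp(-0.5 * ((np.arange(n) - viewpoint) / sigma) ** 2)
    w /= w.sum()                                   # normalized coefficients
    acc = np.zeros(subs[0].shape, dtype=np.float64)
    for i, view in enumerate(subs):
        acc += w[i] * np.roll(view, int(round((i - n // 2) * shift_px)), axis=1)
    return acc

def red_cyan_anaglyph(left, right):
    """Step 630 sketch: red channel from the left view, green/blue from the right."""
    return np.stack([left, right, right], axis=-1)
```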
  • Referring to FIG. 6C, an example method for forming 3D photograph 116 is shown. First, a raw 3D LF image of a scene is captured, for example, via 3D LF camera 102. Next, the captured (raw) 3D LF image is printed, for example, by printer 114, forming printed image 122. Printed image 122 is then disposed on cylindrical lens array 124 to form 3D photograph 116.
  • In some examples, printed image 122 may be permanently disposed on cylindrical lens array 124, for example, via an adhesive. In other examples, printed image 122 may be detachably disposed on cylindrical lens array 124; for example, printed image 122 and lens array 124 may be disposed in a housing (such as a frame) configured to (detachably) fix printed image 122 to lens array 124.
  • Additional processes, such as image super-resolution and/or denoising, can be applied to the raw 3D LF image prior to forming the 3D photograph (such as via rendering module 106 shown in FIG. 1), to improve the quality of the final result.
  • Example 3D LF camera 102 may be configured to record a 3D light field 128 of a scene. Specifically, 3D LF camera 102 preserves a high spatial resolution along the cylindrical lens direction while obtaining angular information in the other direction. With the captured light field, system 100 is able to recover and render a 3D representation of the scene that can be visualized with or without 3D glasses. Because 3D LF camera 102 uses a general purpose camera 120, it can be applied to a variety of applications. A conventional 2D image has a fixed viewpoint and lacks depth perception. In contrast, system 100 enables a solid stereoscopic view of an object to be perceived at different viewpoints.
  • Referring to FIGS. 5A, 5B, 8A-8D, 9A and 9B, example data captured of real scenes using example 3D LF camera 102 (FIG. 1), and images rendered using system 100, are described. FIGS. 5A and 5B are example raw 3D light field images captured for two different scenes; FIGS. 8A-8D are example rendered refocused 3D images of the raw images shown in respective FIGS. 5A and 5B for different focus depths; and FIGS. 9A and 9B are example rendered 3D images illustrating stereoscopic views of the images shown in respective FIGS. 8B and 8D with a different perspective.
  • In this example, cylindrical lens array 210 includes 40 cylindrical lenses 212; the pitch of each lens 212 in array 210 is 0.25 mm, and the focal length is 1.6 mm; and the size of cylindrical lens array 210 is 10 mm x 10 mm, which amounts to an effective resolution of 2000 x 2000.
  • To render the refocused images, the sub-aperture images are first generated by taking the same stripe of pixels underneath each cylindrical lens. Then a refocus plane is selected and the shift-and-add refocusing algorithm is applied to the light field image to render the refocused images. (See FIG. 6A.)
  • FIGS. 9A and 9B show that system 100 is able to render objects from different perspectives, and also demonstrate rendered images in a stereoscopic view (e.g., by using a red-cyan anaglyph). To render different perspectives, a higher weight may be assigned to the sub-aperture image with the desired viewpoint and a lower weight to the other sub-aperture images; a shift-and-add algorithm may then be applied to render the images. (See FIG. 6B.)
  • In some examples, one or more steps and/or components may be implemented in software for use with microprocessors/general purpose computers (not shown). For example, one or more of the functions of the various components and/or steps described above may be implemented in software that controls a computer. The software may be embodied in non-transitory tangible computer readable media (such as, by way of non-limiting example, a magnetic disk, optical disk, hard drive, etc.) for execution by the computer.
  • For example, one or more of devices 104, 106, 110, 112 and 114 shown in FIG. 1 may perform certain operations using dedicated circuitry and/or using software contained in a computer-readable medium 108 coupled to controller 104. The software instructions may cause controller 104 and/or rendering module 106 to perform one or more processes described herein. Alternatively, hardwired circuitry may be used in place of, or in combination with, software instructions to implement processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)

Abstract

Methods and systems for generating three-dimensional (3D) images, 3D light field (LF) cameras and 3D photographs are provided. Light representing a scene is directed through a lens module coupled to an imaging sensor. The lens module includes a surface having a slit-shaped aperture and a cylindrical lens array positioned along an optical axis of the imaging sensor. A longitudinal direction of the slit-shaped aperture is arranged orthogonal to a cylindrical axis of the cylindrical lens array. The light directed through the lens module is captured by the imaging sensor to form a 3D LF image. A 3D photograph includes a 3D LF printed image of the scene and a cylindrical lens array disposed on the printed image, such that the combination of the 3D LF printed image and the cylindrical lens array forms a 3D stereoscopic image.

Description

3-D LIGHT FIELD CAMERA AND PHOTOGRAPHY METHOD
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is related to and claims the benefit of U.S. Provisional Application No. 61/920,074 entitled 3-D LIGHT FIELD CAMERA AND PHOTOGRAPHY METHOD, filed on December 23, 2013, and U.S. Provisional Application No. 61/931,051 entitled 3-D LIGHT FIELD CAMERA AND PHOTOGRAPHY METHOD, filed on January 24, 2014, the contents of which are incorporated herein by reference.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
[0002] The present invention was supported in part by Grant Number 0845268 from the National Science Foundation. The United States Government may have certain rights to the invention.
FIELD OF THE INVENTION
[0003] The present invention is related to three-dimensional (3D) imaging, and in particular to 3D light-field cameras, methods and systems for capturing and presenting 3D images.
BACKGROUND
[0004] There is an emerging interest in developing a light-field (LF) camera, also called a plenoptic camera. An LF camera uses a microlens array to capture four- dimensional (4D) light field information about the scene. Such light field information may be used to improve the resolution of computer graphics and computer vision applications.
SUMMARY OF THE INVENTION
[0005] Aspects of the present invention relate to a method of generating an image of a scene. Light representing the scene is directed through a lens module coupled to an imaging sensor. The lens module includes a surface having a slit-shaped aperture and a cylindrical lens array positioned along an optical axis of the imaging sensor. A longitudinal direction of the slit-shaped aperture is arranged orthogonal to a cylindrical axis of the cylindrical lens array. Light directed through the lens module is captured by the imaging sensor to form a 3D LF image.
[0006] Aspects of the present invention also relate to a 3D LF camera. The 3D
LF camera includes a surface having a slit-shaped aperture mounted on a lens, an imaging sensor and a cylindrical lens array disposed between the imaging sensor and the lens. The cylindrical lens array is arranged along an optical axis of the imaging sensor. A longitudinal direction of the slit-shaped aperture is arranged orthogonal to a cylindrical axis of the cylindrical lens array. The imaging sensor is configured to capture at least one 3D LF image of a scene.
[0007] Aspects of the present invention also relate to a 3D photograph. The 3D photograph includes a 3D light field printed image of a scene and a cylindrical lens array disposed on the 3D light field printed image. The combination of the 3D light field printed image and the cylindrical lens array forms a 3D stereoscopic image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The invention may be understood from the following detailed description when read in connection with the accompanying drawings. It is emphasized that, according to common practice, various features/elements of the drawings may not be drawn to scale. On the contrary, the dimensions of the various features/elements may be arbitrarily expanded or reduced for clarity. Moreover, in the drawings, common numerical references are used to represent like features/elements. The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawings will be provided by the Office upon request and payment of the necessary fee. Included in the drawing are the following figures:
[0009] FIG. 1 is a functional block diagram of an example system for capturing and presenting 3D images, according to an aspect of the present invention;
[0010] FIG. 2A is a perspective view diagram of an example 3D LF camera of the system shown in FIG. 1, according to an aspect of the present invention;
[0011] FIG. 2B is an exploded perspective view diagram of the example 3D LF camera shown in FIG. 2A, illustrating components of a 3D LF lens module and camera, according to an aspect of the present invention;
[0012] FIG. 2C is an exploded perspective view diagram of the example 3D LF camera shown in FIG. 2B, illustrating light ray geometry for a 3D LF camera, according to an aspect of the present invention;
[0013] FIG. 3 is an exploded perspective view diagram of an example 3D photograph of the system shown in FIG. 1, according to an aspect of the present invention;
[0014] FIG. 4A is an example diagram illustrating focusing of a cone of rays by a cylindrical lens, according to an aspect of the present invention;
[0015] FIG. 4B is a diagram illustrating optical sorting of a sheet of rays onto an imaging sensor by an example 3D LF camera shown in FIG. 1, according to an aspect of the present invention;
[0016] FIGS. 5A and 5B are example raw 3D light field images captured by an example 3D LF camera shown in FIG. 1, according to an aspect of the present invention;
[0017] FIG. 6A is a flow chart diagram illustrating an example method for rendering a refocused image, according to an aspect of the present invention;
[0018] FIG. 6B is a flow chart diagram illustrating an example method for rendering an image, according to another aspect of the present invention;
[0019] FIG. 6C is a flow chart diagram illustrating an example method for forming a 3D photograph, according to an aspect of the present invention;
[0020] FIG. 7 is an example diagram illustrating ray re-parameterization for rendering a refocused 3D image, according to an aspect of the present invention;
[0021] FIGS. 8A, 8B, 8C and 8D are example rendered 3D images for various focus depths, according to an aspect of the present invention; and
[0022] FIGS. 9A and 9B are example rendered 3D images illustrating stereoscopic views of the images shown in respective FIGS. 8B and 8D with different perspective, according to an aspect of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Current light field cameras suffer from poor resolution. For example, light field cameras with a 10 megapixel sensor only produce images at a very low resolution (e.g., about 1 megapixel). The low resolution of the resultant image is inherent to the design of all current light field cameras: they sacrifice spatial resolution for angular resolution. The spatial resolution is defined as the sampling rate in space. In a conventional (non-light field) camera, the spatial resolution amounts to the sensor's resolution. In a light field camera, the total number of sampling points in space is equal to the number of lenses. Given that the size of each lens is usually several times larger than the pixel pitch, the spatial resolution may be reduced. However, the pixels underneath each lens record rays passing through a common sampling point with different directions. This directional specificity defines the camera's angular resolution. Because the sensor has limited resolution, there is a trade-off between the spatial resolution and the angular resolution, which equates to a balance between image resolution and the number of views.
[0024] Current cameras which capture a light field include 4D light field cameras which record both angular and spatial information in all directions. For example, one current 4D light field camera includes a 328 x 328 microlens array attached to an imaging sensor, where each microlens covers about 100 pixels. In this example, a light field of about 328 x 328 spatial resolution and about 10 x 10 angular resolution may be obtained. The inherent tradeoff made by the light field camera provides more angular resolution at the expense of lowering spatial resolution. Although the camera in this example is equipped with an 11 megapixel sensor, it only delivers images with an effective resolution of about 700 x 700. Other 4D light field cameras share a similar design and similar limitations.
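To make the tradeoff concrete, here is a short sketch in Python; the numbers are chosen to match the 328 x 328 / 10 x 10 example above (the 3280-pixel sensor axis is an illustrative assumption):

```python
sensor_px = 3280     # pixels per sensor axis (illustrative)
lenslet_px = 10      # pixels covered per microlens, per axis

# Conventional 4D microlens design: both axes trade spatial for angular resolution.
spatial_4d = (sensor_px // lenslet_px, sensor_px // lenslet_px)   # (328, 328)
angular_4d = (lenslet_px, lenslet_px)                             # (10, 10)

# Cylindrical-lens (3D LF) design: only the horizontal axis is traded.
spatial_3d = (sensor_px // lenslet_px, sensor_px)                 # (328, 3280)
angular_3d = (lenslet_px, 1)
```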
[0025] With respect to 3D display, most existing 3D televisions use shutter-glass technology to display stereoscopic 3D images. A disadvantage of this technique is that it produces flickering (which can be noticed except at very high refresh rates). In addition, current 3D viewing techniques (such as shutter glasses) are inconvenient and expensive for viewing 3D photographs.
[0026] Aspects of the present invention include a 3D light field camera that combines a camera, a cylindrical lens array attached to the imaging sensor of the camera, and a modified lens with a narrow-slit aperture. In some examples, the camera may include a digital single-lens reflex (DSLR) camera. In some examples, the 3D light field camera uses a vertical cylindrical lens array, which maintains the vertical resolution while trading only between the horizontal resolution and the angular resolution. To reduce defocus blur, the cylindrical lens array may be coupled with a slit-shaped aperture.
[0027] With the rapid growth of 3D display technology, people are more likely to watch 3D content instead of two-dimensional (2D) images. Example 3D light field cameras of the present invention go beyond the capability of merely watching 3D content. With exemplary 3D light field cameras and exemplary systems for capturing and presenting 3D images, the 3D content may be captured directly from the scene and then displayed. By attaching a cylindrical lens array to the sensor and a narrow-slit mask to the aperture, a consumer DSLR camera may be converted to a 3D light field camera. Users can take pictures with an exemplary 3D light field camera similarly to a conventional camera.
[0028] Aspects of the present invention also relate to exemplary methods and systems for rendering 3D stereoscopic images from a raw light field image. With the captured raw light field image, 3D stereoscopic images may be rendered from different perspectives with view-dependent features such as occlusion and reflection. Because the 3D light field camera can simultaneously capture the scene from different viewpoints in a single shot, the acquired views will exhibit parallax, i.e., closer objects exhibit larger disparity across views. The capability of preserving parallax enables naked-eye 3D visualization of the scene/object. The same capability enables preservation of view-dependent features such as reflections, where each view (i.e., sub-image) captures a slightly different image of the scene. In some examples, the system may render a refocused image at a predetermined focus depth from the raw light field image. Example methods use image based rendering (IBR) techniques. Specifically, simple geometry (such as a 3D plane) may be used as a proxy for scene geometry. All captured views can be warped onto the geometry and re-rendered (e.g., via ray-tracing or texture mapping) to the desired view. The process is analogous to specifying a focal depth in commodity SLR cameras. When all views are combined after warping, the results emulate defocus blurs in conventional wide-aperture photography.
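The parallax behavior can be made quantitative with the standard plane-plus-parallax relation (a textbook approximation, not a formula from this application): for two sub-aperture views whose centers of projection are separated by a baseline $b$ along the slit, a scene point at depth $z$ exhibits disparity

$d \approx \dfrac{b\,F}{z}$

where $F$ is the separation between the lens and the imaging sensor; closer objects (smaller $z$) therefore exhibit larger disparity across views.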
[0029] Aspects of the present invention also relate to methods and devices for 3D viewing. According to some examples, the device falls into the category of an autostereoscopic 3D display, i.e., viewing 3D without glasses.
[0030] Referring to FIG. 1, a system 100 for capturing and presenting 3D images is shown. System 100 includes 3D LF camera 102, controller 104, rendering module 106, storage 108, display 110, user interface 112, printer 114 and 3D photograph 116. Although not shown, system 100 may be coupled to a remote location, for example via a global network (i.e., the Internet).
[0031] 3D LF camera 102 includes 3D LF lens module 118 and camera 120. As described further below with respect to FIGS. 2A-2C, lens module 118 includes cylindrical lens array 210 and slit-shaped aperture 206. Camera 120 may include any suitable general purpose camera having a main lens (e.g., lens 208 shown in FIG. 2B) and an imaging sensor (e.g., imaging sensor 214). In some examples, camera 120 includes a DSLR camera. In one example, camera 120 includes a DSLR camera model number XSi manufactured by Canon Inc. (Tokyo, Japan). To convert camera 120 to 3D LF camera 102, mask 204 (FIG. 2B) having slit-shaped aperture 206 of lens module 118 may be used to modify the aperture of main lens 208, and cylindrical lens array 210 of lens module 118 may be attached to imaging sensor 214. In some examples, lens module 118 may be detachably coupled to camera 120.
[0032] 3D LF camera 102 may be configured to capture (raw) 3D LF image 128 of a scene. In some examples, 3D LF camera 102 may capture two or more 3D LF images 128 of the scene, such as over a predetermined time period. Thus, in some examples, 3D LF camera 102 may include a video camera. In general, 3D LF camera 102 may capture at least one 3D LF image 128 of the scene.
[0033] Controller 104 may be coupled to one or more of 3D LF camera 102, rendering module 106, storage 108, display 110, user interface 112 and printer 114, to control capture, storage, display, printing and/or processing of 3D LF image(s) 128.
Controller 104 may include, for example, a logic circuit, a digital signal processor or a microprocessor. It is understood that one or more functions of rendering module 106 may be performed by controller 104.
[0034] Rendering module 106 may be configured to process 3D LF image(s) 128 to form rendered image(s) 130. Rendering module 106 may be configured to calibrate
3D LF camera 102 to locate a lens center of each lens 212 (FIG. 2B) in cylindrical lens array 210. Rendering module 106 may also be configured to render a refocused image (after calibration) for various refocus planes. Refocusing is described further below with respect to FIG. 6A. In some examples, rendering module 106 may be configured to apply a predetermined perspective to 3D LF image(s) 128. In some examples, rendering module 106 may be also configured to generate a stereoscopic view of 3D LF image(s) 128. Perspective and stereoscopic processing are described further below with respect to FIG. 6B. In general, rendered image(s) 130 may be processed to include at least one of refocusing to a predetermined focus depth, perspective rendering or stereoscopic viewing. It is understood that refocusing, perspective rendering and stereoscopic view rendering represent example processing by rendering module 106, and that rendering module 106 may perform additional processing of 3D LF image(s) 128 such as, without being limited to, filtering, noise reduction, etc.
Rendering module 106 may include, for example, a logic circuit, a digital signal processor or a microprocessor.
[0035] Storage 108 may be configured to store at least one of raw 3D LF image(s) 128 (from 3D LF camera 102 or via controller 104) or rendered image(s) 130 (from rendering module 106). Storage 108 may also store parameters associated with controller 104 and/or rendering module 106. Although storage 108 is shown separate from 3D LF camera 102, in some examples, storage 108 may be part of 3D LF camera 102. Storage 108 may include any suitable tangible, non-transitory computer readable medium, for example, a magnetic disk, an optical disk or a hard drive.
[0036] Raw 3D LF image(s) 128 (from 3D LF camera 102) and/or rendered image(s) 130 (from rendering module 106) may be displayed on display 110. Display 110 may include any suitable display device configured to display raw 3D LF image(s) 128/rendered image(s) 130.
[0037] User interface 112 may include any suitable user interface capable of receiving user input associated with, for example, selection of rendering to be performed by rendering module 106, parameters associated with rendering module 106, storage selection in storage 108 for captured images 128/rendered images 130, display selection for images 128, 130 and/or print selection for images 128, 130. User interface 112 may include, for example, a pointing device, a keyboard and/or a display device. Although user interface 112 and display 110 are illustrated as separate devices, it is understood that the functions of user interface 112 and display 110 may be combined into one device.
[0038] Raw 3D LF image 128 and/or rendered image 130 may be printed by printer 114, to form printed image 122. Printer 114 may include any suitable printer device configured to print raw 3D LF image 128/rendered image 130. In some examples, printer 114 may include a laser printer configured to print a color and/or a black and white printed image 122. In some examples, printed image 122 includes a glossy finish paper.
[0039] Referring to FIGS. 1 and 3, 3D photograph 116 is described. FIG. 3 is an exploded perspective view diagram of example 3D photograph 116. 3D photograph 116 may include cylindrical lens array 124 disposed on printed image 122 (printed by printer 114). Thus, in addition to displaying images on display 110, raw 3D LF image 128 (or rendered image 130) may be printed (via printer 114), for example on glossy photo paper, to form printed image 122. Printed image 122 may then be mounted on cylindrical lens array 124 to produce 3D photograph 116. This is a practical 3D photography technique that may allow users to directly perceive solid 3D stereoscopic views from different perspectives, without 3D glasses. An example 3D photograph 116 may appear similar to a photo frame, but with a special photograph (printed image 122) and special cover glass (cylindrical lens array 124). The choice of cylindrical lens array 124 is independent of 3D LF image 128. For example, 3D LF image 128 may be re-sampled to fit the physical properties (e.g., lenslet width, density, focal length, etc.) of cylindrical lens array 124 to produce desirable 3D effects.
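A minimal interlacing sketch for re-sampling toward a lenticular print (the one-column-per-view model, and all function and parameter names, are assumptions; a real print requires calibrating the array pitch against the printer's DPI):

```python
import numpy as np

def interlace_for_lenticular(subs, n_lenslets, px_per_lenslet):
    """Interlace sub-aperture views for printing under cylindrical lens array
    124: within each lenslet, successive print columns come from successive
    views. Assumes len(subs) == px_per_lenslet after resampling the views."""
    h = subs[0].shape[0]
    out = np.zeros((h, n_lenslets * px_per_lenslet), dtype=np.float64)
    for lens in range(n_lenslets):
        for k, view in enumerate(subs):
            # One print column per lenslet from each view, indexed by the lenslet.
            out[:, lens * px_per_lenslet + k] = view[:, min(lens, view.shape[1] - 1)]
    return out
```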
[0040] 3D photograph 116 may be used to capture other objects, such as a sculpture, food, etc. For example, a restaurant may use 3D photograph 116 to generate a 3D menu or display of their food. 3D photograph 116 may be inexpensive and portable, making it suitable for product advertising.
[0041] Referring back to FIG. 1, in some examples, system 100, via 3D LF camera 102, may be used to produce a 3D portrait. In some examples, rendering module 106 of system 100 may generate rendered image(s) 130, enabling people to view the 3D portrait from different perspectives on display 110. In some examples, system 100 may print raw 3D LF image 128 and/or rendered image 130 (as printed image 122) via printer 114. In some examples, system 100 may produce 3D photograph 116 (from printed image 122 coupled with cylindrical lens array 124).
[0042] A suitable 3D LF camera 102, controller 104, rendering module 106, storage 108, display 110, user interface 112, printer 114 and 3D photograph 116 may be understood by the skilled person from the description herein.
[0043] Referring next to FIGS. 2A-2C, example 3D LF camera 102 is shown. In particular, FIG. 2A is a perspective view diagram of 3D LF camera 102 and FIGS. 2B and 2C are exploded perspective view diagrams of 3D LF camera 102. FIG. 2A illustrates 3D LF lens module 118 coupled to a body of camera 120. FIG. 2A also illustrates slit mask 204 of lens module 118. Lens module 118 may include housing 202 having slit mask 204 and cylindrical lens array 210 (FIG. 2B). FIG. 2B illustrates the arrangement of 3D LF lens module 118 and imaging sensor 214 of camera 120 relative to optical axis 216. FIG. 2C illustrates an example light ray geometry for 3D LF camera 102.
[0044] As shown in FIG. 2B, 3D LF lens module 118 may include slit mask 204 having slit-shaped aperture 206, main lens 208 and cylindrical lens array 210 disposed along optical axis 216. Cylindrical lens array 210 includes a plurality of cylindrical lenses 212. A width of each cylindrical lens 212 may be microscopic (e.g., a few hundred microns) compared to main lens 208. Slit mask 204 is disposed on main lens 208 and arranged such that a longitudinal direction of slit-shaped aperture 206 (i.e., a slit length) is positioned orthogonal to cylindrical axis 410 (FIG. 4A) of cylindrical lenses 212 of cylindrical lens array 210. Aperture 206 is configured to change a shape of the aperture of main lens 208 from circularly-shaped to slit-shaped. In some examples, main lens 208 includes a consumer lens of camera 120. In some examples, slit mask 204 includes a plastic sheet having slit-shaped aperture 206 formed therethrough. Cylindrical lens array 210 is disposed on imaging sensor 214.
[0045] As an example, aperture 206 has a width of about 1.3 mm. Cylindrical lens array 210 includes 40 lenses 212. Lens array 210 is of size 10 mm by 10 mm, where each lens 212 has a pitch of about 0.25 mm and a focal length of about 1.6 mm. In general, the width of aperture 206, the number of lenses 212, the pitch of each lens 212, the focal length of each lens 212 and the size of lens array 210 may be selected to produce a desired resolution for 3D LF lens module 118. In the example above, the selected parameters of lens module 118 produce an effective resolution of about 2000 x 2000. The number of rays captured by 3D LF camera 102 may also depend on the resolution of imaging sensor 214. In an example, the resolution of imaging sensor 214 is approximately 5,184 x 3,456. FIGS. 5A and 5B show example raw light field images captured by 3D LF camera 102 for two different scenes.
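As a rough, illustrative cross-check of these example parameters, the arithmetic below derives the array size from the lens count and pitch; the sensor pixel pitch used to estimate the number of pixel columns beneath each lenslet is an assumed value, as it is not stated in this disclosure.

    # Illustrative arithmetic for the example lens module parameters above.
    # The sensor pixel pitch is an assumed value; the other numbers come
    # from the example in paragraph [0045].
    n_lenses       = 40
    lens_pitch_mm  = 0.25
    array_mm       = n_lenses * lens_pitch_mm      # 40 * 0.25 mm = 10 mm
    pixel_pitch_mm = 0.0043                        # assumed ~4.3 um sensor pixels
    cols_per_lenslet = lens_pitch_mm / pixel_pitch_mm
    print(array_mm, round(cols_per_lenslet))       # -> 10.0 mm, ~58 columns/lenslet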
[0046] As shown in FIG. 2C, because the directions of slit aperture 206 and cylindrical lenses 212 are orthogonal to each other, a cone of rays 220 emitted by a point on object 402 (FIG. 4B) will be mostly blocked by slit mask 204, allowing only fan of rays 222 to pass through main lens 208. The passing rays 222 may be optically sorted by direction (shown by rays 412 in FIG. 4B) onto the pixels of imaging sensor 214 under cylindrical lens array 210.
[0047] Users may capture images with 3D LF camera 102 similarly to a conventional camera (such as a DSLR camera), by attaching 3D LF lens module 118 to camera 120. Thus, by simply pressing a shutter button of camera 120, at least one 3D LF image may be captured, the same way a 2D image is typically captured. Accordingly, there may be a minimal learning curve for using 3D LF camera 102. 3D LF images 128 (FIG. 1) captured by 3D LF camera 102 may be tailored by rendering module 106, displayed (via display 110) or printed out (via printer 114) for visualization.
[0048] Referring to FIGS. 2C, 4A and 4B, the geometry of light rays passing through 3D LF camera 102 is further described. In particular, FIG. 4A is an example diagram illustrating focusing of cone of rays 404 by cylindrical lens 212; and FIG. 4B is a diagram illustrating optical sorting of sheet (i.e., fan) of rays 222 onto imaging sensor 214.
[0049] In a conventional camera, the value of each pixel is the integral of many rays across the aperture, which results in a high spatial resolution but very low angular resolution. 3D LF camera 102 is capable of diverging rays in one direction, while maintaining high spatial resolution in the other direction. Specifically, cone of rays 220 emitted from object 402 will be converged and partially blocked by slit mask 204 on main lens 208, becoming sheet of rays 222. Rays 222 may be optically sorted by direction via cylindrical lens array 210, to form sorted rays 412. Sorted rays 412 from cylindrical lens array 210 are then directed onto pixels (not shown) of imaging sensor 214.
[0050] As shown in FIG. 4A, cylindrical lens 212 is configured such that rays 406 converge in one direction, leaving the other direction unaltered. Therefore, incoming light 404 from object 402 is focused by cylindrical lens 212 into line 408.
[0051] As shown in FIGS. 2C and 4B, because cone of rays 220 becomes sheet of rays 222 after passing through slit mask 204 and main lens 208, there is no need to converge rays in two directions, as in the case of a spherical lens array. Cylindrical lens array 210 provides angular information in one direction, by converging rays in that direction, while keeping a high spatial resolution along the other direction (i.e., along the direction of cylindrical axis 410). If cylindrical lens array 210 were replaced with a microlens array, however, slit aperture 206 would result in either overlapping lens images or a waste of resolution.
[0052] Referring to FIG. 6A, an example method for rendering a refocused image from 3D LF image 128 (FIG. 1) is shown. Some of the steps illustrated in FIG. 6A may be performed by rendering module 106 (FIG. 1) from 3D LF image 128 captured by 3D LF camera 102. The steps illustrated in FIG. 6A represent an example embodiment of the present invention. It is understood that certain steps may be performed in an order different from what is shown. Although FIG. 6A illustrates rendering a single refocused image, the method shown in FIG. 6A may also be applied to a plurality of captured 3D LF images of the scene.

[0053] At step 600, a 3D LF image of a reference scene is captured, for example, via 3D LF camera 102 (FIG. 1). At step 602, a lens center of each cylindrical lens 212 of cylindrical lens array 210 is located, based on the captured image of the reference scene (step 600). The located lens centers may be stored in storage 108 (FIG. 1).
[0054] 3D LF camera 102 may generate images 128 with parallax. In general, the exact placement of cylindrical lens array 210 is unknown, and the baseline between cylindrical lenses may be a non-integer multiple of the pixel pitch. Therefore, to locate the image centers of lenses 212, an image of a white scene is captured in step 600.
Because of vignetting, the brightest line along each lenslet image is taken, in step 602, to approximate the center of each cylindrical lens. The lenslet image refers to the image formed by the pixels lying directly beneath a cylindrical lenslet 212 (FIG. 2B). Steps 600-602 may be performed to calibrate 3D LF camera 102, for example, prior to its first use in system 100. Thus, in some examples, steps 600-602 may be performed once and the results stored in storage 108 (FIG. 1). Steps 604-612 may then be performed using the lens centers stored in storage 108, without performing steps 600-602 subsequent to calibration of 3D LF camera 102.
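A minimal sketch of this brightest-line calibration is given below, assuming the raw white-scene capture is available as a 2D grayscale array whose lenslet images run as vertical strips; the function name and the nominal-pitch windowing are illustrative assumptions.

    import numpy as np

    def locate_lens_centers(white_img, nominal_pitch_px):
        """Approximate the center column of each cylindrical lenslet from
        a raw image of a white scene.

        Vignetting makes each lenslet image brightest near its center, so
        the peak of the column-wise mean intensity within each nominal
        pitch window approximates that lenslet's center.
        """
        profile = white_img.astype(np.float64).mean(axis=0)  # one value per column
        step = int(round(nominal_pitch_px))
        centers = []
        for x in range(0, profile.size, step):
            window = profile[x : x + step]
            if window.size < step // 2:      # ignore a trailing partial lenslet
                break
            centers.append(x + int(np.argmax(window)))       # brightest column
        return np.asarray(centers)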
[0055] At step 604, a 3D LF image 128 is captured of a desired scene, for example, via 3D LF camera 102 (FIG. 1). The captured 3D LF image 128 may be stored in storage 108 (FIG. 1). At step 606, a set of sub-aperture images (e.g., the vertical segments of the images shown in FIGS. 5A and 5B) is formed from the captured (raw) 3D LF image (step 604), for example, by rendering module 106 (FIG. 1). The captured 3D LF image (step 604) may be reassembled into a set of sub-aperture images. First, the LF image is split (i.e., separated) into lenslet images. Then, pixels in the lenslet images are reassembled into sub-aperture images. Specifically, an identical column (e.g., column 5) of pixels may be selected in all lenslet images and then stitched together to form a sub-aperture image. Different choices of columns correspond to different sub-aperture images. If all lenslet images are captured by cylindrical lenses 212 (FIG. 2B) of identical width, they should have the same number of columns. Therefore, if each lenslet image has 8 columns, 8 different sub-aperture images may be synthesized.
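A minimal sketch of this reassembly step follows, assuming a 2D grayscale raw image and the lenslet centers produced by the calibration sketch above; the names and the bounds handling are illustrative.

    import numpy as np

    def extract_subaperture_images(raw_img, centers, cols_per_lenslet):
        """Reassemble a raw 3D LF image into sub-aperture images by
        stitching the same pixel column from every lenslet image."""
        half = cols_per_lenslet // 2
        # keep only lenslets whose full pixel-column range lies on the sensor
        valid = [c for c in centers
                 if c - half >= 0
                 and c + (cols_per_lenslet - half) <= raw_img.shape[1]]
        views = []
        for k in range(-half, cols_per_lenslet - half):  # offset within lenslet
            view = np.stack([raw_img[:, c + k] for c in valid], axis=1)
            views.append(view)                           # one stitched view per offset
        return views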
[0056] At step 608, a focus depth is selected, for example, via user interface 112 (FIG. 1). At step 610, each sub-aperture image is shifted to the selected focus depth (step 608), via rendering module 106, based on the located image centers (step 602) according to a ray tracing algorithm.
[0057] Based on classical radiometry, the irradiance of a point on the film (or image plane where the film is positioned) is the integral of all the rays across the aperture reaching the point:

$$E_F(x, y) = \frac{1}{F^2} \iint L_F(x, y, u, v)\,(\cos\theta)^4 \, du\, dv \qquad (1)$$

where F is the separation between lens 208 (FIG. 2B) and the film (i.e., imaging sensor 214), E_F(x, y) is the irradiance at position (x, y) on the film, L_F is the light field parameterized by lens plane uv and film plane xy, and θ is the angle between ray (x, y, u, v) and the image plane normal. For simplicity, L_F may be redefined as $L_F(x, y, u, v) := L_F(x, y, u, v)\,(\cos\theta)^4$.
[0058] To focus at a different plane, the separation between the lens plane and the film plane is changed. For example, to focus at a new depth F', as shown in FIG. 7, the image can be rendered as described below. In FIG. 7, axes v, y' and y extend out of the drawing (orthogonal to respective axes u, x' and x).
[0059] Using similar triangles, the ray (u, x'), where x' is the coordinate on the film plane at depth F', can be re-parameterized as the ray $\left(u,\; u + (x' - u)\frac{F}{F'}\right)$ at the original film plane at depth F. As a result, if α = F'/F is defined as the relative depth of the film plane, then

$$L_{F'}(x', y', u, v) = L_F\!\left(u\left(1 - \frac{1}{\alpha}\right) + \frac{x'}{\alpha},\; v\left(1 - \frac{1}{\alpha}\right) + \frac{y'}{\alpha},\; u,\; v\right) \qquad (2)$$

Therefore, the final equation for the value of pixel (x', y') on the film at the depth F' = αF from the lens plane becomes:

$$E_{F'}(x', y') = \frac{1}{\alpha^2 F^2} \iint L_F\!\left(u\left(1 - \frac{1}{\alpha}\right) + \frac{x'}{\alpha},\; v\left(1 - \frac{1}{\alpha}\right) + \frac{y'}{\alpha},\; u,\; v\right) du\, dv \qquad (3)$$
[0060] Because each object point emits sheet of rays 222 (FIG. 2C) after being filtered by slit mask 204 on main lens 208, the approximation y = y' = v may be used. Thus, equation (3) may be re-written as:

$$E_{F'}(x', y') = \frac{1}{\alpha^2 F^2} \int L_F\!\left(u\left(1 - \frac{1}{\alpha}\right) + \frac{x'}{\alpha},\; y',\; u,\; v\right) du \qquad (4)$$
Thus, rays may be traced through the center of each lens (located in step 602) and used to render the refocused image. Here, the term $L_F$ corresponds to the sub-aperture images and the integral can be interpreted as adding transformed sub-aperture images.
[0061] At step 612, the shifted sub-aperture images (step 610) are combined to form refocused (rendered) image 130 (FIG. 1), via rendering module 106. Steps 610 and 612 may be performed via a shift-and-add algorithm. For example, a specific shift amount (corresponding to α in Equation (4)) may be selected. Next, each sub-aperture image may be horizontally shifted according to its position. Next, all resulting shifted images may be blended together with normalized coefficients (as shown in Equation (4)). The result corresponds to a pseudo 2D refocused image.
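The following is a minimal sketch of such a shift-and-add step, as a discrete counterpart of Equation (4); the per-view pixel-shift scale is calibration dependent and is an assumed parameter here, and np.roll is used for simplicity even though it wraps at image borders.

    import numpy as np

    def refocus(views, alpha, px_per_u=1.0):
        """Shift-and-add refocusing, a discrete version of Equation (4).

        views    : sub-aperture images ordered by aperture coordinate u.
        alpha    : relative depth F'/F of the synthetic film plane.
        px_per_u : pixel shift per unit step in u (calibration dependent;
                   assumed here).

        Each view is shifted horizontally by u * (1 - 1/alpha) and the
        shifted views are averaged with normalized (uniform) weights.
        """
        n = len(views)
        acc = np.zeros_like(views[0], dtype=np.float64)
        for i, v in enumerate(views):
            u = i - (n - 1) / 2.0                        # centered aperture coordinate
            shift = int(round(u * (1.0 - 1.0 / alpha) * px_per_u))
            acc += np.roll(v, shift, axis=1)             # horizontal shift only
        return acc / n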
[0062] It is contemplated that a non-transitory computer readable medium may store computer readable instructions for machine execution of steps 602 and 606-612.
[0063] Referring to FIG. 6B, an example method for rendering an image from raw 3D LF image 128 (FIG. 1) is shown. Some of the steps illustrated in FIG. 6B may be performed by rendering module 106 (FIG. 1) from 3D LF image 128 captured by 3D LF camera 102. The steps illustrated in FIG. 6B represent an example embodiment of the present invention. It is understood that certain steps may be performed in an order different from what is shown. Although FIG. 6B illustrates rendering a single image, the method shown in FIG. 6B may also be applied to a plurality of captured 3D LF images of the scene.
[0064] At step 620, steps 604-606 are repeated, to form a set of sub-aperture images. At step 622, a viewpoint for the image is selected, for example, via user interface 112 (FIG. 1).
[0065] At step 624, instead of using a uniform weight, a different weight may be assigned to different sub-aperture images. For example, higher weight(s) may be assigned to sub-aperture image(s) closer to the selected (synthetic) viewpoint, for example, via rendering module 106. At step 626, lower weight(s) may be assigned to other sub-aperture images in the set of sub-aperture images that are farther away from the selected viewpoint, for example, via rendering module 106. At step 628, rendering module 106 may apply a shift-and-add algorithm to the weighted sub-aperture images (steps 624-626) to form perspective (rendered) image 130 (FIG. 1). The same shift-and-add algorithm described above in steps 610 and 612 (FIG. 6A) may be applied to generate a synthetically defocused image in step 628, except that, in step 628, a different weight scheme is used when adding (i.e., combining) all of the views.
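A minimal sketch of such a weighting scheme follows; the Gaussian falloff is an assumed choice, since this disclosure only requires that weights be higher nearer the selected viewpoint, and in practice each view would also be shifted as in the refocus sketch above.

    import numpy as np

    def render_perspective(views, viewpoint, sigma=1.0):
        """Blend sub-aperture views with weights peaked at a chosen
        viewpoint, instead of the uniform weights used for refocusing.

        viewpoint : index of the desired view (fractional values give
                    synthetic in-between viewpoints).
        sigma     : falloff of the Gaussian weighting (assumed scheme).
        """
        idx = np.arange(len(views), dtype=np.float64)
        weights = np.exp(-0.5 * ((idx - viewpoint) / sigma) ** 2)
        weights /= weights.sum()                   # normalized coefficients
        out = np.zeros_like(views[0], dtype=np.float64)
        for wgt, v in zip(weights, views):
            out += wgt * v                         # weighted add of views
        return out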
[0066] At optional step 630, rendering module 106 may generate a stereoscopic view image from the perspective image (step 628) (or from raw 3D LF image 128 or the refocused image in step 612 of FIG. 6A), for example, via a red-cyan anaglyph. The stereoscopic (rendered) image may include two images superimposed with different colors (such as red and cyan or other chromatically opposite colors), producing a stereo effect when the image is viewed through correspondingly colored filters. In general, anaglyph images contain two differently filtered colored images, one for each eye. When viewed through color-coded anaglyph glasses, each of the two images reaches the respective eye, revealing an integrated stereoscopic image. The visual cortex of the brain fuses the two images into the perception of a three-dimensional scene or composition.
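A minimal sketch of the anaglyph step is given below, assuming two already-rendered RGB views (e.g., two perspective images from step 628) stored as arrays with a trailing channel axis; the function name is illustrative.

    import numpy as np

    def red_cyan_anaglyph(left_rgb, right_rgb):
        """Superimpose two views into a red-cyan anaglyph.

        The red channel comes from the left-eye view and the green/blue
        (cyan) channels from the right-eye view, so red-cyan glasses
        deliver each view to the corresponding eye.
        """
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]     # red <- left view
        out[..., 1:] = right_rgb[..., 1:]  # green and blue <- right view
        return out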
[0067] It is contemplated that a non-transitory computer readable medium may store computer readable instructions for machine execution of the steps 624-630.
[0068] Referring to FIG. 6C, an example method for forming a 3D photograph 116 (FIGS. 1 and 3) is shown. At step 640, a raw 3D LF image of a scene is captured, for example, via 3D LF camera 102. At step 642, the captured (raw) 3D LF image is printed, for example, by printer 114, forming printed image 122. At step 644, printed image 122 is disposed on cylindrical lens array 124 to form 3D photograph 116. In some examples, printed image 122 may be permanently disposed on cylindrical lens array 124, for example, via an adhesive. In some examples, printed image 122 may be detachably disposed on cylindrical lens array 124. For example, printed image 122 and lens array 124 may be disposed in a housing (such as a frame) configured to (detachably) fix printed image 122 to lens array 124. Additional processes such as image super-resolution and/or denoising can be applied to the raw 3D LF image prior to forming the 3D photograph (such as via rendering module 106 shown in FIG. 1), to improve the quality of the final result.
[0069] Example 3D LF camera 102 (FIG. 1) may be configured to record a 3D light field 128 of a scene. Specifically, 3D LF camera 102 preserves a high spatial resolution along the cylindrical lens direction while obtaining angular information in the other direction. With the captured light field, system 100 is able to recover and render a 3D representation of the scene that can be visualized with or without 3D glasses. Because 3D LF camera 102 uses a general purpose camera 120, it can be applied to a variety of applications. A conventional 2D image has a fixed viewpoint and lacks depth perception. In contrast, system 100 enables a solid stereoscopic view of an object to be perceived at different viewpoints.
[0070] Referring next to FIGS. 5A, 5B, 8A-8D, 9A and 9B, example data captured on real scenes using example 3D LF camera 102 (FIG. 1) and rendering using system 100 are described. In particular, FIGS. 5A and 5B are example raw 3D light field images captured for two different scenes; FIGS. 8A-8D are example rendered refocused 3D images of the raw images shown in respective FIGS. 5A and 5B for different focus depths; and FIGS. 9A and 9B are example rendered 3D images illustrating stereoscopic views of the images shown in respective FIGS. 8B and 8D with a different perspective.
[0071] In this example, an XSi DSLR camera (e.g., camera 120) manufactured by Canon Inc. (Tokyo, Japan) with a sensor resolution of 5,184 x 3,456 is used to capture the data. The width of slit 206 (FIG. 2B) of slit mask 204 measures 1.3 mm. Cylindrical lens array 210 includes 40 cylindrical lenses 212. A pitch of each lens 212 in array 210 is 0.25 mm, and the focal length is 1.6 mm. A size of cylindrical lens array 210 is 10 mm x 10 mm, which amounts to an effective resolution of 2000 x 2000.
[0072] To generate the refocused images shown in FIGS. 8A-8D, the sub-aperture images are first generated by taking the same stripe of pixels underneath each cylindrical lens 212. Then a refocus plane is selected and the shift-and-add refocus algorithm is applied to the light field image to render the refocused images. (See FIG. 6A.)
[0073] FIGS. 9A and 9B show that system 100 is able to render objects from different perspectives. FIGS. 9A and 9B also demonstrate rendered images in a stereoscopic view (e.g., by using a red-cyan anaglyph). To render different perspectives, a higher weight may be assigned to the sub-aperture image with a desired viewpoint and a lower weight may be assigned to other sub-aperture images. A shift-and-add algorithm may then be applied to render the images. (See FIG. 6B.)
[0074] Although the invention has been described in terms of methods and systems for capturing, processing and presenting 3D images, it is contemplated that one or more steps and/or components may be implemented in software for use with microprocessors/general purpose computers (not shown). In this embodiment, one or more of the functions of the various components and/or steps described above may be implemented in software that controls a computer. The software may be embodied in non-transitory tangible computer readable media (such as, by way of non-limiting example, a magnetic disk, optical disk, hard drive, etc.) for execution by the computer. As described herein, devices 104, 106, 110, 112 and 114, shown in FIG. 1, may perform certain operations using dedicated circuitry and/or using software contained in a computer-readable medium 108 coupled to controller 104. The software instructions may cause controller 104 and/or rendering module 106 to perform one or more processes described herein. Alternatively, hardwired circuitry may be used in place of, or in combination with, software instructions to implement processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
[0075] Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the invention.

Claims

WHAT IS CLAIMED:
1. A method of generating an image of a scene, the method comprising: directing light representing the scene through a lens module coupled to an imaging sensor, the lens module including: a surface having a slit-shaped aperture and a cylindrical lens array positioned along an optical axis of the imaging sensor, a longitudinal direction of the slit-shaped aperture being arranged orthogonal to a cylindrical axis of the cylindrical lens array; and
capturing, by the imaging sensor, the light directed through the lens module to form a three-dimensional (3D) light field (LF) image.
2. The method according to claim 1, wherein the directing of the light includes directing the light through the slit-shaped aperture onto a lens of the lens module to form aperture-shaped light and passing the aperture-shaped light from the lens through the cylindrical lens array onto the imaging sensor.
3. The method according to claim 1, the method further including:
processing, by a processor, the 3D LF image to form a rendered image, wherein the processing includes at least one of refocusing the 3D LF image to a predetermined focus depth, adjusting a perspective of the 3D LF image based on a predetermined viewpoint or generating a 3D stereoscopic view image from the 3D LF image.
4. The method according to claim 3, the method further including, prior to refocusing the 3D LF image, locating an image center of each lens in the cylindrical lens array, based on a reference image.
5. The method according to claim 3, the method further including displaying at least one of the 3D LF image or the rendered image.
6. The method according to claim 3, the method further including printing at least one of the 3D LF image or the rendered image to form a printed image.
7. The method according to claim 6, the method further including disposing the printed image on a further cylindrical lens array to form a 3D photograph.
8. The method according to claim 1, further comprising repeating the directing of the light and the capturing of the directed light to capture a plurality of 3D LF images of the scene.
9. A three-dimensional (3D) light field (LF) camera comprising:
a surface having a slit-shaped aperture mounted on a lens;
an imaging sensor; and
a cylindrical lens array disposed between the imaging sensor and the lens, along an optical axis of the imaging sensor, a longitudinal direction of the slit-shaped aperture being arranged orthogonal to a cylindrical axis of the cylindrical lens array, wherein the imaging sensor is configured to capture at least one 3D LF image of a scene.
10. The 3D LF camera according to claim 9, wherein:
the surface, the lens and the cylindrical lens array are disposed in a lens module, and
the lens module is coupled to a camera including the imaging sensor.
11. The 3D LF camera according to claim 10, wherein the lens module is configured to be detachably coupled to the camera.
12. The 3D LF camera according to claim 10, wherein the camera includes a digital single-lens reflex (DSLR) camera.
13. The 3D LF camera according to claim 9, wherein the cylindrical lens array includes a plurality of cylindrical lenses arranged to extend in a vertical direction.
14. The 3D LF camera according to claim 9, wherein the at least one 3D LF image includes a plurality of 3D LF images of the scene.
15. The 3D LF camera according to claim 9, wherein the cylindrical lens array is configured to provide higher resolution in a first direction corresponding to the cylindrical axis and higher angular information in a second direction orthogonal to the first direction.
16. The 3D LF camera according to claim 9, wherein the slit-shaped aperture is configured to reduce defocus blurring.
17. A three-dimensional (3D) photograph comprising:
a 3D light field printed image of a scene; and
a cylindrical lens array disposed on the 3D light field printed image, such that the combination of the 3D light field printed image and the cylindrical lens array forms a 3D stereoscopic image.
18. The 3D photograph according to claim 17, wherein the 3D photograph is an autostereoscopic 3D display.
19. The 3D photograph according to claim 17, wherein the 3D light field printed image is captured via a camera including a surface having a slit-shaped aperture and a further cylindrical lens array coupled to an image sensor of the camera, a longitudinal direction of the slit-shaped aperture being arranged orthogonal to a cylindrical axis of the further cylindrical lens array, such that the 3D light field printed image includes a set of sub-aperture images corresponding to the further cylindrical lens array.
20. The 3D photograph according to claim 19, wherein the 3D light field printed image is disposed on the cylindrical lens array such that the set of sub-aperture images are arranged parallel to a cylindrical axis of the cylindrical lens array.