US20160353087A1 - Method for displaying a content from 4d light field data - Google Patents

Method for displaying a content from 4D light field data

Info

Publication number
US20160353087A1
US20160353087A1
Authority
US
United States
Prior art keywords
content
changing
field data
point
displaying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/168,003
Other versions
US10484671B2 (en
Inventor
Marc Eluard
Antoine Monsifrot
Olivier Heen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
InterDigital Madison Patent Holdings SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Publication of US20160353087A1 publication Critical patent/US20160353087A1/en
Assigned to THOMSON LICENSING reassignment THOMSON LICENSING ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ELUARD, MARC, HEEN, OLIVIER, MONSIFROT, ANTOINE
Assigned to INTERDIGITAL CE PATENT HOLDINGS reassignment INTERDIGITAL CE PATENT HOLDINGS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: THOMSON LICENSING
Application granted granted Critical
Publication of US10484671B2 publication Critical patent/US10484671B2/en
Assigned to INTERDIGITAL MADISON PATENT HOLDINGS, SAS reassignment INTERDIGITAL MADISON PATENT HOLDINGS, SAS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTERDIGITAL CE PATENT HOLDINGS, SAS
Assigned to INTERDIGITAL CE PATENT HOLDINGS, SAS reassignment INTERDIGITAL CE PATENT HOLDINGS, SAS CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY NAME FROM INTERDIGITAL CE PATENT HOLDINGS TO INTERDIGITAL CE PATENT HOLDINGS, SAS. PREVIOUSLY RECORDED AT REEL: 47332 FRAME: 511. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: THOMSON LICENSING
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
    • H04N13/0402
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • G06T5/002
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254Management at additional data server, e.g. shopping server, rights management server
    • H04N21/2541Rights Management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/254Management at additional data server, e.g. shopping server, rights management server
    • H04N21/2543Billing, e.g. for subscription services
    • H04N21/25435Billing, e.g. for subscription services involving characteristics of content or additional data, e.g. video resolution or the amount of advertising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4627Rights management associated to the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8453Structuring of content, e.g. decomposing content into time segments by locking or enabling a set of features, e.g. optional functionalities in an executable program

Definitions

  • the disclosure relates to 4D light field data processing. More precisely, the disclosure relates to a technique for displaying a content (either a 2D image derived/extracted from a 4D light field data, or a set of images derived/extracted from a 4D light field data that can be interpreted as displayed 4D light field data).
  • 4D light-field data enable a user to have access to more post processing features that enhance the rendering of images and/or the interactivity with the user.
  • with 4D light-field data, it is possible to perform with ease refocusing of images a posteriori (i.e. refocusing with freely selected distances of focalization, meaning that the position of a focal plane can be specified/selected a posteriori), as well as slightly changing the point of view in the scene of an image.
  • the acquisition of 4D light-field data can be done by different techniques (for example via the use of a plenoptic camera, as depicted in document WO 2013/180192 or in document GB 2488905, or via the use of a camera array as depicted in document WO 2014/149403).
  • 4D light-field data can be represented, when recorded by a plenoptic camera, by a collection of micro-lens images.
  • 4D light-field data in this representation are named raw images (or raw 4D light-field data).
  • 4D light-field data can also be represented by a set of sub-aperture images.
  • a sub-aperture image corresponds to a captured image of a scene from a point of view, the point of view being slightly different between two sub-aperture images. These sub-aperture images give information about the parallax and depth of the imaged scene.
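On an idealized raw plenoptic image laid out on a perfectly rectangular lenslet grid, a sub-aperture image can be gathered by collecting the pixel at the same offset under every micro-lens. The following sketch illustrates this extraction; the function name, the perfectly regular grid and the 2×2 toy geometry are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def extract_subaperture(raw, nu, nv, u, v):
    """Gather the sub-aperture image made of pixel (u, v) of every micro-lens.

    Assumes an idealized raw plenoptic image where each micro-lens covers an
    nu x nv block of pixels on a perfectly rectangular grid.
    """
    return raw[u::nu, v::nv]

# Toy raw image: a 4x4 grid of micro-lenses, each covering 2x2 pixels.
raw = np.arange(64).reshape(8, 8)
view = extract_subaperture(raw, 2, 2, 0, 1)  # view of pixel (0, 1) of each lens
```

Two sub-aperture images extracted with neighbouring offsets (u, v) correspond to slightly different points of view, which is what gives access to the parallax information mentioned above.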
  • 4D light-field data can be represented by a set of epipolar images (see for example the article entitled “Generating EPI Representation of a 4D Light Fields with a Single Lens Focused Plenoptic Camera”, by S. Wanner et al., published in the conference proceedings of ISVC 2011).
  • 4D light-field data can be used for displaying at least one 2D image in which refocusing a posteriori can be done (i.e. the display device is a conventional display device).
  • the light field display device can be the one depicted in the article entitled “A Compressive Light Field Projection System” by M. Hirsch, G. Wetzstein, R. Raskar, published in the conference proceedings of SIGGRAPH 2014.
  • DRM: Digital Rights Management
  • the present technique provides an alternative to this approach, that is less complex to implement.
  • references in the specification to “one embodiment”, “an embodiment”, “an example embodiment”, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • the present disclosure is directed to a method for displaying a content from 4D light field data.
  • the method is executed by an electronic device, and is remarkable in that it comprises changing point of view of said content and/or a focus plane associated with said content, according to viewing rights.
  • the concept of changing the point of view corresponds to changing the perspective shift/parallax selection, as explained for example at the following link: http://lightfield-forum.com/lightfield-features/.
  • the method is remarkable in that said changing said point of view of said content and/or said focus plane associated with said content is done randomly in time.
  • the method is remarkable in that said changing said point of view of said content and/or said changing of said focus plane associated with said content is done based on a random spatial point of a scene associated with 4D light field data.
  • the method is remarkable in that said changing further comprises blurring at least part of said content.
  • the method is remarkable in that said blurring comprises adding Gaussian blur to said content.
  • the method is remarkable in that said blurring is done randomly in time.
  • the method is remarkable in that said blurring is a spatial blurring of said content that is done randomly.
  • the method is remarkable in that said content is a 4D light field data.
  • the method is remarkable in that said content is a 2D image or 2D video.
  • the method is remarkable in that said content is a 3D content or multiviews content.
  • the method is remarkable in that a difference value between consecutive changed points of views of said content and/or consecutive changed focus planes is defined as a function of said viewing rights.
  • the Light Field viewing parameters can be linked to the values of one or more features described in the following link: http://lightfield-forum.com/lightfield-features/, such as features related to the refocusing choices, and/or the choice of the all-in-focus option, and/or the change of the depth of field (as it can be variable), and/or the perspective shift/parallax selection, and/or the change of lighting, and/or the change of depth of Field.
  • the changes of Light Field viewing parameters can be done randomly in time.
  • the different steps of the method are implemented by a computer software program or programs, this software program comprising software instructions designed to be executed by a data processor of a relay module according to the disclosure and being designed to control the execution of the different steps of this method.
  • an aspect of the disclosure also concerns a program liable to be executed by a computer or by a data processor, this program comprising instructions to command the execution of the steps of a method as mentioned here above.
  • This program can use any programming language whatsoever and be in the form of a source code, object code or code that is intermediate between source code and object code, such as in a partially compiled form or in any other desirable form.
  • the disclosure also concerns an information medium readable by a data processor and comprising instructions of a program as mentioned here above.
  • the information medium can be any entity or device capable of storing the program.
  • the medium can comprise a storage means such as a ROM (which stands for “Read Only Memory”), for example a CD-ROM (which stands for “Compact Disc-Read Only Memory”) or a microelectronic circuit ROM or again a magnetic recording means, for example a floppy disk or a hard disk drive.
  • the information medium may be a transmissible carrier such as an electrical or optical signal that can be conveyed through an electrical or optical cable, by radio or by other means.
  • the program can notably be downloaded from an Internet-type network.
  • the information medium can be an integrated circuit into which the program is incorporated, the circuit being adapted to executing or being used in the execution of the method in question.
  • an embodiment of the disclosure is implemented by means of software and/or hardware components.
  • module can correspond in this document both to a software component and to a hardware component or to a set of hardware and software components.
  • a software component corresponds to one or more computer programs, one or more sub-programs of a program, or more generally to any element of a program or a software program capable of implementing a function or a set of functions according to what is described here below for the module concerned.
  • One such software component is executed by a data processor of a physical entity (terminal, server, etc.) and is capable of accessing the hardware resources of this physical entity (memories, recording media, communications buses, input/output electronic boards, user interfaces, etc.).
  • a hardware component corresponds to any element of a hardware unit capable of implementing a function or a set of functions according to what is described here below for the module concerned. It may be a programmable hardware component or a component with an integrated circuit for the execution of software, for example an integrated circuit, a smart card, a memory card, an electronic board for executing firmware etc.
  • the hardware component comprises a processor that is an integrated circuit such as a central processing unit, and/or a microprocessor, and/or an Application-specific integrated circuit (ASIC), and/or an Application-specific instruction-set processor (ASIP), and/or a graphics processing unit (GPU), and/or a physics processing unit (PPU), and/or a digital signal processor (DSP), and/or an image processor, and/or a coprocessor, and/or a floating-point unit, and/or a network processor, and/or an audio processor, and/or a multi-core processor.
  • the hardware component can also comprise a baseband processor (comprising for example memory units, and a firmware) and/or radio electronic circuits (that can comprise antennas) which receive or transmit radio signals.
  • the hardware component is compliant with one or more standards such as ISO/IEC 18092/ECMA-340, ISO/IEC 21481/ECMA-352, GSMA, StoLPaN, ETSI/SCP (Smart Card Platform), GlobalPlatform (i.e. a secure element).
  • the hardware component is a Radio-frequency identification (RFID) tag.
  • a hardware component comprises circuits that enable Bluetooth communications, and/or Wi-Fi communications, and/or Zigbee communications, and/or USB communications, and/or Firewire communications, and/or NFC (Near Field Communication) communications.
  • a step of obtaining an element/value in the present document can be viewed either as a step of reading such element/value in a memory unit of an electronic device or a step of receiving such element/value from another electronic device via communication means.
  • an electronic device for displaying a content from 4D light field data comprises a changing module configured to change a point of view of said content and/or a focus plane associated with said content, according to viewing rights.
  • the electronic device is remarkable in that said changing said point of view of said content and/or said focus plane associated with said content is done randomly in time.
  • the electronic device is remarkable in that said changing said point of view of said content and/or said changing of said focus plane associated with said content is done based on a random spatial point of a scene associated with 4D light field data.
  • FIG. 1 presents a flowchart that comprises some steps of a method for displaying according to one embodiment of the disclosure
  • FIG. 2 presents a flowchart that comprises some steps of a method for displaying according to another embodiment of the disclosure
  • FIG. 3 presents an example of an electronic device that can be used to perform one or several steps of methods disclosed in the present document.
  • FIG. 1 presents a flowchart that comprises some steps of a method for displaying according to one embodiment of the disclosure.
  • an electronic device obtains 4D light-field data as well as viewing rights (or credentials). These viewing rights can be linked to a license that has been obtained after having paid for it.
  • viewing rights can be associated with a degradation level (such as strong degradation, medium degradation, low degradation, or no degradation) to be applied to the received 4D light-field data.
  • the electronic device verifies the value of the viewing rights. In the case where no degradation has to be applied, the electronic device allows the display device (either a light field display device or a 2D display device) to process the 4D light-field data without restrictions. In the case where the electronic device detects that the user should watch a degraded content derived from the received 4D light-field data, the electronic device controls the display device in such a way that it has to display a degraded content.
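A minimal sketch of how viewing rights could be mapped to the degradation levels mentioned above (strong, medium, low, or no degradation); the rights labels and the mapping itself are illustrative assumptions, not taken from the patent:

```python
# Hypothetical mapping from a viewing-rights label to a degradation level.
DEGRADATION_BY_RIGHTS = {
    "full_license": "no degradation",
    "rental": "low degradation",
    "preview": "medium degradation",
}

def degradation_for(viewing_rights):
    """Return the degradation level to apply to the received 4D light-field data."""
    # Unknown or missing rights fall back to the strongest degradation.
    return DEGRADATION_BY_RIGHTS.get(viewing_rights, "strong degradation")
```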
  • the displayed content corresponds to one view extracted from the 4D light-field data. The point of view changes randomly from one view to another possible view, from time to time.
  • the speed at which the point of view is changed is correlated to the value of the viewing rights. Therefore, the more credentials a user has, the more “stable” the point of view associated with the displayed content is.
  • the displayed content (extracted from the 4D light-field data) is associated with a focus plane.
  • the focus plane changes randomly from one possible value to another, from time to time.
  • the 2D display device can display degraded content in which both the value of the focal plane and the point of view are modified/changed during the display.
  • the present technique dynamically modifies the points of view during the display of 4D light-field data, when the viewing rights are not sufficient.
  • when sufficient viewing rights are obtained, the process is stopped (or reduced). More precisely, in one embodiment, a point of view is defined by a pair of angles (θ, φ) and by a focus f (i.e. a point of the scene at which the focus is done).
  • the present technique dynamically adds an additive perturbation (Δθ, Δφ) to the pair of angles (θ, φ) of a point of view during the display process.
  • at a time T, the new angle of the point of view is (θ+Δθ, φ+Δφ).
  • the perturbation is a function of time, characterized by a period and an intensity.
  • the period is the duration before the next update of the point of view along time.
  • the intensity is the maximal value of the perturbation.
  • the perturbations are randomly chosen between zero and the intensity.
  • the period may also be random.
  • the intensity smoothly increases over the time.
  • the intensity may be null at the beginning and for a pre-determined duration. After that, the intensity increases. This can be used for a teasing effect before degrading the user experience.
  • the period smoothly decreases along time. This lets the degraded effect increase along time.
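The perturbation scheme described above — a period between updates, a uniform draw between zero and the intensity, and an intensity that ramps up smoothly from zero for a teasing effect — can be sketched as follows; all numeric defaults and the function name are illustrative assumptions:

```python
import random

def perturbed_view(theta, phi, t, period=2.0, max_intensity=0.2, ramp=30.0):
    """Return a perturbed point of view (theta + d_theta, phi + d_phi).

    The perturbation is refreshed every `period` seconds; its amplitude is
    drawn uniformly in [0, intensity(t)], where intensity(t) increases
    smoothly from 0 to `max_intensity` over `ramp` seconds.
    """
    intensity = max_intensity * min(t / ramp, 1.0)  # smooth increase over time
    step = int(t // period)                         # index of the current update
    rng = random.Random(step)                       # same draw within one period
    d_theta = rng.uniform(0.0, intensity)
    d_phi = rng.uniform(0.0, intensity)
    return theta + d_theta, phi + d_phi
```

At t = 0 the intensity is null, so the user's chosen point of view is displayed unchanged; as time passes the random offsets grow, degrading the experience. A decreasing period could be modelled the same way by making `period` a function of t.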
  • the user's chosen point of view is not used at all, the point of view is only determined by the perturbation.
  • the sequence of points of view can depend on points of interest in the images.
  • the focus part of the point of view may be perturbed with a term Δf.
  • the new focus f+Δf may lie outside the image (in particular for an intense perturbation).
  • the choice can be random, or made among points that appear in each image. Therefore, this embodiment is advantageous in the case where the user does not choose any point of view.
  • two cases are possible according to the degradation policy:
  • the visual effect may vary according to the depth of the focus.
  • the visual impact is more important if the focus is on the background and the impact is less important if the focus is on the foreground.
  • the content displayed at the output of step 102 conveys such modifications.
  • FIG. 2 presents a flowchart that comprises some steps of a method for displaying according to another embodiment of the disclosure.
  • the method for displaying comprises a step referenced 201 .
  • the step 201 comprises the blurring of at least part of the content to be displayed.
  • a parameter d_i is added for each pixel i.
  • by default, d_i=1 and thus the display/restitution process is unmodified.
  • when d_i varies in the range [0;1], this change introduces a blurring effect on the pixel.
  • the variation of d_i is determined according to two parameters: a zone Z of the content and an intensity F.
  • An advantage of the present technique is that neither additional information nor additional processing is required in order to create the effect: all the needed visual information is contained in the 4D light-field data.
  • the zone Z covers the whole 4D light-field data.
  • the value of d_i is randomly chosen in the range [1-F;1]. This results in a modification of each pixel of the displayed 4D light-field data.
  • the value of F may increase smoothly along time in order to intensify the degradation.
  • the increase of F may stop at a pre-determined threshold in order to guarantee a minimal user experience. It should be noted that neither pre- nor post-processing is required, and that the additional operations benefit from known optimizations of floating-point multiplication. Typically, in a set-top-box, there will be no significant additional delay.
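The whole-frame variant above (d_i drawn uniformly in [1-F;1] for every pixel, with F increasing smoothly over time up to a threshold) can be sketched as follows. Applying d_i as a per-pixel multiplicative gain is one possible interpretation of the restitution process; the function names and numeric defaults are illustrative assumptions:

```python
import numpy as np

def degrade_frame(frame, F, rng=None):
    """Draw d_i uniformly in [1-F, 1] for every pixel and apply it as a gain.

    F in [0, 1] is the degradation intensity; F = 0 leaves the frame intact.
    """
    rng = rng or np.random.default_rng()
    d = rng.uniform(1.0 - F, 1.0, size=frame.shape)
    return frame * d

def intensity_at(t, rate=0.01, cap=0.5):
    """Smoothly increasing F, capped to guarantee a minimal user experience."""
    return min(rate * t, cap)
```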
  • the zone Z does not cover entirely the 4D light-field data.
  • Z may be a circle with center C and radius r.
  • C may be the center of the image and r may be fixed along time, typically one quarter of the image diagonal.
  • outside the zone Z, d_i=1, so that the corresponding pixels are displayed unmodified.
  • the intensity F may vary according to the positions of the pixels in Z. For instance, F decreases with the distance to the center of Z. This leads to a stronger modification close to the center and a lesser modification close to the frontier of Z.
  • the zone Z itself may vary along time.
  • the center, the size and the shape may vary along time.
  • the radius may increase along time up to a predetermined threshold.
  • the center may be chosen randomly.
  • the center may follow a predetermined path within the image.
  • the center may follow one relevant part of the light field (a face, main character, foreground, most rapidly moving object, etc.).
  • the center may also depend on the user: mouse position, eye focus, etc.
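A sketch of a circular zone Z in which the intensity F decreases with the distance to the center, so that the modification is strongest at the center and null at the frontier; outside Z the intensity is zero and d_i stays at 1. The linear falloff and the function name are illustrative choices, not taken from the patent:

```python
import numpy as np

def zone_intensity(shape, center, radius, F_max):
    """Per-pixel intensity map for a circular degradation zone Z.

    Inside Z the intensity falls linearly from F_max at the center to 0 at
    the frontier; outside Z it is 0 (pixels there are left untouched).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - center[0], xs - center[1])
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return F_max * falloff
```

From this map, d_i can then be drawn per pixel in [1-F_i;1], as in the whole-frame case; moving the center or growing the radius over time yields the time-varying zones described above.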
  • the additional delay depends on the determination of Z and on the test determining whether a pixel belongs to Z.
  • FIG. 3 presents an example of an electronic device that can be used to perform one or more steps of methods disclosed in the present document.
  • Such device referenced 300 comprises a computing unit (for example a CPU, for “Central Processing Unit”), referenced 301, and one or more memory units (for example a RAM (for “Random Access Memory”) block in which intermediate results can be stored temporarily during the execution of instructions of a computer program, or a ROM block in which, among other things, computer programs are stored, or an EEPROM (“Electrically-Erasable Programmable Read-Only Memory”) block, or a flash block) referenced 302.
  • Computer programs are made of instructions that can be executed by the computing unit.
  • Such device 300 can also comprise a dedicated unit, referenced 303 , constituting an input-output interface to allow the device 300 to communicate with other devices.
  • this dedicated unit 303 can be connected with an antenna (in order to perform contactless communications), or with serial ports (to carry “contact” communications). It should be noted that the arrows in FIG. 3 signify that the linked units can exchange data through buses, for example.
  • some or all of the steps of the method previously described can be implemented in hardware in a programmable FPGA (“Field Programmable Gate Array”) component or ASIC (“Application-Specific Integrated Circuit”) component.
  • some or all of the steps of the method previously described can be executed on an electronic device comprising memory units and processing units such as the one disclosed in FIG. 3.
  • the electronic device depicted in FIG. 3 can be comprised in a light field display device or in a light field acquisition device that is configured to display and/or capture images (i.e. a sampling of a light field). These images are stored on one or more memory units. Hence, these images can be viewed as bit stream data (i.e. a sequence of bits). Obviously, a bit stream can also be converted into a byte stream, and vice versa.
  • the electronic device depicted in FIG. 3 can be comprised in a set-top box, or in a mobile phone.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A method for displaying a content from 4D light field data is described. Such method is executed by an electronic device, and is remarkable in that it comprises changing point of view of the content and/or a focus plane associated with the content, according to viewing rights.

Description

    TECHNICAL FIELD
  • The disclosure relates to 4D light field data processing. More precisely, the disclosure relates to a technique for displaying a content (either a 2D image derived/extracted from a 4D light field data, or a set of images derived/extracted from a 4D light field data that can be interpreted as displayed 4D light field data).
  • BACKGROUND
  • This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
  • The acquisition and rendering of 4D light-field data, which can be viewed as a sampling of a 4D light field (i.e. the recording of light rays as explained in FIG. 1 of the article “Understanding camera trade-offs through a Bayesian analysis of light field projections” by Anat Levin et al., published in the conference proceedings of ECCV 2008), is an active research subject.
  • Indeed, compared to classical 2D images obtained from a camera, 4D light-field data enable a user to have access to more post processing features that enhance the rendering of images and/or the interactivity with the user. For example, with 4D light-field data, it is possible to perform with ease refocusing of images a posteriori (i.e. refocusing with freely selected distances of focalization meaning that the position of a focal plane can be specified/selected a posteriori), as well as changing slightly the point of view in the scene of an image. The acquisition of 4D light-field data can be done by different techniques (for example via the use of plenoptic camera, as depicted in document WO 2013/180192 or in document GB 2488905, or via the use a camera array as depicted in document WO 2014/149403).
  • In the state of the art, there are several ways to represent (or define) 4D light-field data. Indeed, in Chapter 3.3 of the PhD dissertation entitled "Digital Light Field Photography" by Ren Ng, published in July 2006, three different ways to represent 4D light-field data are described. Firstly, 4D light-field data can be represented, when recorded by a plenoptic camera, by a collection of micro-lens images. 4D light-field data in this representation are named raw images (or raw 4D light-field data). Secondly, 4D light-field data can be represented by a set of sub-aperture images. A sub-aperture image corresponds to a captured image of a scene from a given point of view, the point of view being slightly different between two sub-aperture images. These sub-aperture images give information about the parallax and depth of the imaged scene. Thirdly, 4D light-field data can be represented by a set of epipolar images (see for example the article entitled "Generating EPI Representations of 4D Light Fields with a Single Lens Focused Plenoptic Camera" by S. Wanner et al., published in the conference proceedings of ISVC 2011).
  • Usually, 4D light-field data can be used for displaying at least one 2D image in which refocusing a posteriori can be done (i.e. the display device is a conventional display device). But it is also possible to display these 4D light-field data via a light field display device such as the one depicted in document U.S. Pat. No. 8,933,862 or in document U.S. Pat. No. 8,416,289. In a variant, the light field display device can be the one depicted in the article entitled "A Compressive Light Field Projection System" by M. Hirsch, G. Wetzstein and R. Raskar, published in the conference proceedings of SIGGRAPH 2014.
  • In order to protect the delivery of 4D light-field data, one skilled in the art would be inclined to use classical Digital Rights Management (DRM) techniques. For example, in the case that a Video On Demand system provides 4D light-field data (intended either to be displayed as a light field content or to be used for extracting a 2D content to be displayed), one skilled in the art could use the technique described in document WO 2006/053804. Hence, degraded 4D light-field data obtained from a wavelet-coefficient-basis encoding technique can still be viewed (but in a degraded way). Therefore, the user can decide to pay for access to a non-degraded version of the received degraded 4D light-field data, in the same way as in document WO 2006/053804.
  • The present technique provides an alternative to this approach that is less complex to implement.
  • SUMMARY OF THE DISCLOSURE
  • References in the specification to “one embodiment”, “an embodiment”, “an example embodiment”, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • The present disclosure is directed to a method for displaying a content from 4D light field data. The method is executed by an electronic device, and is remarkable in that it comprises changing point of view of said content and/or a focus plane associated with said content, according to viewing rights.
  • It should be noted that, in one embodiment, the concept of changing the point of view corresponds to changing the perspective shift/parallax selection, as explained for example at the following link: http://lightfield-forum.com/lightfield-features/.
  • In a preferred embodiment, the method is remarkable in that said changing said point of view of said content and/or said focus plane associated with said content is done randomly in time.
  • In a preferred embodiment, the method is remarkable in that said changing said point of view of said content and/or said changing of said focus plane associated with said content is done based on a random spatial point of a scene associated with 4D light field data.
  • In a preferred embodiment, the method is remarkable in that said changing further comprises blurring at least part of said content.
  • In a preferred embodiment, the method is remarkable in that said blurring comprises adding Gaussian blur to said content.
  • In a preferred embodiment, the method is remarkable in that said blurring is done randomly in time.
  • In a preferred embodiment, the method is remarkable in that said blurring is a spatial blurring of said content that is done randomly.
  • In a preferred embodiment, the method is remarkable in that said content is a 4D light field data.
  • In a preferred embodiment, the method is remarkable in that said content is a 2D image or 2D video.
  • In a preferred embodiment, the method is remarkable in that said content is a 3D content or multiviews content.
  • In a preferred embodiment, the method is remarkable in that a difference value between consecutive changed points of views of said content and/or consecutive changed focus planes is defined as a function of said viewing rights.
  • In another embodiment of the invention, a method for displaying a content from 4D light field data is proposed. Such a method is executed by an electronic device, and is remarkable in that it comprises changing Light Field viewing parameters according to viewing rights. For example, the Light Field viewing parameters can be linked to the values of one or more features described at the following link: http://lightfield-forum.com/lightfield-features/, such as features related to the refocusing choices, and/or the choice of the all-in-focus option, and/or the change of the depth of field (as it can be variable), and/or the perspective shift/parallax selection, and/or the change of lighting. The changes of Light Field viewing parameters can be done randomly in time.
  • According to an exemplary implementation, the different steps of the method are implemented by a computer software program or programs, this software program comprising software instructions designed to be executed by a data processor of a relay module according to the disclosure and being designed to control the execution of the different steps of this method.
  • Consequently, an aspect of the disclosure also concerns a program liable to be executed by a computer or by a data processor, this program comprising instructions to command the execution of the steps of a method as mentioned here above.
  • This program can use any programming language whatsoever and be in the form of a source code, object code or code that is intermediate between source code and object code, such as in a partially compiled form or in any other desirable form.
  • The disclosure also concerns an information medium readable by a data processor and comprising instructions of a program as mentioned here above.
  • The information medium can be any entity or device capable of storing the program. For example, the medium can comprise a storage means such as a ROM (which stands for “Read Only Memory”), for example a CD-ROM (which stands for “Compact Disc-Read Only Memory”) or a microelectronic circuit ROM or again a magnetic recording means, for example a floppy disk or a hard disk drive.
  • Furthermore, the information medium may be a transmissible carrier such as an electrical or optical signal that can be conveyed through an electrical or optical cable, by radio or by other means. The program can be especially downloaded into an Internet-type network.
  • Alternately, the information medium can be an integrated circuit into which the program is incorporated, the circuit being adapted to executing or being used in the execution of the method in question.
  • According to one embodiment, an embodiment of the disclosure is implemented by means of software and/or hardware components. From this viewpoint, the term “module” can correspond in this document both to a software component and to a hardware component or to a set of hardware and software components.
  • A software component corresponds to one or more computer programs, one or more sub-programs of a program, or more generally to any element of a program or a software program capable of implementing a function or a set of functions according to what is described here below for the module concerned. One such software component is executed by a data processor of a physical entity (terminal, server, etc.) and is capable of accessing the hardware resources of this physical entity (memories, recording media, communications buses, input/output electronic boards, user interfaces, etc.).
  • Similarly, a hardware component corresponds to any element of a hardware unit capable of implementing a function or a set of functions according to what is described here below for the module concerned. It may be a programmable hardware component or a component with an integrated circuit for the execution of software, for example an integrated circuit, a smart card, a memory card, an electronic board for executing firmware etc. In a variant, the hardware component comprises a processor that is an integrated circuit such as a central processing unit, and/or a microprocessor, and/or an Application-specific integrated circuit (ASIC), and/or an Application-specific instruction-set processor (ASIP), and/or a graphics processing unit (GPU), and/or a physics processing unit (PPU), and/or a digital signal processor (DSP), and/or an image processor, and/or a coprocessor, and/or a floating-point unit, and/or a network processor, and/or an audio processor, and/or a multi-core processor. Moreover, the hardware component can also comprise a baseband processor (comprising for example memory units, and a firmware) and/or radio electronic circuits (that can comprise antennas) which receive or transmit radio signals. In one embodiment, the hardware component is compliant with one or more standards such as ISO/IEC 18092/ECMA-340, ISO/IEC 21481/ECMA-352, GSMA, StoLPaN, ETSI/SCP (Smart Card Platform), GlobalPlatform (i.e. a secure element). In a variant, the hardware component is a Radio-frequency identification (RFID) tag. In one embodiment, a hardware component comprises circuits that enable Bluetooth communications, and/or Wi-fi communications, and/or Zigbee communications, and/or USB communications and/or Firewire communications and/or NFC (for Near Field) communications.
  • It should also be noted that a step of obtaining an element/value in the present document can be viewed either as a step of reading such element/value in a memory unit of an electronic device or a step of receiving such element/value from another electronic device via communication means.
  • In another embodiment, an electronic device for displaying a content from 4D light field data is proposed. The electronic device is remarkable in that it comprises a changing module configured to change a point of view of said content and/or a focus plane associated with said content, according to viewing rights.
  • In a variant, the electronic device is remarkable in that said changing said point of view of said content and/or said focus plane associated with said content is done randomly in time.
  • In one embodiment, the electronic device is remarkable in that said changing said point of view of said content and/or said changing of said focus plane associated with said content is done based on a random spatial point of a scene associated with 4D light field data.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other aspects of the invention will become more apparent by the following detailed description of exemplary embodiments thereof with reference to the attached drawings in which:
  • FIG. 1 presents a flowchart that comprises some steps of a method for displaying according to one embodiment of the disclosure;
  • FIG. 2 presents a flowchart that comprises some steps of a method for displaying according to another embodiment of the disclosure;
  • FIG. 3 presents an example of an electronic device that can be used to perform one or several steps of methods disclosed in the present document.
  • DETAILED DESCRIPTION
  • FIG. 1 presents a flowchart that comprises some steps of a method for displaying according to one embodiment of the disclosure.
  • In a step referenced 101, an electronic device (such as the one depicted in FIG. 3 of the present document) obtains 4D light-field data as well as viewing rights (or credentials). These viewing rights can be linked to a license that has been obtained after having paid for it.
  • For example, viewing rights can be associated with a degradation level (such as strong degradation, medium degradation, low degradation, or no degradation) to be applied to the received 4D light-field data.
  • In a step referenced 102, the electronic device verifies the value of the viewing rights. In the case that no degradation has to be applied, the electronic device allows the display device (either a light field display device or a 2D display device) to process the 4D light-field data without restrictions. In the case the electronic device detects that the user should watch a degraded content from the received 4D light-field data, the electronic device controls the display device in such a way that the display device has to display a degraded content. In one embodiment of the disclosure, in the case that the display device is a 2D display device, the displayed content corresponds to one view extracted from the 4D light-field data. The point of view changes randomly from one view to another possible view, from time to time. For example, in the case that the viewing rights indicate a strong degradation of the displayed content, the displayed content (i.e. the 2D image) is displayed from a point of view that changes every two seconds. Hence, the speed at which the point of view is changed is correlated with the value of the viewing rights. Therefore, the more credentials a user has, the more "stable" the point of view associated with the displayed content is.
  • In a variant, in the case that the display device is a 2D display device, the displayed content (extracted from the 4D light-field data) is associated with a focus plane. The focus plane changes randomly from one possible value to another, from time to time. For example, in the case that the viewing rights indicate a strong degradation of the displayed content, the displayed content (i.e. the 2D image) is displayed with a focal plane that changes every two seconds. Hence, the speed at which the focal plane is changed is correlated with the value of the viewing rights. Therefore, the more credentials a user has, the more "stable" the focal plane associated with the displayed content is. Obviously, in one embodiment of the invention, the 2D display device can display a degraded content in which both the value of the focal plane and the point of view are modified/changed during the display.
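  • The correlation between viewing rights and the rate of forced view or focus changes can be sketched as follows. This is a minimal illustrative sketch in Python; the degradation levels, the period values and the function names are assumptions introduced for illustration, not values taken from this disclosure.

```python
import random

# Hypothetical mapping from a degradation level (derived from the user's
# viewing rights) to the period, in seconds, between forced changes of
# the displayed point of view or focal plane. Levels and values are
# illustrative only.
REFRESH_PERIOD = {
    "strong": 2.0,   # strong degradation: a new point of view every 2 s
    "medium": 10.0,
    "low": 60.0,
    "none": None,    # sufficient rights: the chosen view stays stable
}

def refresh_period(level):
    """How long the current point of view / focal plane stays stable."""
    return REFRESH_PERIOD[level]

def pick_view(candidate_views, rng=random):
    """Pick the next point of view at random among the available ones."""
    return rng.choice(candidate_views)
```

  • The stronger the degradation, the shorter the period, which matches the behaviour described above: the more credentials a user has, the more stable the displayed point of view or focal plane.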
  • In another embodiment of the present principle, when the display device is a light field display device, a similar process for changing the point of view and/or the focal plane of a displayed content is done according to the value of the viewing rights. Therefore, the present technique dynamically modifies the points of view during the display of 4D light-field data, when the viewing rights are not sufficient. When the user acquires sufficient viewing rights (e.g. pay-per-view), the process is stopped (or reduced). More precisely, in one embodiment, if a point of view is defined by a focus f (i.e. a pixel in the light field) and a pair of angles of view (θ, ψ), then the present technique dynamically adds an additive perturbation (δθ, δψ) to the pair of angles (θ, ψ) of a point of view during the display process. Then, the new angle of the point of view is (θ+δθ, ψ+δψ). The perturbation is a function of time, characterized by a period and an intensity. The period is the duration before the next update of the point of view along time. The intensity is the maximal value of the perturbation.
  • Hence, in an embodiment, the perturbations are randomly chosen between zero and the intensity. The period may also be random.
  • In a variant, the intensity smoothly increases over time. In a particular case, the intensity may be zero at the beginning and for a pre-determined duration. After that, the intensity increases. This can be used for a teasing effect before degrading the user experience.
  • In a variant, the period smoothly decreases over time. This lets the degradation effect increase over time.
  • In a variant, the user's chosen point of view is not used at all: the point of view is only determined by the perturbation. In particular, the sequence of points of view can depend on points of interest in the images. In particular, the focus part of the point of view may be perturbed with a term δf. Note that the new focus f+δf may lie outside the image (in particular for an intense perturbation). In this case, it is preferable to choose a new focus f. The choice can be random, or made among points that appear in each image. Therefore, this embodiment is advantageous in the case where the user does not choose any point of view. When the user later selects a point of view, two cases are possible according to the degradation policy:
      • The same perturbation parameters are applied with respect to the new user point of view.
      • Or the new user point of view is ignored.
  • It should be noted that, for a given intensity, the visual effect may vary according to the depth of the focus. For the same intensity, the visual impact is stronger if the focus is on the background, and weaker if the focus is on the foreground.
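  • The perturbation mechanism described above can be sketched as follows, assuming the angles are plain floating-point values. The linear intensity ramp and the function names are illustrative assumptions; the disclosure only requires that the intensity be zero during an initial delay and then increase, up to a maximum.

```python
import random

def intensity_at(t, delay, ramp, max_intensity):
    """Perturbation intensity at time t: zero during an initial 'teasing'
    delay, then increasing (linearly, in this sketch) up to max_intensity."""
    if t < delay:
        return 0.0
    return min(max_intensity, (t - delay) * ramp)

def perturbed_view(theta, psi, intensity, rng=random):
    """Add a perturbation (d_theta, d_psi), each drawn at random between
    zero and the current intensity, to the pair of viewing angles.
    Returns the new angles (theta + d_theta, psi + d_psi)."""
    return theta + rng.uniform(0.0, intensity), psi + rng.uniform(0.0, intensity)
```

  • The period of the perturbation would govern how often `perturbed_view` is re-invoked during display; a decreasing period and an increasing intensity both strengthen the degradation over time.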
  • The content displayed at the output of step 102 conveys such modifications.
  • FIG. 2 presents a flowchart that comprises some steps of a method for displaying according to another embodiment of the disclosure.
  • In addition to the previously mentioned steps (i.e. steps 101 and 102), the method for displaying comprises a step referenced 201. The step 201 comprises the blurring of at least part of the content to be displayed.
  • Usually, the display of a light field content is done as follows: for each pixel p of the displayed image, we have p = Σ mi pi, where the pi are the pixels corresponding to p in the sub-aperture images i, and where mi is a weighting value depending on the user-chosen focus.
  • According to the present disclosure, a parameter di is added for each pixel. Hence, we have: p = Σ di mi pi. When the user owns sufficient viewing rights, di=1 for all i, and thus the display/restitution process is unmodified. When the user does not own sufficient viewing rights, di varies in the range [0;1]. This change introduces a blurring effect on the pixel.
  • The variation of di is determined according to two parameters:
      • a zone Z in the light field: within this zone the modification is active;
      • a fog intensity F: applied within the zone.
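  • Under the weighted-sum rendering model above, the per-pixel degradation can be sketched as follows. This is an illustrative Python sketch; the function name and the way the fog intensity F is passed in are assumptions. Each displayed pixel is recomputed as p = Σ di mi pi, with di = 1 when the rights are sufficient (F = 0) and di drawn in [1−F; 1] otherwise.

```python
import random

def render_pixel(sub_pixels, weights, fog, rng=random):
    """Recombine one displayed pixel p = sum(d_i * m_i * p_i) from its
    corresponding sub-aperture pixels p_i and focus weights m_i.

    fog is the degradation intensity F: with F == 0 every d_i is exactly
    1 and the rendering is unmodified; with F > 0 each d_i is drawn
    uniformly in [1 - F, 1], which perturbs/blurs the pixel value."""
    total = 0.0
    for p_i, m_i in zip(sub_pixels, weights):
        d_i = rng.uniform(1.0 - fog, 1.0)
        total += d_i * m_i * p_i
    return total
```

  • Since the di simply multiply the terms of the existing weighted sum, the extra cost per pixel is one random draw and one multiplication per sub-aperture image, which is consistent with the low-overhead character of the technique.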
  • An advantage of the present technique is that neither additional information nor additional processing is required in order to create the effect: all the needed visual information is contained in the 4D light-field data. With more traditional content, such as a 2D picture, producing the same effect requires additional information (e.g. parameters of a blurring effect) and dedicated processing.
  • In an embodiment of the disclosure, the zone Z covers the whole 4D light-field data. The value of di is randomly chosen in the range [1−F; 1]. This results in a modification of each pixel of the displayed 4D light-field data. The value of F may increase smoothly over time in order to intensify the degradation. The increase of F may stop at a pre-determined threshold in order to guarantee a minimal user experience. It should be noted that no pre- nor post-processing is required, and that the additional operations benefit from known optimizations of floating-point multiplication. Typically, in a set-top box, there will be no significant additional delay.
  • In another embodiment, the zone Z does not entirely cover the 4D light-field data. For instance, Z may be a circle with center C and radius r. C may be the center of the image and r may be fixed over time, typically one quarter of the image diagonal. Outside Z, di=1. Inside Z, the fog effect as defined previously can be applied. Additionally, the intensity F may vary according to the positions of the pixels in Z. For instance, F decreases with the distance to the center of Z. This leads to a stronger modification close to the center and a weaker modification close to the border of Z.
  • In another embodiment, the zone Z itself may vary over time. Typically, its center, size and shape may vary over time. In the case of a circle, the radius may increase over time up to a predetermined threshold. The center may be chosen randomly. The center may follow a predetermined path within the image. The center may follow one relevant part of the light field (a face, the main character, the foreground, the most rapidly moving object, etc.). The center may also depend on the user: mouse position, eye focus, etc. The additional delay depends on the determination of Z and on the test to determine whether a pixel belongs to Z.
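  • A spatially varying zone of the kind described above can be sketched as follows (illustrative Python; the linear radial falloff and the function names are assumptions). The fog intensity is maximal at the center of a circular zone Z, decreases with the distance to the center, and is zero outside Z, so that di = 1 outside the zone.

```python
import math
import random

def fog_intensity(x, y, cx, cy, r, f_max):
    """Fog intensity F for a pixel at (x, y): f_max at the center (cx, cy)
    of the circular zone Z, decreasing linearly with the distance to the
    center, and zero outside the zone (distance >= r)."""
    d = math.hypot(x - cx, y - cy)
    if d >= r:
        return 0.0
    return f_max * (1.0 - d / r)

def degradation_coefficient(x, y, cx, cy, r, f_max, rng=random):
    """Per-pixel d_i: 1 outside Z, drawn in [1 - F(x, y), 1] inside Z."""
    f = fog_intensity(x, y, cx, cy, r, f_max)
    return rng.uniform(1.0 - f, 1.0)
```

  • Animating the zone then amounts to updating (cx, cy) and r over time, e.g. moving the center along a path or toward a tracked region of interest.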
  • FIG. 3 presents an example of an electronic device that can be used to perform one or more steps of methods disclosed in the present document.
  • Such a device, referenced 300, comprises a computing unit (for example a CPU, for "Central Processing Unit"), referenced 301, and one or more memory units (for example a RAM (for "Random Access Memory") block in which intermediate results can be stored temporarily during the execution of instructions of a computer program, or a ROM block in which, among other things, computer programs are stored, or an EEPROM ("Electrically-Erasable Programmable Read-Only Memory") block, or a flash block), referenced 302. Computer programs are made of instructions that can be executed by the computing unit. Such a device 300 can also comprise a dedicated unit, referenced 303, constituting an input-output interface that allows the device 300 to communicate with other devices. In particular, this dedicated unit 303 can be connected with an antenna (in order to perform contactless communications), or with serial ports (to carry out contact-based communications). It should be noted that the arrows in FIG. 3 signify that the linked units can exchange data with each other, through buses for example.
  • In an alternative embodiment, some or all of the steps of the method previously described can be implemented in hardware in a programmable FPGA ("Field Programmable Gate Array") component or ASIC ("Application-Specific Integrated Circuit") component.
  • In an alternative embodiment, some or all of the steps of the method previously described can be executed on an electronic device comprising memory units and processing units such as the one disclosed in FIG. 3.
  • In one embodiment of the disclosure, the electronic device depicted in FIG. 3 can be comprised in a light field display device or in a light field acquisition device that is configured to display and/or capture images (i.e. a sampling of a light field). These images are stored on one or more memory units. Hence, these images can be viewed as bit stream data (i.e. a sequence of bits). Obviously, a bit stream can also be converted into a byte stream and vice versa.
  • In one embodiment of the disclosure, the electronic device depicted in FIG. 3 can be comprised in a set-top box or in a mobile phone.

Claims (15)

1. Method for displaying a content from 4D light field data, the method being executed by an electronic device, and being characterized in that it comprises changing point of view of said content and/or a focus plane associated with said content, according to viewing rights.
2. Method for displaying according to claim 1, wherein said changing said point of view of said content and/or said focus plane associated with said content is done randomly in time.
3. Method for displaying according to claim 1, wherein said changing said point of view of said content and/or said changing of said focus plane associated with said content is done based on a random spatial point of a scene associated with 4D light field data.
4. Method for displaying according to claim 1, wherein said changing further comprises blurring at least part of said content.
5. Method for displaying according to claim 4, wherein said blurring comprises adding Gaussian blur to said content.
6. Method for displaying according to claim 4, wherein said blurring is done randomly in time.
7. Method for displaying according to claim 4, wherein said blurring is a spatial blurring of said content that is done randomly.
8. Method for displaying according to claim 1, wherein said content is a 4D light field data.
9. Method for displaying according to claim 1, wherein said content is a 2D image or 2D video.
10. Method for displaying according to claim 1, wherein said content is a 3D content or multiviews content.
11. Method for displaying according to claim 1, wherein a difference value between consecutive changed points of views of said content and/or consecutive changed focus planes is defined as a function of said viewing rights.
12. A computer-readable and non-transient storage medium storing a computer program comprising a set of computer-executable instructions to implement a method for displaying when the instructions are executed by a computer, wherein the instructions comprise instructions, which when executed, configure the computer to perform the method of claim 1.
13. An electronic device for displaying a content from 4D light field data, the electronic device being characterized in that it comprises a changing module configured to change a point of view of said content and/or a focus plane associated with said content, according to viewing rights.
14. The electronic device for displaying according to claim 13, wherein said changing said point of view of said content and/or said focus plane associated with said content is done randomly in time.
15. The electronic device for displaying according to claim 13, wherein said changing said point of view of said content and/or said changing of said focus plane associated with said content is done based on a random spatial point of a scene associated with 4D light field data.
US15/168,003 2015-05-29 2016-05-28 Method for displaying a content from 4D light field data Active 2036-07-06 US10484671B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP15305817.7 2015-05-29
EP15305817 2015-05-29
EP15305817.7A EP3099076B1 (en) 2015-05-29 2015-05-29 Method for displaying a content from 4d light field data

Publications (2)

Publication Number Publication Date
US20160353087A1 true US20160353087A1 (en) 2016-12-01
US10484671B2 US10484671B2 (en) 2019-11-19

Family

ID=53396410

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/168,003 Active 2036-07-06 US10484671B2 (en) 2015-05-29 2016-05-28 Method for displaying a content from 4D light field data

Country Status (2)

Country Link
US (1) US10484671B2 (en)
EP (1) EP3099076B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113671721B (en) * 2020-05-15 2023-03-28 华为技术有限公司 Display device, system and method

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006053804A2 (en) * 2004-11-19 2006-05-26 France Telecom Method, server and programme for viewing scrambled images
US20070285554A1 (en) * 2005-10-31 2007-12-13 Dor Givon Apparatus method and system for imaging
US20080297593A1 (en) * 2007-04-17 2008-12-04 University Of Southern California Rendering for an Interactive 360 Degree Light Field Display
US20100332343A1 (en) * 2008-02-29 2010-12-30 Thomson Licensing Method for displaying multimedia content with variable interference based on receiver/decoder local legislation
US20110128412A1 (en) * 2009-11-25 2011-06-02 Milnes Thomas B Actively Addressable Aperture Light Field Camera
US20120311342A1 (en) * 2011-06-03 2012-12-06 Ebay Inc. Focus-based challenge-response authentication
US20130223673A1 (en) * 2011-08-30 2013-08-29 Digimarc Corporation Methods and arrangements for identifying objects
US20140052555A1 (en) * 2011-08-30 2014-02-20 Digimarc Corporation Methods and arrangements for identifying objects
US20140098191A1 (en) * 2012-10-05 2014-04-10 Vidinoti Sa Annotation method and apparatus
US20140146201A1 (en) * 2012-05-09 2014-05-29 Lytro, Inc. Optimization of optical systems for improved light field capture and manipulation
US20140300869A1 (en) * 2013-04-09 2014-10-09 Massachusetts Institute Of Technology Methods and Apparatus for Light Field Projection
US8933862B2 (en) * 2012-08-04 2015-01-13 Paul Lapstun Light field display with MEMS Scanners
US20150234477A1 (en) * 2013-07-12 2015-08-20 Magic Leap, Inc. Method and system for determining user input based on gesture
US20150301592A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Utilizing totems for augmented or virtual reality systems
US20150310601A1 (en) * 2014-03-07 2015-10-29 Digimarc Corporation Methods and arrangements for identifying objects
US20160029017A1 (en) * 2012-02-28 2016-01-28 Lytro, Inc. Calibration of light-field camera geometry via robust fitting
US20160026253A1 (en) * 2014-03-11 2016-01-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20160042501A1 (en) * 2014-08-11 2016-02-11 The Regents Of The University Of California Vision correcting display with aberration compensation using inverse blurring and a light field display
US20160248987A1 (en) * 2015-02-12 2016-08-25 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Light-field camera
US20160262608A1 (en) * 2014-07-08 2016-09-15 Krueger Wesley W O Systems and methods using virtual reality or augmented reality environments for the measurement and/or improvement of human vestibulo-ocular performance
US20170209044A1 (en) * 2014-07-18 2017-07-27 Kabushiki Kaisha Topcon Visual function testing device and visual function testing system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5583936A (en) 1993-05-17 1996-12-10 Macrovision Corporation Video copy protection process enhancement to introduce horizontal and vertical picture distortions
US7092616B2 (en) 2001-07-19 2006-08-15 Sony Electronics Inc. Method and apparatus for copy protecting video content and producing a reduced quality reproduction of video content for personal use
US7088823B2 (en) * 2002-01-09 2006-08-08 International Business Machines Corporation System and method for secure distribution and evaluation of compressed digital information
CN101297545B (en) 2005-10-28 2012-05-02 株式会社尼康 Imaging device and image processing device
US8280049B2 (en) 2008-08-27 2012-10-02 Rovi Solutions Corporation Method and apparatus for synthesizing copy protection for reducing/defeating the effectiveness or capability of a circumvention device
US8416289B2 (en) 2009-04-28 2013-04-09 Microsoft Corporation Light-field display
US8374489B2 (en) 2009-09-23 2013-02-12 Rovi Technologies Corporation Method and apparatus for inducing and or reducing geometric distortions in a display via positive going pulses
JP5623313B2 (en) 2011-03-10 2014-11-12 キヤノン株式会社 Imaging apparatus and imaging optical system
US9218692B2 (en) 2011-11-15 2015-12-22 Trimble Navigation Limited Controlling rights to a drawing in a three-dimensional modeling environment
JP6168794B2 (en) 2012-05-31 2017-07-26 キヤノン株式会社 Information processing method and apparatus, program.
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006053804A2 (en) * 2004-11-19 2006-05-26 France Telecom Method, server and programme for viewing scrambled images
US20070285554A1 (en) * 2005-10-31 2007-12-13 Dor Givon Apparatus method and system for imaging
US20080297593A1 (en) * 2007-04-17 2008-12-04 University Of Southern California Rendering for an Interactive 360 Degree Light Field Display
US20100332343A1 (en) * 2008-02-29 2010-12-30 Thomson Licensing Method for displaying multimedia content with variable interference based on receiver/decoder local legislation
US20110128412A1 (en) * 2009-11-25 2011-06-02 Milnes Thomas B Actively Addressable Aperture Light Field Camera
US20120311342A1 (en) * 2011-06-03 2012-12-06 Ebay Inc. Focus-based challenge-response authentication
US20130223673A1 (en) * 2011-08-30 2013-08-29 Digimarc Corporation Methods and arrangements for identifying objects
US20140052555A1 (en) * 2011-08-30 2014-02-20 Digimarc Corporation Methods and arrangements for identifying objects
US20160029017A1 (en) * 2012-02-28 2016-01-28 Lytro, Inc. Calibration of light-field camera geometry via robust fitting
US20140146201A1 (en) * 2012-05-09 2014-05-29 Lytro, Inc. Optimization of optical systems for improved light field capture and manipulation
US8933862B2 (en) * 2012-08-04 2015-01-13 Paul Lapstun Light field display with MEMS Scanners
US20140098191A1 (en) * 2012-10-05 2014-04-10 Vidinoti Sa Annotation method and apparatus
US20140300869A1 (en) * 2013-04-09 2014-10-09 Massachusetts Institute Of Technology Methods and Apparatus for Light Field Projection
US20150234477A1 (en) * 2013-07-12 2015-08-20 Magic Leap, Inc. Method and system for determining user input based on gesture
US20150310601A1 (en) * 2014-03-07 2015-10-29 Digimarc Corporation Methods and arrangements for identifying objects
US20160026253A1 (en) * 2014-03-11 2016-01-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20150301592A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Utilizing totems for augmented or virtual reality systems
US20160262608A1 (en) * 2014-07-08 2016-09-15 Krueger Wesley W O Systems and methods using virtual reality or augmented reality environments for the measurement and/or improvement of human vestibulo-ocular performance
US20170209044A1 (en) * 2014-07-18 2017-07-27 Kabushiki Kaisha Topcon Visual function testing device and visual function testing system
US20160042501A1 (en) * 2014-08-11 2016-02-11 The Regents Of The University Of California Vision correcting display with aberration compensation using inverse blurring and a light field display
US20160248987A1 (en) * 2015-02-12 2016-08-25 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Light-field camera

Also Published As

Publication number Publication date
EP3099076A1 (en) 2016-11-30
EP3099076B1 (en) 2019-08-07
US10484671B2 (en) 2019-11-19

Similar Documents

Publication Publication Date Title
US11055826B2 (en) Method and apparatus for image processing
US10182183B2 (en) Method for obtaining a refocused image from 4D raw light field data
AU2014218390B2 (en) Method, system and apparatus for forming a high resolution depth map
CN117061885A (en) System and method for fusing images
EP3139614A1 (en) Method and device for encoding and decoding a light field based image, and corresponding computer program product
CN112182299B (en) Method, device, equipment and medium for acquiring highlight in video
US20150294472A1 (en) Method, apparatus and computer program product for disparity estimation of plenoptic images
US20160337632A1 (en) Method for obtaining a refocused image from a 4d raw light field data using a shift correction parameter
US11948280B2 (en) System and method for multi-frame contextual attention for multi-frame image and video processing using deep neural networks
US20210248754A1 (en) Method for processing a light field image delivering a super-rays representation of a light field image
US10484671B2 (en) Method for displaying a content from 4D light field data
EP3166073A1 (en) Method for obtaining a refocused image from 4d raw light field data
US11202052B2 (en) Method for displaying, on a 2D display device, a content derived from light field data
US20150117757A1 (en) Method for processing at least one disparity map, corresponding electronic device and computer program product
CN111527518B (en) Method for processing light field video based on use of hyper-ray representations
US20140292748A1 (en) System and method for providing stereoscopic image by adjusting depth value
US9967551B2 (en) Method for displaying a 3D content on a multi-view display device, corresponding multi-view display device and computer program product
CN104185005A (en) Image processing apparatus and image processing method
EP3099077B1 (en) Method for displaying a content from 4d light field data
CN111200759B (en) Playing control method, device, terminal and storage medium of panoramic video
US20240209843A1 (en) Scalable voxel block selection
CN118212146A (en) Image fusion method and device, storage medium and electronic equipment
WO2017198766A1 (en) Method for modifying mal-exposed pixel values comprised in sub-aperture images obtained from a 4d raw light field

Legal Events

Date Code Title Description
AS Assignment

Owner name: THOMSON LICENSING, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ELUAED, MARC;MONSIFROT, ANTOINE;HEEN, OLIVIER;REEL/FRAME:042149/0898

Effective date: 20160905

AS Assignment

Owner name: INTERDIGITAL CE PATENT HOLDINGS, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:047332/0511

Effective date: 20180730

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: INTERDIGITAL MADISON PATENT HOLDINGS, SAS, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERDIGITAL CE PATENT HOLDINGS,SAS;REEL/FRAME:053061/0025

Effective date: 20190911

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: INTERDIGITAL CE PATENT HOLDINGS, SAS, FRANCE

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY NAME FROM INTERDIGITAL CE PATENT HOLDINGS TO INTERDIGITAL CE PATENT HOLDINGS, SAS. PREVIOUSLY RECORDED AT REEL: 47332 FRAME: 511. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:066703/0509

Effective date: 20180730