WO2017198766A1 - Method for modifying mal-exposed pixel values comprised in sub-aperture images obtained from a 4d raw light field - Google Patents

Method for modifying mal-exposed pixel values comprised in sub-aperture images obtained from a 4D raw light field

Info

Publication number
WO2017198766A1
Authority
WO
WIPO (PCT)
Prior art keywords
mal-exposed, region, modifying, sub-aperture
Application number
PCT/EP2017/061967
Other languages
French (fr)
Inventor
Joaquin ZEPEDA SALVATIERRA
Mozhdeh Seifi
Fatma HAWARY
Original Assignee
Thomson Licensing
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2017198766A1 publication Critical patent/WO2017198766A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10052 Images from lightfield camera

Definitions

  • the information medium may be a transmissible carrier such as an electrical or optical signal that can be conveyed through an electrical or optical cable, by radio or by other means.
  • the program can especially be downloaded over an Internet-type network.
  • the information medium can be an integrated circuit into which the program is incorporated, the circuit being adapted to executing or being used in the execution of the method in question.
  • an embodiment of the disclosure is implemented by means of software and/or hardware components.
  • the term "module" can correspond in this document both to a software component and to a hardware component, or to a set of hardware and software components.
  • a software component corresponds to one or more computer programs, one or more sub-programs of a program, or more generally to any element of a program or a software program capable of implementing a function or a set of functions according to what is described here below for the module concerned.
  • One such software component is executed by a data processor of a physical entity (terminal, server, etc.) and is capable of accessing the hardware resources of this physical entity (memories, recording media, communications buses, input/output electronic boards, user interfaces, etc.).
  • a hardware component corresponds to any element of a hardware unit capable of implementing a function or a set of functions according to what is described here below for the module concerned.
  • the hardware component comprises a processor that is an integrated circuit such as a central processing unit, and/or a microprocessor, and/or an Application-specific integrated circuit (ASIC), and/or an Application-specific instruction-set processor (ASIP), and/or a graphics processing unit (GPU), and/or a physics processing unit (PPU), and/or a digital signal processor (DSP), and/or an image processor, and/or a coprocessor, and/or a floating-point unit, and/or a network processor, and/or an audio processor, and/or a multi-core processor.
  • the hardware component can also comprise a baseband processor (comprising for example memory units, and a firmware) and/or radio electronic circuits (that can comprise antennas) which receive or transmit radio signals.
  • the hardware component is compliant with one or more standards such as ISO/IEC 18092 / ECMA-340, ISO/IEC 21481 / ECMA-352, GSMA, StoLPaN, ETSI / SCP (Smart Card Platform), GlobalPlatform (i.e. a secure element).
  • the hardware component is a Radio-frequency identification (RFID) tag.
  • a hardware component comprises circuits that enable Bluetooth communications, and/or Wi-Fi communications, and/or ZigBee communications, and/or USB communications and/or FireWire communications and/or NFC (for Near Field Communication) communications.
  • a step of obtaining an element/value in the present document can be viewed either as a step of reading such element/value in a memory unit of an electronic device or a step of receiving such element/value from another electronic device via communication means.
  • an electronic device for modifying mal-exposed pixel values comprised in sub-aperture images, from a matrix of images obtained from a 4D raw light field is proposed.
  • the electronic device is remarkable in that it comprises:
  • - a module configured to obtain at least one region partially surrounding at least one determined mal-exposed region in at least one sub-aperture image;
  • - a module configured to determine a set of sub-aperture images, wherein each sub-aperture image comprises a co-located region compared to said at least one determined mal-exposed region, where said co-located regions comprise at least a part of well-exposed pixels;
  • - a module configured to determine, for each co-located region of said set, a weight to be applied on pixels of said co-located region, said module taking into account only pixel values from said at least one region, and pixel values from other regions, each of said other regions partially surrounding said co-located region in said set;
  • - a module configured to modify mal-exposed pixels in said at least one determined mal-exposed region by combining weighted pixel values from said co-located regions of said set.
  • the electronic device for modifying mal-exposed pixel values is remarkable in that said combining corresponds to a linear combination. In a variant, the electronic device for modifying mal-exposed pixel values is remarkable in that said combining corresponds to a non-linear combination.
  • the electronic device for modifying mal-exposed pixel values is remarkable in that said module configured to determine a weight is further configured to solve an optimization problem between said pixels from said at least one region, and pixels from other regions.
  • the electronic device for modifying mal-exposed pixel values is remarkable in that said optimization problem involves a minimum mean square error (MMSE) estimator computation.
  • the electronic device for modifying mal-exposed pixel values is remarkable in that said optimization problem further involves a penalty term.
  • the electronic device for modifying mal-exposed pixel values is remarkable in that said at least one region, and said other regions completely surround respectively said determined mal-exposed region and said co-located regions.
  • Figure 1 presents schematically the main components comprised in a plenoptic camera that enables the acquisition of light-field data on which the present technique can be applied;
  • Figure 2 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied;
  • Figure 3 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied;
  • Figure 4 presents an example of device that can be used to perform one or several steps of a method for modifying mal-exposed pixel values comprised in sub-aperture images described in the present document.
  • scalars, vectors and matrices in the present document are denoted by using, respectively, standard, bold, and uppercase bold typeface (e.g., scalar a, vector a and matrix A).
  • v_k (in bold) denotes a vector from a sequence v_1, v_2, …, v_N;
  • v_k denotes the k-th coefficient of vector v;
  • [a_k]_k denotes the concatenation of the vectors a_k (or of the scalars a_k) to form a single column vector.
  • the following terms are used in the present document:
  • Mal-exposed A pixel of an image or view is said to be mal-exposed or clipped if the light impinging on the sensor at that pixel location was outside the dynamic range of the sensor. In practice, we assume all pixels having values at the extrema of the dynamic range of the quantized sensor output (e.g., 0 or 255 for 8-bit representations) to be mal-exposed.
  • Well-exposed Pixels that are not mal-exposed are said to be well-exposed.
  • Support A set of positions.
  • for example, the support of a table in the image of a room is the set of all (x, y) coordinates corresponding to pixels of the table.
  • Image of support Given a set of pixel positions in a given image/view, its image in a different image/view is the support, in that image/view, corresponding to the same 3D point.
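By way of illustration only (code is not part of the patent disclosure), the definition of mal-exposed pixels above can be sketched in a few lines of Python; the function name and the 8-bit bounds are illustrative assumptions:

```python
import numpy as np

def mal_exposed_mask(view, low=0, high=255):
    """Boolean mask of mal-exposed (clipped) pixels: values sitting at the
    extrema of the quantized sensor range (0 or 255 for 8-bit data)."""
    view = np.asarray(view)
    return (view <= low) | (view >= high)

# A tiny 8-bit view with one over-exposed and one under-exposed pixel.
view = np.array([[12, 255, 80],
                 [0, 130, 200]], dtype=np.uint8)
mask = mal_exposed_mask(view)
print(int(mask.sum()))  # 2
```

The set of True positions in such a mask is exactly the support R_i of a mal-exposed region in the terminology above.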
  • Figure 1 present schematically the main components comprised in a plenoptic camera that enables the acquisition of light field data on which the present technique can be applied.
  • a plenoptic camera comprises a main lens referenced 101, and a sensor array (i.e. an array of pixel sensors (for example a sensor based on CMOS technology))., referenced 104. Between the main lens 101 and the sensor array 104, a microlens array referenced 102, that comprises a set of micro lenses referenced 103, is positioned. It should be noted that optionally some spacers might be located between the micro-lens array around each lens and the sensor to prevent light from one lens to overlap with the light of other lenses at the sensor side.
  • the main lens 101 can be a more complex optical system as the one depicted in Figure 1 (as for example the optical system described in Figures 12 and 13 of document GB2488905)
  • a plenoptic camera can be viewed as a conventional camera plus a micro-lens array set just in front of the sensor as illustrated in Figure 1.
  • the light rays passing through a micro-lens cover a part of the sensor array that records the radiance of these light rays.
  • the recording by this part of the sensor defines a micro-lens image.
  • Figure 2 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied.
  • each (u, v) pair indicates a square that contains an image view.
  • let R_i denote a set of mal-exposed pixel positions in image/view i that we wish to inpaint.
  • the positions of R_i may be connected, meaning that, for any two positions in R_i, there exists a path of pixel positions between these two positions such that all positions on the path are also in R_i.
  • regions R_i that are not connected can be processed in the same way as described below or, alternatively, they can be separated into multiple connected regions and processed independently.
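The separation of a non-connected region into connected components can be sketched as a simple flood fill over pixel positions (an illustrative sketch; the patent does not fix the connectivity, 4-connectivity is assumed here):

```python
from collections import deque

def split_connected(region):
    """Split a set of (x, y) pixel positions (the support R_i) into
    4-connected components, so each can be inpainted independently."""
    remaining = set(region)
    components = []
    while remaining:
        seed = remaining.pop()
        comp, queue = {seed}, deque([seed])
        while queue:                      # breadth-first flood fill
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in remaining:
                    remaining.remove(nb)
                    comp.add(nb)
                    queue.append(nb)
        components.append(comp)
    return components

# Two clearly separate clipped blobs:
region = {(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)}
parts = split_connected(region)
print(len(parts))  # 2
```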
  • the weights b are determined so that the vector x_i of surrounding pixels satisfies x_i ≈ X_i b, where X_i ≜ [x_k]_k stacks, as columns, the corresponding vectors x_k from the other views.
  • the Ridge regression regularization is used, which leads to a ridge regression problem: b = argmin_a ‖x_i − X_i a‖₂² + λ‖a‖₂².
  • adding a sparsity-inducing penalty (weighted by the hyper-parameter τ introduced below) has the advantage that it produces sparse vectors b and, accordingly, uses only a subset of the columns of X_i, making the method robust to distortions that might be present only in some images/views and not in others.
  • the selection of the hyper-parameters λ and τ in these equations needs to be done using a training set T of light-field images.
  • when taking a picture, the camera first detects the scene conditions and then chooses the corresponding set of hyper-parameters.
  • the scene conditions can include lighting conditions (how bright the scene is), scene texture (is it a water scene, or an indoor scene), and others.
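The ridge regression step described above can be sketched as follows, assuming the surrounding-region pixels are stacked as columns of X_i (function name, data sizes and the λ value are illustrative assumptions, not the patent's reference implementation):

```python
import numpy as np

def ridge_weights(x_i, X_i, lam=1e-2):
    """Closed-form ridge solution of b = argmin_a ||x_i - X_i a||^2 + lam ||a||^2.

    x_i : vectorized pixels of the region Q_i surrounding the mal-exposed
    region; X_i : matrix whose columns are the corresponding surrounding
    regions Q_k taken from the other sub-aperture views."""
    n = X_i.shape[1]
    # Regularized normal equations: (X^T X + lam I) b = X^T x_i
    return np.linalg.solve(X_i.T @ X_i + lam * np.eye(n), X_i.T @ x_i)

rng = np.random.default_rng(0)
X_i = rng.normal(size=(40, 3))      # 3 candidate views, 40 surround pixels
b_true = np.array([0.5, 0.3, 0.2])
x_i = X_i @ b_true                  # noiseless synthetic surround observations
b = ridge_weights(x_i, X_i, lam=1e-6)
print(np.round(b, 3))               # approximately [0.5, 0.3, 0.2]
```

With a tiny λ and noiseless data the recovered weights match the mixing used to build x_i; in practice λ (and τ, for the sparse variant) would come from the training set T mentioned above.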
  • let x′_k ∈ ℝ^|Q_k| and y′_k ∈ ℝ^|R_k| denote the vectors of first-order finite differences computed, respectively, over the supports Q_k and R_k.
  • Figure 3 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied.
  • Figure 3 presents an example partition of a mal-exposed region and its well-exposed support, enabling the usage of views with partially mal-exposed regions.
  • let f: ℝ^N → ℝ^N be a generic, dimension-preserving function. For example, it can be the element-wise square function.
  • a transformed matrix X′_i ≜ [f(x_k)]_k is then built.
  • This X′_i can be used in place of X_i in the previous equations to derive a new set of coefficients b.
  • Using a Y′_i built in a manner analogous to the determination of X′_i would allow us to again use the reconstruction equation ŷ_i = Y′_i b to obtain a reconstruction of y_i that is non-linear.
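One plausible reading of this non-linear variant can be sketched as follows: the columns x_k are passed through the element-wise function f before solving the ridge problem, and the co-located regions are transformed the same way before combining (f, λ, and the synthetic data are illustrative assumptions):

```python
import numpy as np

f = np.square                      # an element-wise, dimension-preserving f

rng = np.random.default_rng(2)
X = rng.uniform(0.1, 1.0, size=(30, 4))   # surround regions Q_k as columns
Y = rng.uniform(0.1, 1.0, size=(10, 4))   # co-located regions R_k as columns
w = np.array([0.4, 0.3, 0.2, 0.1])

# Observations generated non-linearly, so a plain linear fit on X would fail:
x_i = f(X) @ w

# Solve the same ridge problem with X' = f(X) used in place of X.
lam = 1e-9
Xp = f(X)
b = np.linalg.solve(Xp.T @ Xp + lam * np.eye(4), Xp.T @ x_i)

# Reconstruction with the analogously built Y' = f(Y): non-linear in Y.
y_hat = f(Y) @ b
print(np.allclose(y_hat, f(Y) @ w, atol=1e-6))  # True
```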
  • FIG. 4 presents an example of a device that can be used to perform one or several steps of a method disclosed in the present document.
  • Such a device, referenced 400, comprises a computing unit (for example a CPU, for "Central Processing Unit"), referenced 401, and one or more memory units (for example a RAM (for "Random Access Memory") block in which intermediate results can be stored temporarily during the execution of instructions of a computer program, or a ROM block in which, among other things, computer programs are stored, or an EEPROM ("Electrically-Erasable Programmable Read-Only Memory") block, or a flash block), referenced 402.
  • Computer programs are made of instructions that can be executed by the computing unit.
  • Such device 400 can also comprise a dedicated unit, referenced 403, constituting an input-output interface to allow the device 400 to communicate with other devices.
  • this dedicated unit 403 can be connected with an antenna (in order to perform contactless communications), or with serial ports (to carry "contact" communications).
  • the arrows in Figure 4 signify that the linked units can exchange data together, through buses for example.
  • the electronic device depicted in Figure 4 can be comprised in a camera device that is configured to capture images (i.e. a sampling of a light field). These images are stored on one or more memory units. Hence, these images can be viewed as bit stream data (i.e. a sequence of bits). Obviously, a bit stream can also be converted into a byte stream and vice versa.
  • the detection of a mal-exposed region can be done for example according to a method such as the one depicted in the article entitled "Color Clipping and Over-exposure Correction" by Abebe et al., published in Eurographics Symposium on Rendering - Experimental Ideas and Implementations, 2015.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

In one embodiment, a method is proposed for modifying mal-exposed pixel values comprised in sub-aperture images, from a matrix of images obtained from a 4D raw light field. The method is remarkable in that it comprises: - obtaining at least one region (Q_i) partially surrounding at least one determined mal-exposed region (R_i) in at least one sub-aperture image (view i); - determining a set of sub-aperture images, wherein each sub-aperture image comprises a co-located region (R_k, R_7, R_8) compared to said at least one determined mal-exposed region (R_i), where said co-located regions comprise at least a part of well-exposed pixels; - determining, for each co-located region of said set, a weight to be applied on pixels of said co-located region, said determining taking into account only pixel values from said at least one region (Q_i), and pixel values from other regions (Q_k, Q_7, …), each of said other regions partially surrounding said co-located region in said set; - modifying mal-exposed pixels in said at least one determined mal-exposed region (R_i) by combining weighted pixel values from said co-located regions of said set (R_k, R_7, R_8).

Description

Method for modifying mal-exposed pixel values comprised in sub-aperture images obtained from a 4D raw light field
Technical Field
The disclosure relates to 4D light field data processing. More precisely, the disclosure relates to a technique for correcting or at least reducing the color artifacts appearing/occurring after the execution of a de-mosaicking method on 4D light field data.
Background
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
The acquisition of 4D light-field data, which can be viewed as a sampling of a 4D light field (i.e. the recording of light rays, as explained in Figure 1 of the article "Understanding camera trade-offs through a Bayesian analysis of light field projections" by Anat Levin et al., published in the conference proceedings of ECCV 2008), is a very active research subject.
Indeed, compared to classical 2D images obtained from a camera, 4D light-field data enable a user to have access to more post-processing features that enhance the rendering of images and/or the interactivity with the user. For example, with 4D light-field data, it is possible to perform with ease refocusing of images a posteriori (i.e. refocusing with freely selected distances of focalization, meaning that the position of a focal plane can be specified/selected a posteriori), as well as changing slightly the point of view in the scene of an image. In order to acquire 4D light-field data, several techniques can be used. Especially, a plenoptic camera, as depicted in document WO 2013/180192 or in document GB 2488905, is able to acquire 4D light-field data. In the state of the art, there are several ways to represent (or define) 4D light-field data. Indeed, in Chapter 3.3 of the PhD dissertation entitled "Digital Light Field Photography" by Ren Ng, published in July 2006, three different ways to represent 4D light-field data are described. Firstly, 4D light-field data can be represented, when recorded by a plenoptic camera such as the one depicted in Figure 1 for example, by a collection of micro-lens images. 4D light-field data in this representation are named raw images (or 4D raw light-field data). Secondly, 4D light-field data can be represented by a set of sub-aperture images. A sub-aperture image corresponds to a captured image of a scene from a point of view, the point of view being slightly different between two sub-aperture images. These sub-aperture images give information about the parallax and depth of the imaged scene. Thirdly, 4D light-field data can be represented by a set of epipolar images (see for example the article entitled "Generating EPI Representations of 4D Light Fields with a Single Lens Focused Plenoptic Camera", by S. Wanner et al., published in the conference proceedings of ISVC 2011).
In order to obtain a set of sub-aperture images from 4D raw light-field data, one skilled in the art uses a demultiplexing method. Indeed, the demultiplexing process consists of reorganizing the pixels of the 4D raw light-field data in such a way that all pixels capturing the light rays with a certain angle of incidence are stored in the same image, creating the so-called views. Each view is a projection of the scene under a different angle. The set of views creates a block matrix (i.e. a matrix of images), where the central view/image stores the pixels capturing light rays perpendicular to the sensor. In fact, the angular information of the light rays is given by the relative pixel positions in the micro-lens images with respect to the micro-lens image centers.
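The demultiplexing process described above can be sketched as follows, under the idealized assumption that micro-lens images are n x n pixel blocks perfectly aligned with the sensor grid (a real camera requires the calibrated micro-lens image centers; all names are illustrative):

```python
import numpy as np

def demultiplex(raw, n):
    """Reorganize a raw plenoptic image into an n x n matrix of
    sub-aperture views: pixel (u, v) of every micro-lens image, i.e.
    every pixel capturing rays with the same angle of incidence, is
    gathered into view (u, v)."""
    H, W = raw.shape
    views = np.empty((n, n, H // n, W // n), dtype=raw.dtype)
    for u in range(n):
        for v in range(n):
            views[u, v] = raw[u::n, v::n]   # strided gather per angle
    return views

raw = np.arange(36).reshape(6, 6)   # toy 6x6 sensor: 3x3 grid of 2x2 micro-images
views = demultiplex(raw, 2)
print(views.shape)  # (2, 2, 3, 3)
```

Each `views[u, v]` is one sub-aperture image of the block matrix; `views[n // 2, n // 2]` would be the central view.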
However, the obtained sub-aperture images can be altered by the vignetting effect induced by the main lens and the micro-lenses of a plenoptic camera. Indeed, due to the vignetting effect, there is a difference in illumination across the matrix of images, i.e. the peripheral sub-aperture views are poorly illuminated. This difference in illumination causes color artifacts in de-mosaicking methods like the one presented in the article entitled "Disparity-guided demosaicking of light field images", by M. Seifi et al., published in ICIP 2014. A similar issue can occur when the obtained sub-aperture images comprise clipped regions/areas/zones (i.e. regions corresponding to either very bright regions or dark regions that comprise pixel intensity values corresponding to the extrema of the format range). As previously, color artifacts after the execution of de-mosaicking methods can occur in that case.
Hence, there is a need for a technique that could reduce or avoid color artifacts when processing (i.e. applying a de-mosaicking method to) a set of sub-aperture images.
Summary of the disclosure
References in the specification to "one embodiment", "an embodiment", "an example embodiment", indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In one embodiment of the disclosure, a method is proposed for modifying mal-exposed pixel values comprised in at least one sub-aperture image, said at least one sub-aperture image being comprised in a matrix of sub-aperture images obtained from a 4D raw light field. The method is remarkable in that it comprises: modifying mal-exposed pixel values comprised in at least one mal-exposed region (R_i) of said at least one sub-aperture image, by combining weighted pixel values from co-located regions (R_k, R_7, R_8), comprising at least a part of well-exposed pixels, in other sub-aperture images comprised in said matrix; wherein the method comprises determining weights used in said combining, wherein said determining takes into account only pixel values from a region (Q_i) partially surrounding said at least one mal-exposed region (R_i), and pixel values from other regions (Q_k, Q_7, …), partially surrounding said co-located regions (R_k, R_7, R_8).
In a variant, the present disclosure is directed to a method for modifying mal-exposed pixel values comprised in sub-aperture images, from a matrix of images obtained from a 4D raw light field.
The method is remarkable in that it comprises: - obtaining at least one region partially surrounding at least one determined mal-exposed region in at least one sub-aperture image;
- determining a set of sub-aperture images, wherein each sub-aperture image comprises a co-located region compared to said at least one determined mal-exposed region, where said co-located regions comprise at least a part of well-exposed pixels; - determining, for each co-located region of said set, a weight to be applied on pixels of said co-located region, said determining taking into account only pixel values from said at least one region, and pixel values from other regions, each of said other regions partially surrounding said co-located region in said set;
- modifying mal-exposed pixels in said at least one determined mal-exposed region by combining weighted pixel values from said co-located regions of said set.
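Assuming a linear combination, the four steps above can be sketched end-to-end on synthetic data (all names, sizes and the regularization value are illustrative, not the patent's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 3 candidate views, surround regions Q of 50 pixels,
# mal-exposed region R of 20 pixels.
Q_others = rng.uniform(0, 1, size=(50, 3))   # step 2: regions Q_k as columns
R_others = rng.uniform(0, 1, size=(20, 3))   # co-located regions R_k as columns
w = np.array([0.6, 0.3, 0.1])                # hidden ground-truth mixing
Q_i = Q_others @ w                           # step 1: region Q_i around R_i
y_true = R_others @ w                        # unknown well-exposed values of R_i

# Step 3: determine the weights from the surrounding regions only
# (ridge with a tiny regularizer, solved via normal equations).
lam = 1e-8
b = np.linalg.solve(Q_others.T @ Q_others + lam * np.eye(3), Q_others.T @ Q_i)

# Step 4: modify the mal-exposed region by the weighted combination.
y_hat = R_others @ b
print(np.max(np.abs(y_hat - y_true)) < 1e-6)  # True
```

The key point the sketch illustrates is that the weights are fitted exclusively on the surrounding (well-exposed) pixels and then transferred to the co-located regions.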
In a preferred embodiment, the method is remarkable in that said combining corresponds to a linear combination.
In a preferred embodiment, the method is remarkable in that said combining corresponds to a non-linear combination.

In a preferred embodiment, the method for modifying mal-exposed pixel values is remarkable in that said determining comprises solving an optimization problem between said pixels from said at least one region, and pixels from other regions.
In a preferred embodiment, the method for modifying mal-exposed pixel values is remarkable in that said optimization problem involves a minimum mean square error (MMSE) estimator computation.

In a preferred embodiment, the method for modifying mal-exposed pixel values is remarkable in that said optimization problem further involves a penalty term.
In a preferred embodiment, the method for modifying mal-exposed pixel values is remarkable in that said at least one region and said other regions completely surround, respectively, said determined mal-exposed region and said co-located regions.
According to an exemplary implementation, the different steps of the method are implemented by a computer software program or programs, this software program comprising software instructions designed to be executed by a data processor of a relay module according to the disclosure and being designed to control the execution of the different steps of this method.
Consequently, an aspect of the disclosure also concerns a program liable to be executed by a computer or by a data processor, this program comprising instructions to command the execution of the steps of a method as mentioned here above.
This program can use any programming language whatsoever and be in the form of a source code, object code or code that is intermediate between source code and object code, such as in a partially compiled form or in any other desirable form.
The disclosure also concerns an information medium readable by a data processor and comprising instructions of a program as mentioned here above.
The information medium can be any entity or device capable of storing the program. For example, the medium can comprise a storage means such as a ROM (which stands for "Read Only Memory"), for example a CD-ROM (which stands for "Compact Disc - Read Only Memory") or a microelectronic circuit ROM or again a magnetic recording means, for example a floppy disk or a hard disk drive.
Furthermore, the information medium may be a transmissible carrier such as an electrical or optical signal that can be conveyed through an electrical or optical cable, by radio or by other means. The program can be especially downloaded into an Internet-type network. Alternately, the information medium can be an integrated circuit into which the program is incorporated, the circuit being adapted to executing or being used in the execution of the method in question.
According to one embodiment, the disclosure is implemented by means of software and/or hardware components. From this viewpoint, the term "module" can correspond in this document both to a software component and to a hardware component or to a set of hardware and software components.
A software component corresponds to one or more computer programs, one or more sub-programs of a program, or more generally to any element of a program or a software program capable of implementing a function or a set of functions according to what is described here below for the module concerned. One such software component is executed by a data processor of a physical entity (terminal, server, etc.) and is capable of accessing the hardware resources of this physical entity (memories, recording media, communications buses, input/output electronic boards, user interfaces, etc.). Similarly, a hardware component corresponds to any element of a hardware unit capable of implementing a function or a set of functions according to what is described here below for the module concerned. It may be a programmable hardware component or a component with an integrated circuit for the execution of software, for example an integrated circuit, a smart card, a memory card, an electronic board for executing firmware etc. In a variant, the hardware component comprises a processor that is an integrated circuit such as a central processing unit, and/or a microprocessor, and/or an Application-specific integrated circuit (ASIC), and/or an Application-specific instruction-set processor (ASIP), and/or a graphics processing unit (GPU), and/or a physics processing unit (PPU), and/or a digital signal processor (DSP), and/or an image processor, and/or a coprocessor, and/or a floating-point unit, and/or a network processor, and/or an audio processor, and/or a multi-core processor. Moreover, the hardware component can also comprise a baseband processor (comprising for example memory units, and a firmware) and/or radio electronic circuits (that can comprise antennas) which receive or transmit radio signals. 
In one embodiment, the hardware component is compliant with one or more standards such as ISO/IEC 18092 / ECMA-340, ISO/IEC 21481 / ECMA-352, GSMA, StoLPaN, ETSI / SCP (Smart Card Platform), GlobalPlatform (i.e. a secure element). In a variant, the hardware component is a Radio-frequency identification (RFID) tag. In one embodiment, a hardware component comprises circuits that enable Bluetooth communications, and/or Wi-Fi communications, and/or ZigBee communications, and/or USB communications and/or FireWire communications and/or NFC (for "Near Field Communication") communications.
It should also be noted that a step of obtaining an element/value in the present document can be viewed either as a step of reading such element/value in a memory unit of an electronic device or a step of receiving such element/value from another electronic device via communication means.
In another embodiment, an electronic device is proposed for modifying mal-exposed pixel values comprised in sub-aperture images, from a matrix of images obtained from a 4D raw light field. The electronic device is remarkable in that it comprises:
- a module configured to obtain at least one region partially surrounding at least one determined mal-exposed region in at least one sub-aperture image;
- a module configured to determine a set of sub-aperture images, wherein each sub-aperture image comprises a co-located region compared to said at least one determined mal-exposed region, where said co-located regions comprise at least a part of well-exposed pixels;
- a module configured to determine, for each co-located region of said set, a weight to be applied on pixels of said co-located region, said module taking into account only pixel values from said at least one region, and pixel values from other regions, each of said other regions partially surrounding said co-located region in said set;
- a module configured to modify mal-exposed pixels in said at least one determined mal-exposed region by combining weighted pixel values from said co-located regions of said set.

In a variant, the electronic device for modifying mal-exposed pixel values is remarkable in that said combining corresponds to a linear combination.

In a variant, the electronic device for modifying mal-exposed pixel values is remarkable in that said combining corresponds to a non-linear combination.
In a variant, the electronic device for modifying mal-exposed pixel values is remarkable in that said module configured to determine a weight is further configured to solve an optimization problem between said pixels from said at least one region, and pixels from other regions.
In a variant, the electronic device for modifying mal-exposed pixel values is remarkable in that said optimization problem involves a minimum mean square error (MMSE) estimator computation.
In a variant, the electronic device for modifying mal-exposed pixel values is remarkable in that said optimization problem further involves a penalty term.
In a variant, the electronic device for modifying mal-exposed pixel values is remarkable in that said at least one region, and said other regions completely surround respectively said determined mal-exposed region and said co-located regions.
Brief description of the drawings
The above and other aspects of the invention will become more apparent by the following detailed description of exemplary embodiments thereof with reference to the attached drawings in which:
Figure 1 presents schematically the main components comprised in a plenoptic camera that enables the acquisition of light field data on which the present technique can be applied;
Figure 2 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied;
Figure 3 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied;

Figure 4 presents an example of a device that can be used to perform one or several steps of a method for modifying mal-exposed pixel values comprised in sub-aperture images described in the present document.
Detailed description

The scalars, vectors and matrices in the present document are denoted by using, respectively, standard, bold, and uppercase bold typeface (e.g., scalar a, vector a and matrix A). Moreover, the term v_k denotes a vector from a sequence v_1, v_2, ..., v_N, and v_k denotes the k-th coefficient of vector v. We let [a_k]_k (respectively, [a_k]_k) denote the concatenation of the vectors a_k (respectively, of the scalars a_k) to form a single column vector. Moreover, the following terms are used in the present document:
Mal-exposed: A pixel of an image or view is said to be mal-exposed or clipped if the light impinging on the sensor at that pixel location was outside the dynamic range of the sensor. In practice, we assume all pixels having values at the extrema of the dynamic range of the quantized sensor output (e.g., 0 or 255, for 8-bit representations) to be mal-exposed.

Well-exposed: Pixels that are not mal-exposed are said to be well-exposed.
Support: A set of positions. For example, the support of a table in the image of a room is the set of all (x, y) coordinates corresponding to pixels of the table.
Image of support: Given a set of pixel positions in a given image/view, its image in a different image/view is the support, in that image/view, corresponding to the same 3D point.

Figure 1 presents schematically the main components comprised in a plenoptic camera that enables the acquisition of light field data on which the present technique can be applied.
More precisely, a plenoptic camera comprises a main lens referenced 101, and a sensor array (i.e. an array of pixel sensors (for example a sensor based on CMOS technology)), referenced 104. Between the main lens 101 and the sensor array 104, a micro-lens array referenced 102, that comprises a set of micro-lenses referenced 103, is positioned. It should be noted that optionally some spacers might be located between the micro-lens array and the sensor, around each lens, to prevent light from one lens from overlapping with the light of other lenses at the sensor side. It should be noted that the main lens 101 can be a more complex optical system than the one depicted in Figure 1 (as for example the optical system described in Figures 12 and 13 of document GB2488905). Hence, a plenoptic camera can be viewed as a conventional camera plus a micro-lens array set just in front of the sensor as illustrated in Figure 1. The light rays passing through a micro-lens cover a part of the sensor array that records the radiance of these light rays. The recording by this part of the sensor defines a micro-lens image.
Figure 2 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied.
In such a set of sub-aperture images, each (u, v) pair indicates a square that contains an image/view. For notational convenience, a running index k = 0, ..., 8 is used to denote each image/view. All white parts (or regions) of each image/view are well-exposed, whereas gray parts (or regions) are considered as mal-exposed. We let R_i denote a set of mal-exposed pixel positions in image/view i that we wish to inpaint. The positions of R_i may be connected, meaning that, for any two positions in R_i, there exists a path of pixel positions between these two positions such that all positions on the path are also in R_i. We illustrate this setup in Figure 2.
Regions R_i that are not connected can be processed in the same way as described below or, alternatively, they can be separated into multiple connected regions and processed independently.
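As an illustrative sketch of the per-pixel mal-exposed test defined earlier (values at the extrema of the quantized dynamic range), assuming 8-bit samples held in NumPy arrays; the function names are our own, not part of the disclosure:

```python
import numpy as np

def mal_exposed_mask(view, lo=0, hi=255):
    """Flag pixels whose quantized values sit at the extrema of the
    sensor's dynamic range (clipped low or high) as mal-exposed."""
    return (view <= lo) | (view >= hi)

def well_exposed_mask(view, lo=0, hi=255):
    """Pixels that are not mal-exposed are well-exposed."""
    return ~mal_exposed_mask(view, lo, hi)
```

Connected components of such a mask would then yield the regions R_i to inpaint.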
Given the mal-exposed region R_i, we first choose a support Q_i corresponding to well-exposed pixels from the same image/view i. One possible approach is to let Q_i denote an n-pixel-thick border of R_i, or at least pixels that are very close to R_i, since pixel correlation increases with spatial proximity. Using this approach, we can obtain, for each image/view k ≠ i, the images Q_k and R_k of Q_i and R_i. It will be the case that some of these images will consist entirely of well-exposed pixels, and we let W denote the set of corresponding image/view indices k. For example, in Figure 2, W = {0, 1, 3, 5, 6, 7, 8}.
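One possible way to build such an n-pixel-thick support Q_i from a boolean mask of R_i is repeated 4-connected dilation, as in this hypothetical pure-NumPy sketch:

```python
import numpy as np

def border_support(region_mask, n=1):
    """Return Q_i: an n-pixel-thick band of positions just outside the
    mal-exposed region R_i (4-connected dilation minus the region itself)."""
    band = region_mask.copy()
    for _ in range(n):
        grown = band.copy()
        grown[1:, :] |= band[:-1, :]   # neighbour above
        grown[:-1, :] |= band[1:, :]   # neighbour below
        grown[:, 1:] |= band[:, :-1]   # neighbour to the left
        grown[:, :-1] |= band[:, 1:]   # neighbour to the right
        band = grown
    return band & ~region_mask
```

The thickness n trades off the amount of training data against the spatial-proximity correlation the disclosure relies on.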
We now let x_k ∈ ℝ^|Q_k| and y_k ∈ ℝ^|R_k| denote the vectors containing the pixel intensities corresponding to support Q_k and region R_k, respectively. We need to obtain an estimate ŷ_i for the missing values y_i, and we propose doing this by using the similar content y_k, k ∈ W. Since there are multiple such y_k, we need to find a way to derive a single estimate, and this is complicated by the fact that the content from one image/view to the other will be different due to vignetting and lighting changes related to differences in view perspective. Hence, in one embodiment of the disclosure, a linear combination of the candidates y_k, k ∈ W, is used, of the following form:

ŷ_i = Y_i b,

where Y_i = [y_k]_{k∈W} and b is the vector of mixing weights that needs to be determined.
The vector of mixing weights is determined by the use of a mean square error minimization using only pixels from the well-exposed regions Q_k. More precisely, in one embodiment of the disclosure, it is proposed to determine the vector b as follows:

b = argmin_a ||x_i − X_i a||₂²,

where X_i = [x_k]_{k∈W}.
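Under these definitions, the unregularized problem is an ordinary least-squares fit over the support pixels, and the reconstruction then applies the same weights to the co-located region pixels. A minimal NumPy sketch (function names are illustrative, not from the disclosure):

```python
import numpy as np

def mixing_weights(x_i, X_i):
    """b = argmin_a ||x_i - X_i a||_2^2, solved by least squares.
    Columns of X_i are the support vectors x_k, k in W."""
    b, *_ = np.linalg.lstsq(X_i, x_i, rcond=None)
    return b

def inpaint(Y_i, b):
    """y_hat_i = Y_i b: weighted combination of the co-located regions."""
    return Y_i @ b
```

The key point of the method is that b is fit only on well-exposed support pixels, then reused on the co-located region pixels.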
However, a common problem in estimation problems (such as the equation used in the determination of the vector b) is overfitting, which refers to the situation wherein the chosen weights represent well the training data x_i but generalize poorly to the unknown data y_i. In order to address this situation, it is proposed to regularize the weights b by restricting their possible values by means of an additive penalty term:

b = argmin_a ||x_i − X_i a||₂² + λH(a),

where the coefficient λ needs to be chosen empirically by means of cross-validation.
In a variant, ridge regression regularization is used. Indeed, in the case that the penalty term which is used is the ℓ₂ regularizer H(a) = ||a||₂², it leads to a ridge regression problem:

b = argmin_a ||x_i − X_i a||₂² + λ||a||₂².
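The ridge variant has a well-known closed form, b = (X_iᵀX_i + λI)⁻¹ X_iᵀx_i, which can be sketched as follows (NumPy; the function name is illustrative):

```python
import numpy as np

def ridge_weights(x_i, X_i, lam):
    """Closed-form solution of argmin_a ||x_i - X_i a||_2^2 + lam*||a||_2^2:
    solve (X_i^T X_i + lam*I) b = X_i^T x_i."""
    d = X_i.shape[1]
    return np.linalg.solve(X_i.T @ X_i + lam * np.eye(d), X_i.T @ x_i)
```

For λ = 0 this reduces to the unregularized least-squares fit; for λ > 0 the penalty shrinks the weight vector.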
In another variant, it is proposed to use the ℓ₁ regularizer H(a) = ||a||₁, which leads to the solving of the following problem:

b = argmin_a ||x_i − X_i a||₂² + λ||a||₁.
This approach has the advantage that it produces sparse vectors b and, accordingly, uses only a subset of the columns of X_i, making it robust to distortions that might be present only in some images/views and not in others.
In a variant, it is proposed to combine the two previously mentioned penalty terms, and this is referred to as an elastic net regularizer. Given some empirically chosen 0 < r < 1, the resulting problem is:

b = argmin_a ||x_i − X_i a||₂² + λ(r||a||₁ + (1 − r)||a||₂²).
The selection of the hyper-parameters λ and r in these equations needs to be done using a training set T of light-field images. The larger the training set, the better the hyper-parameters will work for generic images.
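One simple way to realize the cross-validation mentioned above is a grid search over candidate values of λ, scored by held-out squared error on the support pixels. A hypothetical sketch (NumPy, ridge penalty; the fold count and seed are arbitrary choices, not from the disclosure):

```python
import numpy as np

def ridge_fit(x, X, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ x)

def select_lambda(x, X, lambdas, n_folds=5, seed=0):
    """Choose lambda by n-fold cross-validation on the rows (pixel
    positions) of the well-exposed support: fit on the training folds,
    score squared error on the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, n_folds)
    scores = []
    for lam in lambdas:
        err = 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)
            b = ridge_fit(x[train], X[train], lam)
            err += np.sum((x[f] - X[f] @ b) ** 2)
        scores.append(err)
    return lambdas[int(np.argmin(scores))]
```

A camera would run such a search offline on a training set of light-field images, not per capture.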
Alternatively, one can collect multiple training sets {T_c}, c = 1, ..., C, one for each of C scene conditions, and learn a set of hyper-parameters {λ_c, r_c}, c = 1, ..., C, that are stored in the camera. When taking a picture, the camera first detects the scene conditions and then chooses the corresponding set of hyper-parameters. The scene conditions can include lighting conditions (how bright the scene is), scene texture (is it a water scene, or an indoor scene), and others. Given the hyper-parameters, the camera system needs to solve, after each image capture, the equations ŷ_i = Y_i b with b = argmin_a ||x_i − X_i a||₂² + λH(a). Alternatively, the mixing coefficients b can also be pre-computed and be made dependent on the scene conditions c, resulting in multiple such coefficient vectors {b_c}, c = 1, ..., C.
While this case would result in sub-optimal visual performance, it would reduce the complexity of the system.
It should be noted that, while in the previous discussion we have proposed applying inpainting methods on vectors x_i consisting of pixels of a matrix of images, it is likewise possible to compute n-th order finite differences and apply these methods to these vectors of finite differences. It is further possible to compute multiple such vectors depending on the direction of differentiation. For example, one vector can carry differences along the horizontal direction, and another one along the vertical direction.
For example, we can let x'_k ∈ ℝ^|Q_k| and y'_k ∈ ℝ^|R_k| denote the vectors of first-order finite differences computed, respectively, from well-exposed and mal-exposed pixels. Given these definitions, one can predict the missing first-order difference vector y'_i using the same methods described above (e.g., the optimization problem and its regularized variants), but using the difference vectors x'_k and y'_k instead of the pixel-intensity vectors x_k and y_k.
Given the estimate ŷ'_i of the missing finite-difference pixels y'_i, one can insert this back into the finite-difference view and carry out a finite integration to obtain an estimate ŷ_i of the mal-exposed pixels y_i.
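For the horizontal direction, the differencing and the finite integration that inverts it can be sketched as follows (NumPy; function names illustrative):

```python
import numpy as np

def horizontal_differences(img):
    """First-order finite differences along the horizontal direction."""
    return np.diff(img, axis=1)

def integrate_horizontally(first_column, diffs):
    """Finite integration: recover the image from its left-most column
    and its horizontal first-order differences (cumulative sum)."""
    return np.concatenate(
        [first_column[:, None], first_column[:, None] + np.cumsum(diffs, axis=1)],
        axis=1)
```

In the method above, the inpainting would replace the mal-exposed entries of the difference image before the integration step.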
The approach presented above can be adapted easily to exploit partially mal-exposed regions such as the region in the central image/view in Figure 2.
One possible way to do this relies on a partition of the mal-exposed region R_i = {R_i^1, ..., R_i^M}. To define the partition, we assign to each pixel p ∈ R_i a binary string d(p) = [d_1, ..., d_V] of length equal to the number of images/views V, with entries equal to 1 if the corresponding image/view is well-exposed at that position and 0 otherwise. Accordingly, the partition is chosen to be the smallest partition such that all pixels p ∈ R_i^k have the same binary string d(p). The well-exposed support Q_i in this case needs to be chosen so that all pixels indicated by this support are well-exposed in all images/views.
An example partition of the mal-exposed region R_i and an adequate well-exposed support Q_i is illustrated in Figure 3.
Figure 3 illustrates a set of sub-aperture images, on which an embodiment of the disclosure is applied.
More precisely, Figure 3 presents an example partition of the mal-exposed region and of the well-exposed support, enabling usage of views with partially mal-exposed regions.
Using this approach, the same method described above can be used to reconstruct each sub-region R_i^m of the partition of region R_i one at a time, using the same well-exposed support Q_i for all regions.
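The partition by binary string d(p) can be sketched as a grouping of pixel positions by their per-view exposure pattern (plain Python; the mask representation, a per-view mapping from position to a 0/1 well-exposed flag, is our assumption):

```python
def partition_by_exposure(positions, well_exposed_masks):
    """Partition the pixel positions of R_i by their binary string d(p):
    one entry per view, 1 where that view is well-exposed at p."""
    parts = {}
    for p in positions:
        d = tuple(int(mask[p]) for mask in well_exposed_masks)
        parts.setdefault(d, []).append(p)
    return parts
```

Each resulting sub-region is then inpainted using only the views whose flag in its string is 1.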
The approach described above can be extended to be non-linear in the pixels of the views. Let f_j: ℝ^N → ℝ^N be a generic, dimension-preserving function. For example, it can be the element-wise square function. We can build, from X_i = [x_k]_{k∈W}, a modified version X_i^j = [f_j(x_k)]_{k∈W}. We further concatenate many such modified versions to obtain X'_i = [X_i^j]_j.
This X'_i can be used in place of X_i in the previous equations to derive a new set of coefficients b. Using a Y'_i built in a manner analogous to the determination of X'_i would allow us to again use the equation ŷ_i = Y'_i b to obtain a reconstruction of y_i that is non-linear.
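The concatenation X'_i of element-wise transforms f_j can be sketched as follows (NumPy; the choice of functions is illustrative):

```python
import numpy as np

def expand_features(X_i, funcs):
    """X'_i: horizontal concatenation of element-wise transforms f_j(X_i),
    e.g. identity and element-wise square, giving a non-linear model
    while keeping the weight fit itself linear in the new coefficients."""
    return np.concatenate([f(X_i) for f in funcs], axis=1)
```

The weight vector b then has one entry per column of the expanded matrix, and the same (regularized) least-squares machinery applies unchanged.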
Figure 4 presents an example of a device that can be used to perform one or several steps of a method disclosed in the present document. Such a device, referenced 400, comprises a computing unit (for example a CPU, for "Central Processing Unit"), referenced 401, and one or more memory units (for example a RAM (for "Random Access Memory") block in which intermediate results can be stored temporarily during the execution of instructions of a computer program, or a ROM block in which, among other things, computer programs are stored, or an EEPROM ("Electrically-Erasable Programmable Read-Only Memory") block, or a flash block), referenced 402. Computer programs are made of instructions that can be executed by the computing unit. Such a device 400 can also comprise a dedicated unit, referenced 403, constituting an input-output interface to allow the device 400 to communicate with other devices. In particular, this dedicated unit 403 can be connected with an antenna (in order to perform contactless communication), or with serial ports (to carry "contact" communications). It should be noted that the arrows in Figure 4 signify that the linked units can exchange data together, for example through buses.
In an alternative embodiment, some or all of the steps of the method previously described can be implemented in hardware in a programmable FPGA ("Field Programmable Gate Array") component or an ASIC ("Application-Specific Integrated Circuit") component.

In an alternative embodiment, some or all of the steps of the method previously described can be executed on an electronic device comprising memory units and processing units such as the one disclosed in Figure 4.
In one embodiment of the disclosure, the electronic device depicted in Figure 4 can be comprised in a camera device that is configured to capture images (i.e. a sampling of a light field). These images are stored on one or more memory units. Hence, these images can be viewed as bit stream data (i.e. a sequence of bits). Obviously, a bit stream can also be converted into a byte stream and vice versa.
At last, it should be noted that the determination of a mal-exposed region (R_i) can be done for example according to a method such as the one depicted in the article entitled "Color Clipping and Over-exposure Correction" by Abebe et al., published in Eurographics Symposium on Rendering - Experimental Ideas and Implementations, 2015.

Claims

1. A method for modifying mal-exposed pixel values comprised in at least one sub-aperture image, said at least one sub-aperture image being comprised in a matrix of sub-aperture images obtained from a 4D raw light field, said method being characterized in that it comprises:
modifying mal-exposed pixel values comprised in at least one mal-exposed region (R_i) of said at least one sub-aperture image, by combining weighted pixel values from co-located regions (R_k, R_7, R_8), comprising at least a part of well-exposed pixels, in other sub-aperture images comprised in said matrix;
wherein said method comprises determining weights used in said combining, wherein said determining takes into account only pixel values from a region (Q_i) partially surrounding said at least one mal-exposed region (R_i), and pixel values from other regions (Q_k, Q_7, ...), each partially surrounding said co-located regions (R_k, R_7, R_8).
2. The method for modifying mal-exposed pixel values according to claim 1, wherein said combining corresponds to a linear combination.
3. The method for modifying mal-exposed pixel values according to claim 1, wherein said combining corresponds to a non-linear combination.
4. The method for modifying mal-exposed pixel values according to any of claims 1 to 3, wherein said determining comprises solving an optimization problem between pixels from said region (Q_i) partially surrounding said at least one mal-exposed region (R_i), and pixels from said other regions (Q_k, Q_7, ...).
5. The method for modifying mal-exposed pixel values according to claim 4, wherein said optimization problem involves a minimum mean square error (MMSE) estimator computation.
6. The method for modifying mal-exposed pixel values according to any of claims 4 to 5, wherein said optimization problem further involves a penalty term.
7. The method for modifying mal-exposed pixel values according to any of claims 1 to 6, wherein said region (Q_i) partially surrounding said at least one mal-exposed region (R_i), and said other regions (Q_k, Q_7, ...) completely surround respectively said mal-exposed region (R_i) and said co-located regions (R_k, R_7, R_8).
8. A computer-readable and non-transient storage medium storing a computer program comprising a set of computer-executable instructions to implement a method for processing sub-aperture images, said instructions, when they are executed by a computer, being able to configure the computer to perform a method for modifying mal-exposed pixel values comprised in sub-aperture images according to any of claims 1 to 7.
9. An electronic device for modifying mal-exposed pixel values comprised in at least one sub-aperture image, said at least one sub-aperture image being comprised in a matrix of sub-aperture images obtained from a 4D raw light field, the electronic device comprising a processor and at least one memory unit, the processor being coupled to said at least one memory unit, the processor being configured to:
modify mal-exposed pixel values comprised in at least one mal-exposed region (R_i) of said at least one sub-aperture image, by combining weighted pixel values from co-located regions (R_k, R_7, R_8), comprising at least a part of well-exposed pixels, in other sub-aperture images comprised in said matrix;
wherein the processor is further configured to determine weights used in said combining, by taking into account only pixel values from a region (Q_i) partially surrounding said at least one mal-exposed region (R_i), and pixel values from other regions (Q_k, Q_7, ...), each partially surrounding said co-located regions (R_k, R_7, R_8).
10. The electronic device for modifying mal-exposed pixel values according to claim 9, wherein said combining corresponds to a linear combination.
11. The electronic device for modifying mal-exposed pixel values according to claim 9, wherein said combining corresponds to a non-linear combination.
12. The electronic device for modifying mal-exposed pixel values according to any of claims 9 to 11, wherein the processor is further configured to solve an optimization problem between pixels from said region (Q_i) partially surrounding said at least one mal-exposed region (R_i), and pixels from said other regions (Q_k, Q_7, ...).
13. The electronic device for modifying mal-exposed pixel values according to claim 12, wherein said optimization problem involves a minimum mean square error (MMSE) estimator computation.
14. The electronic device for modifying mal-exposed pixel values according to any of claims 12 to 13, wherein said optimization problem further involves a penalty term.
15. The electronic device for modifying mal-exposed pixel values according to any of claims 9 to 14, wherein said region (Q_i) partially surrounding said at least one mal-exposed region (R_i), and said other regions (Q_k, Q_7, ...) completely surround respectively said mal-exposed region (R_i) and said co-located regions (R_k, R_7, R_8).
PCT/EP2017/061967 2016-05-18 2017-05-18 Method for modifying mal-exposed pixel values comprised in sub-aperture images obtained from a 4d raw light field WO2017198766A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16305574 2016-05-18
EP16305574.2 2016-05-18

Publications (1)

Publication Number Publication Date
WO2017198766A1 true WO2017198766A1 (en) 2017-11-23

Family

ID=56108589

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/061967 WO2017198766A1 (en) 2016-05-18 2017-05-18 Method for modifying mal-exposed pixel values comprised in sub-aperture images obtained from a 4d raw light field

Country Status (1)

Country Link
WO (1) WO2017198766A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2488905A (en) 2011-03-10 2012-09-12 Canon Kk Image pickup apparatus, such as plenoptic camera, utilizing lens array
WO2013180192A1 (en) 2012-05-31 2013-12-05 Canon Kabushiki Kaisha Information processing method, information processing apparatus, and program storage medium
US20140055646A1 (en) * 2012-08-27 2014-02-27 Canon Kabushiki Kaisha Image processing apparatus, method, and program, and image pickup apparatus having image processing apparatus
US20160073076A1 (en) * 2014-09-08 2016-03-10 Lytro, Inc. Saturated pixel recovery in light-field images


Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
ABEBE ET AL.: "Color Clipping and Over-exposure Correction", EUROGRAPHICS SYMPOSIUM ON RENDERING - EXPERIMENTAL IDEAS AND IMPLEMENTATIONS, 2015
ANAT LEVIN: "Understanding camera trade-offs through a Bayesian analysis of light field projections", CONFERENCE PROCEEDINGS OF ECCV, 2008
CRIMINISI A ET AL: "Object removal by exemplar-based inpainting", PROCEEDINGS / 2003 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 18 - 20 JUNE 2003, MADISON, WISCONSIN; [PROCEEDINGS OF THE IEEE COMPUTER CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION], LOS ALAMITOS, CALIF. [U.A, vol. 2, 18 June 2003 (2003-06-18), pages 721 - 728, XP010644808, ISBN: 978-0-7695-1900-5, DOI: 10.1109/CVPR.2003.1211538 *
GOLDLUECKE BASTIAN ET AL: "The Variational Structure of Disparity and Regularization of 4D Light Fields", IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. PROCEEDINGS, IEEE COMPUTER SOCIETY, US, 23 June 2013 (2013-06-23), pages 1003 - 1010, XP032492838, ISSN: 1063-6919, [retrieved on 20131002], DOI: 10.1109/CVPR.2013.134 *
M. SEIFI ET AL.: "Disparity-guided demosaicking of light field images", ICIP, 2014
REN NG: "Digital Light Field Photography", PhD dissertation, July 2006
S. WANNER: "Generating EPI Representations of 4D Light Fields with a Single Lens Focused Plenoptic Camera", CONFERENCE PROCEEDINGS OF ISVC, 2011
YATZIV L ET AL: "Lightfield completion", IMAGE PROCESSING, 2004. ICIP '04. 2004 INTERNATIONAL CONFERENCE ON SINGAPORE 24-27 OCT. 2004, PISCATAWAY, NJ, USA,IEEE, vol. 3, 24 October 2004 (2004-10-24), pages 1787 - 1790, XP010786109, ISBN: 978-0-7803-8554-2, DOI: 10.1109/ICIP.2004.1421421 *
ZHONGYUAN WANG ET AL: "Trilateral constrained sparse representation for Kinect depth hole filling", PATTERN RECOGNITION LETTERS., vol. 65, 1 November 2015 (2015-11-01), NL, pages 95 - 102, XP055384597, ISSN: 0167-8655, DOI: 10.1016/j.patrec.2015.07.025 *


Legal Events

Date Code Title Description

NENP Non-entry into the national phase (Ref country code: DE)

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17723428; Country of ref document: EP; Kind code of ref document: A1)

122 Ep: pct application non-entry in european phase (Ref document number: 17723428; Country of ref document: EP; Kind code of ref document: A1)