EP3162073A1 - Kompressionsbildgebung - Google Patents

Kompressionsbildgebung (Compressive Imaging)

Info

Publication number
EP3162073A1
Authority
EP
European Patent Office
Prior art keywords
compressive
matrix
array
measurements
aperture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15741635.5A
Other languages
English (en)
French (fr)
Inventor
Hong Jiang
Gang Huang
Paul Wilford
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alcatel Lucent SAS
Original Assignee
Alcatel Lucent SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/319,142 external-priority patent/US20150382026A1/en
Application filed by Alcatel Lucent SAS filed Critical Alcatel Lucent SAS
Publication of EP3162073A1 publication Critical patent/EP3162073A1/de
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/006Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3059Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
    • H03M7/3062Compressive sampling or sensing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/955Computational photography systems, e.g. light-field imaging systems for lensless imaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof

Definitions

  • This disclosure is directed to systems and methods for compressive sense image processing.
  • Digital image/video cameras acquire and process a significant amount of raw data.
  • The raw pixel data for each of the N pixels of an N-pixel image is first captured and then typically compressed using a suitable compression algorithm for storage and/or transmission.
  • While compression after capturing the raw data for each of the N pixels of the image is generally useful for reducing the size of the image (or video) captured by the camera, it requires significant computational resources and time.
  • Moreover, compression of the raw pixel data does not always meaningfully reduce the size of the captured images.
  • A more recent approach, known as compressive sense imaging, acquires compressed image (or video) data using random projections without first collecting the raw data for all of the N pixels of an N-pixel image. For example, a compressive measurement basis is applied to obtain a series of compressive measurements which represent the encoded (i.e., compressed) image. Since a reduced number of compressive measurements are acquired in comparison to the raw data for each of the N pixel values of a desired N-pixel image, this approach can significantly reduce or eliminate the need for applying compression after the raw data is captured.
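The contrast with conventional capture can be sketched numerically. In the toy example below, only M < N inner products of the scene against random on/off patterns are recorded, instead of all N pixel values; the scene, the patterns, and the sizes are illustrative stand-ins, not values from the disclosure.

```python
import random

random.seed(0)

N = 64   # pixels of the N-pixel image that is never captured directly
M = 16   # number of compressive measurements, M << N

# A hypothetical scene, flattened to N light values in [0, 1].
scene = [random.random() for _ in range(N)]

# One random 0/1 projection pattern per measurement.
patterns = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]

# Each compressive measurement is the inner product of the scene with
# one pattern -- only these M sums are ever stored or transmitted.
measurements = [sum(p * s for p, s in zip(pattern, scene))
                for pattern in patterns]
```

Only M = 16 values stand in for the 64-pixel image; reconstruction later recovers an estimate of all N pixels from these sums.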
  • Systems and methods for compressive sense imaging are provided.
  • incident light reflecting from an object and passing through an aperture array is detected by a sensor.
  • Intermediate compressive measurements are generated based on the output by the sensor using compressive sequence matrices that are determined based on the properties of the aperture array and the sensor.
  • the intermediate compressive measurements are further processed to generate compressive measurements representing the compressed image of the object.
  • An uncompressed image of the object is generated from the compressive measurements using a determined reconstruction matrix that is different from the sequence matrices used to acquire the intermediate compressive measurements .
  • a compressive sense imaging system and method includes generating a plurality of sequence matrices; determining a plurality of intermediate compressive measurements using the plurality of sequence matrices; and, generating a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements .
  • the system and method includes generating an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
  • the system and method includes determining a kernel matrix based on properties of an aperture array of aperture elements and a sensor, and, generating a sensing matrix using the kernel matrix and a reconstruction basis matrix.
  • the system and method includes decomposing the sensing matrix to generate the plurality of sequence matrices .
  • the system and method includes determining a sensitivity function for the sensor; determining at least one characteristic function for at least one of the aperture elements of the aperture array; computing a kernel function by performing a convolution operation using the sensitivity function and the at least one characteristic function; and, determining the kernel matrix using the kernel function and an image.
  • the system and method includes applying a sparsifying operator to generate the uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
  • the system and method includes selectively enabling or disabling one or more aperture elements of an aperture array based on at least one basis in a sequence matrix to determine at least one of the plurality of intermediate compressive measurements during a time period, where the at least one of the plurality of intermediate compressive measurements is determined based on an aggregated sum of light detected by the sensor during the time period.
  • the aperture array is an array of micro-mirrors. In some aspects, the aperture array is an array of LCD elements .
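A minimal sketch of one such aggregated reading, assuming an 8x8 array whose elements are either fully open (1.0) or fully closed (0.0) and a single sensor that sums all transmitted light; the light field and the open/closed pattern are invented for illustration.

```python
ROWS = COLS = 8

# Hypothetical light intensity arriving at each aperture element.
light = [[(r * COLS + c) % 5 / 4.0 for c in range(COLS)] for r in range(ROWS)]

# One compressive basis: 1.0 = fully open, 0.0 = fully closed
# (values between 0 and 1 would model partially open elements).
basis = [[1.0 if (r + c) % 2 == 0 else 0.0 for c in range(COLS)]
         for r in range(ROWS)]

# The sensor detects the aggregated sum of light transmitted by all
# enabled elements during the measurement period.
measurement = sum(light[r][c] * basis[r][c]
                  for r in range(ROWS) for c in range(COLS))
```

Changing the basis pattern changes the overall transmittance, and hence the single value the sensor reports for that time period.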
  • FIG. 1 illustrates an example of a compressive sense imaging system in accordance with various aspects of the disclosure.
  • FIG. 2 illustrates an example of a camera unit for acquiring compressive measurements of an object using a sequence matrix in accordance with one aspect of the disclosure .
  • FIG. 3 illustrates an example process for compressive sense imaging in accordance with various aspects of the disclosure.
  • FIG. 4 illustrates an example apparatus for implementing aspects of the disclosure.
  • The term "or" refers to a nonexclusive or, unless otherwise indicated (e.g., "or else" or "or in the alternative").
  • words used to describe a relationship between elements should be broadly construed to include a direct relationship or the presence of intervening elements unless otherwise indicated. For example, when an element is referred to as being “connected” or “coupled” to another element, the element may be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Similarly, words such as “between”, “adjacent”, and the like should be interpreted in a like fashion .
  • FIG. 1 illustrates a schematic example of a compressive imaging acquisition and reconstruction system 100 ("system 100") .
  • Incident light 105 reflecting from an object 110 is received by the camera unit 115, which generates a plurality of intermediate compressive measurements using a determined number of compressive sequence matrices 120.
  • the intermediate compressive measurements are further processed to generate compressive measurements 125 representing the compressed image of the object 110.
  • the compressive measurements 125 representing the compressed image of the object 110 may be stored (or transmitted) by a storage/transmission unit 130.
  • the reconstruction unit 135 generates an uncompressed image 140 (e.g., for display on a display unit) of the object 110 from the compressive measurements 125 using a determined reconstruction matrix 150.
  • While the units are shown separately in FIG. 1, this is merely to aid understanding of the disclosure. In other aspects, the functionality of any or all of the units described above may be implemented using a smaller or greater number of units. Furthermore, the functionality attributed to the various units may be implemented by a single processing device or distributed amongst multiple processing devices. Some examples of suitable processing devices include cameras, camera systems, mobile phones, personal computer systems, tablets, set-top boxes, smart phones, or any type of computing device configured to acquire, process, or output data.
  • a single processing device may be configured to provide the functionality of each of the units of system 100.
  • the single processing device may include, for example, a memory storing one or more instructions, and a processor for executing the one or more instructions, which, upon execution, may configure the processor to provide functionality ascribed to the units.
  • the single processing device may include other components typically found in computing devices, such as one or more input/output components for inputting or outputting information to/from the processing device, including a camera, a display, a keyboard, a mouse, network adapter, etc .
  • a local processing device may be provided at a first location that is communicatively interconnected with a remote processing device at a remote location via network.
  • the local processing device may be configured with the functionality to generate and provide the compressive measurements 125 of the local object 110 to a remote processing device over the network.
  • the remote processing device may be configured to receive the compressive measurements from the local processing device, to generate the reconstructed image 140 from the compressive measurements 125 using the reconstruction basis matrix 150, and to display the reconstructed image to a remote user in accordance with the aspects described below.
  • the local processing device and the remote processing device may be respectively implemented using an apparatus similar to the single processing device, and may include a memory storing one or more instructions, a processor for executing the one or more instructions, and various input/output components as in the case of the single processing device.
  • the network may be an intranet, the Internet, or any type or combination of one or more wired or wireless networks .
  • FIG. 2 illustrates an example of a lensless camera unit 115 for acquiring compressive measurements 125 representing the compressed image of the object 110 using compressive sense imaging. Although a particular embodiment of the lensless camera unit 115 is described, this is not to be construed as a limitation, and the principles of the disclosure may be applied to other embodiments of compressive sense imaging systems .
  • Incident light 105 reflected off the object 110 is received at the camera unit 115 where the light 105 is selectively permitted to pass through an aperture array 220 of N individual aperture elements and strike a sensor 230.
  • the camera unit 115 processes the output of the sensor 230 to produce intermediate compressive measurements using a plurality of sequence matrices that are determined based on one or more properties of the aperture array 220 and the sensor 230.
  • the compressive measurements 125 collectively represent the compressed image of the object 110 and are determined using the intermediate compressive measurements.
  • the number M of the compressive measurements 125 that are acquired as the compressed image of the object 110 is typically significantly less than the N raw data values that are acquired in a conventional camera system having an N-pixel sensor for generating an N-pixel image, thus reducing or eliminating the need for conventional compression of the raw data values after acquisition.
  • the number of compressive measurements M may be pre-selected relative to the N aperture elements of the array 220 based upon a desired balance between the level of compression and the quality of the N-pixel image 140 that is reconstructed using the M compressive measurements.
  • The first element in the first row of array 220 is exemplarily referenced as 220[1,1], and the last element in the last row of the array 220 is referenced as 220[8,8].
  • the size and format of the array 220 may vary, and the array may have a significantly greater (or smaller) number of elements, depending on the desired resolution of the image 140.
  • the overall transmittance of light 105 passing through the array 220 and reaching the sensor 230 at a given time may be varied by setting the transmittance of one or more of the individual aperture elements of the array.
  • the overall transmittance of array 220 may be adjusted by selectively and individually changing the transmittance of one or more of the aperture elements 220[1,1] to 220[8,8] to increase or decrease the amount of light 105 passing through the array 220 and reaching the sensor 230 at a given time.
  • aperture elements that are fully opened allow light 105 to pass through those opened elements and reach the sensor 230
  • aperture elements that are fully closed prevent or block light 105 from passing through the closed elements of the array 220 and reaching the photon detector 230
  • the aperture elements may be partially opened (or partially closed) to pass only some, but not all, of the light 105 to reach the sensor 230 via the partially opened (or partially closed) elements.
  • the collective state of the individual aperture elements determines the overall transmittance of the aperture array 220 and therefore determines the amount of light 105 reaching the sensor 230 at a given time.
  • the aperture array 220 is a micro-mirror array of N individually selectable micro- mirrors.
  • In other aspects, the aperture array 220 may be an N-element LCD array.
  • the aperture array 220 may be any suitable array of electronic or optical components having selectively controllable transmittance .
  • the camera unit 115 is configured to generate intermediate compressive measurements by selectively adjusting the overall transmittance of the aperture array 220 in accordance with compressive bases information in a plurality of sequence matrices.
  • Each of the intermediate compressive measurements may be understood as the determined sum (or aggregate) of the light 105 reaching the sensor 230 through the array 220 during a particular time when particular ones of the N aperture elements of the array 220 are selectively opened and closed (either fully or partially) in accordance with a pattern indicated by a particular compressive basis of a sequence matrix 120.
  • One feature of the present disclosure is that an M number of intermediate compressive measurements are acquired using each of an S number of sequence matrices that are determined as described further below. Since S ≥ 2, at least 2M intermediate compressive measurements are determined, which are processed into M compressive measurements 125 representing the compressed image of the object 110 as described further below. The M compressive measurements 125 are used in conjunction with the reconstruction matrix 150 to reconstruct or generate the uncompressed image 140 of the object 110.
  • the sequence matrices are determined based on a kernel function, where the kernel function is determined based on the properties of the array 220 and the sensor 230.
  • a determined sequence matrix 120 is a set of M compressive bases b_1, b_2, …, b_M, each of which is applied in turn to the array 220 to produce a respective one of M intermediate compressive measurements.
  • Each measurement basis b_1, b_2, …, b_M in the sequence matrix 120 is itself an array of N values corresponding to the number N of aperture elements of the array 220; that is, b_k = (b_k[1], b_k[2], …, b_k[N]).
  • For the 8x8 example, each compressive basis b_k (k ∈ [1…M]) of a given sequence matrix 120 is a set of values b_k[1] to b_k[64], where each value is normalized to the set [0,1] as described later below.
  • each value of a given compressive basis may be a "0", "1", or a real value between "0" and "1", which respectively determines the corresponding state (e.g., fully closed, fully opened, or a state in-between) of a respective aperture element in the 8x8 aperture array 220.
  • a given compressive basis b_k is applied to the array 220 to produce a corresponding intermediate compressive measurement for a time t_k as follows.
  • the respective values b_k[1] to b_k[64] are used to set the state (fully opened, fully closed, or partially opened or closed) of the corresponding elements of array 220, and the detected sum or aggregate of light 105 reaching the sensor 230 is determined as the value of the corresponding intermediate compressive measurement.
  • a total number of MxS intermediate compressive measurements are produced in this manner, where M is the number of compressive bases in each sequence matrix 120 and S is the number of sequence matrices (where S ≥ 2).
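The acquisition loop above can be sketched as follows, with invented sizes (N = 16 aperture elements, M = 4 bases per sequence matrix, S = 2 matrices) and an arbitrary [0,1]-valued basis scheme standing in for the determined sequence matrices:

```python
N, M, S = 16, 4, 2   # aperture elements, bases per matrix, matrices

# A hypothetical light field at the aperture plane, flattened to N values.
scene = [(i % 7) / 6.0 for i in range(N)]

# S sequence matrices, each a list of M bases with values in [0, 1].
sequence_matrices = [
    [[1.0 if (i + k + s) % 3 == 0 else 0.0 for i in range(N)]
     for k in range(M)]
    for s in range(S)
]

# Apply every basis of every sequence matrix in turn; each application
# yields one intermediate compressive measurement (the aggregated light).
intermediate = []
for matrix in sequence_matrices:
    for basis in matrix:
        intermediate.append(sum(b * x for b, x in zip(basis, scene)))
```

With S = 2 this produces exactly the 2M intermediate measurements described above.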
  • steps 302-308 describe the determination of the sequence matrices 120.
  • Step 310 describes determination of the compressive measurements 125 representing the compressed image of the object 110 from the intermediate compressive measurements acquired using the sequence matrices 120.
  • Step 312 describes generating the uncompressed image of the object 110 from the compressive measurements 125 using the reconstruction matrix 150.
  • the determination of the sequence matrices 120 begins in step 302 with the computation of an NxN kernel matrix K that is determined based on the geometry and properties of the array 220 and the sensor 230.
  • the kernel matrix may be determined as follows .
  • the kernel matrix is computed based on a sensitivity function for the sensor 230 and a characteristic function of the array 220.
  • a sensitivity function F(x, y) of the sensor 230 is determined, where F(x, y) is the response of the sensor 230 when light strikes a point x, y on the sensor in Cartesian coordinates.
  • the sensor 230 is selected such that it has a large sensing area and a uniform (or close to uniform) sensitivity function F(x, y), such that the sensor response (or, in other words, the sensor sensitivity) does not vary (or varies only slightly) based on where the light strikes the sensor.
  • the discrete kernel function may also be obtained by calibrating the camera unit 115 using a point lighting source (e.g., a laser source or another lighting source that is in effect a point lighting source with respect to the camera unit 115) .
  • The NxN kernel matrix K is computed from the discrete kernel function, where "1D" indicates the one-dimensional (1D) vector form of a 2D array and f is any N-pixel image.
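The convolution of the sensor's sensitivity function with an aperture element's characteristic function can be sketched with tiny illustrative grids; these are invented values, not calibrated camera data.

```python
def convolve2d(f, g):
    """Full 2-D discrete convolution of two small grids (pure Python)."""
    fr, fc, gr, gc = len(f), len(f[0]), len(g), len(g[0])
    out = [[0.0] * (fc + gc - 1) for _ in range(fr + gr - 1)]
    for i in range(fr):
        for j in range(fc):
            for u in range(gr):
                for v in range(gc):
                    out[i + u][j + v] += f[i][j] * g[u][v]
    return out

# Near-uniform sensor sensitivity F(x, y), as recommended above.
sensitivity = [[1.0, 1.0],
               [1.0, 0.9]]

# Characteristic function of one aperture element (1 inside, 0 outside).
characteristic = [[0.0, 1.0],
                  [1.0, 1.0]]

# Discrete kernel function: the convolution of the two.
kernel = convolve2d(sensitivity, characteristic)
```

In practice the same kernel function could instead be obtained by calibration with a point lighting source, as the description notes.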
  • In step 304, the determination of the sequence matrices 120 continues by specifying the reconstruction matrix 150.
  • The reconstruction matrix may be any MxN matrix that has a property suitable for use in compressive sense imaging, such as, for example, the Restricted Isometry Property.
  • In one aspect, the reconstruction matrix 150 is an MxN matrix whose rows are selected from a randomly or pseudo-randomly permuted NxN Hadamard matrix, which has the known properties that its entries are either +1 or -1 and its rows are mutually orthogonal.
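A sketch of building such a reconstruction matrix, using the Sylvester construction for the Hadamard matrix and a seeded pseudo-random permutation of its rows and columns; the sizes and the seed are arbitrary choices for illustration.

```python
import random

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n = 2^k)."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + \
            [row + [-x for x in row] for row in h]
    return h

def reconstruction_matrix(m, n, seed=0):
    """First m rows of a pseudo-randomly permuted n x n Hadamard matrix."""
    rng = random.Random(seed)
    h = hadamard(n)
    rng.shuffle(h)                       # permute rows pseudo-randomly
    cols = list(range(n))
    rng.shuffle(cols)                    # permute columns pseudo-randomly
    return [[h[i][j] for j in cols] for i in range(m)]

R = reconstruction_matrix(4, 8)
```

Row and column permutations preserve both the +1/-1 entries and the mutual orthogonality of the rows, the two properties the description relies on.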
  • In step 306, the determination of the sequence matrices 120 continues by computing an MxN sensing matrix A using the kernel matrix K determined in step 302 and the reconstruction matrix 150, where
  • R is the MxN reconstruction matrix computed in step 304
  • While the sensing matrix A is an MxN matrix that is determined based on the properties of the array 220 and the sensor 230, it is not suitable for use as a sequence matrix 120 directly. This is because, as will be apparent at least from the negative values of the reconstruction matrix R, one or more values a_ij of the sensing matrix A do not satisfy 0 ≤ a_ij ≤ 1. In fact, the sensing matrix A may include large negative and positive values, which are impractical (or perhaps not possible) to use as a pattern for setting the condition of the aperture elements of the array 220.
  • Accordingly, the sensing matrix A is further decomposed into the sequence matrices 120 that have values within the set [0,1] as follows. It is also noted that while the description below is provided for the sequence matrices to have values within the set [0,1], the disclosure below is applicable to decomposing the sensing matrix A to have values within other sets.
  • The values a_ij of the resulting sequence matrices A_k and Ā_k also satisfy 0 ≤ a_ij ≤ 1.
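One plausible way to decompose a sensing matrix with out-of-range values into [0,1]-valued sequence matrices is to split it into scaled positive and negative parts. This scheme and the matrix A below are invented for illustration; the patent's own decomposition may differ.

```python
# A hypothetical sensing matrix with large positive and negative values,
# unusable directly as an aperture pattern.
A = [[ 2.0, -1.5,  0.5],
     [-3.0,  1.0,  2.5]]

pos = [[max(v, 0.0) for v in row] for row in A]    # positive part of A
neg = [[max(-v, 0.0) for v in row] for row in A]   # negative part of A

# Scale factors ("or 1.0" guards against an all-zero part).
scale_pos = max(v for row in pos for v in row) or 1.0
scale_neg = max(v for row in neg for v in row) or 1.0

# Sequence matrices whose values all lie in [0, 1].
A_plus = [[v / scale_pos for v in row] for row in pos]
A_minus = [[v / scale_neg for v in row] for row in neg]

# Recombination: A = scale_pos * A_plus - scale_neg * A_minus.
recombined = [[scale_pos * p - scale_neg * m for p, m in zip(rp, rm)]
              for rp, rm in zip(A_plus, A_minus)]
```

By linearity, intermediate measurements acquired with A_plus and A_minus recombine the same way (scale_pos * y_plus - scale_neg * y_minus) to yield the measurements that A itself would have produced.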
  • each of the determined sequence matrices A_k and Ā_k is applied to the array 220 to acquire the intermediate compressive measurements as described previously.
  • In step 312, the process includes determining the compressive measurements 125 representing the compressed image of the object 110 from the intermediate compressive measurements determined in step 310, and reconstructing the uncompressed image of the object 110 from the compressive measurements 125.
  • the M number of compressive measurements 125 are determined using the intermediate compressive measurements y_k and ȳ_k, where
  • W is a sparsifying operator
  • f is the one-dimensional (1D) vector form of the N-valued image 140
  • R is the reconstruction basis matrix determined in step 304
  • y = (y_1, y_2, y_3, …, y_M) is the column vector of the compressive measurements 125, determined from the intermediate compressive measurements acquired using the sequence matrices.
  • the sparsifying operator W may be generated, for example, by using wavelets, or by using total variations.
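For orientation only, the following shows the matrix shapes involved in reconstruction, using a crude back-projection estimate f ≈ (1/N)·Rᵀy in place of the sparsity-regularized solve (with a wavelet or total-variation operator W) that the disclosure actually contemplates; the matrix R and the 1-sparse image here are invented.

```python
N, M = 8, 4

# A hypothetical M x N reconstruction matrix with +1/-1 entries.
R = [[1 if (i * j) % 3 != 1 else -1 for j in range(N)] for i in range(M)]

# A 1-sparse "image": a single bright pixel among N.
f_true = [0.0] * N
f_true[2] = 1.0

# Compressive measurements y = R f (M values for an N-pixel image).
y = [sum(R[i][j] * f_true[j] for j in range(N)) for i in range(M)]

# Back-projection estimate of all N pixels from only M measurements.
f_est = [sum(R[i][j] * y[i] for i in range(M)) / N for j in range(N)]
```

A real reconstruction would replace the last line with an l1-type minimization over W·f subject to R·f = y, which is what makes recovery from M < N measurements work for sparse images.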
  • Steps 304 to 312 of the process described above may be repeated or performed once per image or video frame .
  • Step 302 need not be repeated unless a different kernel matrix K is desired, for example, if there is a change in the array 220 or the sensor 230.
  • The present disclosure is believed to provide a number of advantages. To begin with, it describes an improved lensless camera unit suitable for compressive sense imaging that produces better images in low light, with a higher signal-to-noise ratio, because a larger number of measurements (at least 2xM) are acquired using the array 220 to produce the M compressive measurements. In addition, the measurements are acquired in a manner that takes the particular properties of the aperture array and the sensor into account. Finally, the present disclosure is suited for imaging in all spectra of light, including the visible and the invisible spectrum.
  • FIG. 4 depicts a high-level block diagram of an example processing device or apparatus 400 suitable for implementing one or more aspects of the disclosure.
  • Apparatus 400 comprises a processor 402 that is communicatively interconnected with various input/output devices 404 and a memory 406.
  • the processor 402 may be any type of processor such as a general-purpose central processing unit ("CPU") or a dedicated microprocessor such as an embedded microcontroller or a digital signal processor ("DSP").
  • the input/output devices 404 may be any peripheral device operating under the control of the processor 402 and configured to input data into or output data from the apparatus 400 in accordance with the disclosure, such as, for example, a lens-based or lensless camera or video capture device, which may include an aperture array and a sensor.
  • the input/output devices 404 may also include conventional network adapters, data ports, and various user interface devices such as a keyboard, a keypad, a mouse, or a display.
  • Memory 406 may be any type of memory suitable for storing electronic information, including data and instructions executable by the processor 402.
  • Memory 406 may be implemented, for example, as one or more combinations of random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive memory, compact-disk memory, optical memory, etc.
  • apparatus 400 may also include an operating system, queue managers, device drivers, or one or more network protocols which may be stored, in one embodiment, in memory 406 and executed by the processor 402.
  • the memory 406 may include non-transitory memory storing executable instructions and data, which instructions, upon execution by the processor 402, may configure apparatus 400 to perform the functionality in accordance with the various aspects and steps described above.
  • the processor 402 may be configured, upon execution of the instructions, to communicate with, control, or implement all or a part of the functionality with respect to the acquisition or the reconstruction of the compressive measurements as described above.
  • the processor may be configured to determine the sequence matrices, the intermediate compressive measurements, the compressive measurements, and to generate the uncompressed images or video using a determined reconstruction matrix as described above.
  • the processor 402 may also be configured to communicate with and/or control another apparatus 400 to which it is interconnected via, for example a network. In such cases, the functionality disclosed herein may be integrated into each standalone apparatus 400 or may be distributed between one or more apparatus 400. In some embodiments, the processor 402 may also be configured as a plurality of interconnected processors that are situated in different locations and communicatively interconnected with each other (e.g., in a cloud computing environment) .
  • While a particular apparatus configuration is shown in FIG. 4, it will be appreciated that the present disclosure is not limited to any particular implementation. For example, in some embodiments, all or a part of the functionality disclosed herein may be implemented using one or more application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or the like.
  • While aspects herein have been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure. It is therefore to be understood that numerous modifications can be made to the illustrative embodiments and that other arrangements can be devised without departing from the spirit and scope of the disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Studio Devices (AREA)
  • Computing Systems (AREA)
EP15741635.5A 2014-06-30 2015-06-18 Kompressionsbildgebung Withdrawn EP3162073A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/319,142 US20150382026A1 (en) 2010-09-30 2014-06-30 Compressive Sense Imaging
PCT/US2015/036314 WO2016003655A1 (en) 2014-06-30 2015-06-18 Compressive sense imaging

Publications (1)

Publication Number Publication Date
EP3162073A1 true EP3162073A1 (de) 2017-05-03

Family

ID=55020114

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15741635.5A Withdrawn EP3162073A1 (de) 2014-06-30 2015-06-18 Kompressionsbildgebung

Country Status (2)

Country Link
EP (1) EP3162073A1 (de)
WO (1) WO2016003655A1 (de)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3249908B1 (de) 2016-05-26 2019-11-27 Nokia Technologies Oy Array-vorrichtung und entsprechende verfahren
US11631708B2 (en) 2018-09-28 2023-04-18 Semiconductor Energy Laboratory Co., Ltd. Image processing method, program, and imaging device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644376B2 (en) * 2010-09-30 2014-02-04 Alcatel Lucent Apparatus and method for generating compressive measurements of video using spatial and temporal integration
US20130201343A1 (en) * 2012-02-07 2013-08-08 Hong Jiang Lenseless compressive image acquisition
US20130201297A1 (en) * 2012-02-07 2013-08-08 Alcatel-Lucent Usa Inc. Lensless compressive image acquisition
US20130044818A1 (en) * 2011-08-19 2013-02-21 Alcatel-Lucent Usa Inc. Method And Apparatus For Video Coding Using A Special Class Of Measurement Matrices
US8842216B2 (en) * 2012-08-30 2014-09-23 Raytheon Company Movable pixelated filter array

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2016003655A1 *

Also Published As

Publication number Publication date
WO2016003655A1 (en) 2016-01-07

Similar Documents

Publication Publication Date Title
US20150382026A1 (en) Compressive Sense Imaging
AU2014233518C1 (en) Noise aware edge enhancement
KR20210059712A (ko) 이미지 향상을 위한 인공지능 기법
US9025883B2 (en) Adaptive quality image reconstruction via a compressed sensing framework
KR20210089166A (ko) 신경망을 사용한 밝은 반점 제거
KR20210079282A (ko) 신경망을 이용한 사진 노출 부족 보정
WO2017062834A1 (en) Holographic light field imaging device and method of using the same
EP3162072A1 (de) Systeme und verfahren zur komprimierenden erfassungsbildgebung
EP3162073A1 (de) Kompressionsbildgebung
US8744200B1 (en) Hybrid compressive/Nyquist sampling for enhanced sensing
JP6689379B2 (ja) マルチ解像度圧縮センシング画像処理
JP2017521942A (ja) 圧縮センシング撮像
KR101701138B1 (ko) 해상도 및 초점 향상
EP3983990A1 (de) Lichtfeldnachrichtenübertragung
RU2016115562A (ru) Устройство захвата изображений, система захвата изображений и способ захвата изображений
US10091441B1 (en) Image capture at multiple resolutions
US10404907B2 (en) Electronic device, method and computer program
JP6294751B2 (ja) 画像処理装置、画像処理方法及び画像処理プログラム
Don Designing for compressive sensing: Compressive art, camouflage, fonts, and quick response codes
Jiang et al. Noise analysis for lensless compressive imaging
JP2016025511A5 (de)
Jiang et al. Signal to noise ratio in lensless compressive imaging

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20170130

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20180205

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ALCATEL LUCENT

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20190622