US20150382026A1 - Compressive Sense Imaging - Google Patents

Compressive Sense Imaging

Info

Publication number
US20150382026A1
Authority
US
United States
Prior art keywords
compressive
matrix
measurements
compressive measurements
aperture
Prior art date
Legal status
Abandoned
Application number
US14/319,142
Inventor
Hong Jiang
Gang Huang
Paul Albin Wilford
Current Assignee
Alcatel Lucent SAS
Original Assignee
Alcatel Lucent USA Inc
Priority date
Filing date
Publication date
Priority claimed from US12/894,855 (now US 8,644,376 B2)
Priority claimed from US13/367,413 (published as US 2013/0201297 A1)
Priority claimed from US13/658,904 (now US 9,319,578 B2)
Priority claimed from US13/658,900 (published as US 2013/0201343 A1)
Priority to US14/319,142
Application filed by Alcatel-Lucent USA Inc.
Assigned to Alcatel-Lucent USA Inc.; assignors: Gang Huang, Hong Jiang, Paul Albin Wilford
Priority to EP15741635.5A (published as EP 3162073 A1)
Priority to JP2016575825A (published as JP 6475269 B2)
Priority to PCT/US2015/036314 (published as WO 2016/003655 A1)
Assigned to Alcatel Lucent; assignor: Alcatel-Lucent USA Inc.
Publication of US20150382026A1
Status: Abandoned

Classifications

    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/169: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/48: Compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/85: Pre-processing or post-processing specially adapted for video compression


Abstract

Systems and methods for compressive sense imaging are provided. In one aspect, incident light reflecting from an object is received via an aperture array and a sensor, and intermediate compressive measurements are generated using compressive sequence matrices that are determined based on the properties of the aperture array and the sensor. The intermediate compressive measurements are further processed to generate compressive measurements representing the compressed image of the object. An uncompressed image of the object is generated from the compressive measurements using a determined reconstruction matrix that is different from the sequence matrices used to acquire the intermediate compressive measurements.

Description

    CROSS-REFERENCE
  • The present application references subject matter of the following U.S. applications, each of which is incorporated by reference herein in its entirety: U.S. application Ser. No. 13/658,904 filed on Oct. 24, 2012 and entitled “Resolution and Focus Enhancement”; U.S. application Ser. No. 13/658,900 filed on Oct. 24, 2012 and entitled “Lensless Compressive Image Acquisition”; U.S. application Ser. No. 13/367,413 filed on Feb. 7, 2012 and entitled “Lensless Compressive Image Acquisition”; and, U.S. application Ser. No. 12/894,855 filed on Sep. 30, 2010 and entitled “Apparatus and Method for Generating Compressive Measurements of Video Using Spatial and Temporal Integration”, which issued as U.S. Pat. No. 8,644,376 on Feb. 4, 2014.
  • TECHNICAL FIELD
  • This disclosure is directed to systems and methods for compressive sense image processing.
  • BACKGROUND
  • This section introduces aspects that may be helpful in facilitating a better understanding of the systems and methods disclosed herein. Accordingly, the statements of this section are to be read in this light and are not to be understood or interpreted as admissions about what is or is not in the prior art.
  • Digital image/video cameras acquire and process a significant amount of raw data. In order to store or transmit image data efficiently, the raw pixel data for each of the N pixels of an N-pixel image is first captured and then typically compressed using a suitable compression algorithm for storage and/or transmission. Although compression after capturing the raw data for each of the N pixels of the image is generally useful for reducing the size of the image (or video) captured by the camera, it requires significant computational resources and time. In addition, compression of the raw pixel data does not always meaningfully reduce the size of the captured images.
  • A more recent approach, known as compressive sense imaging, acquires compressed image (or video) data using random projections without first collecting the raw data for all of the N pixels of an N-pixel image. For example, a compressive measurement basis is applied to obtain a series of compressive measurements that represent the encoded (i.e., compressed) image. Since far fewer compressive measurements are acquired than the N raw pixel values of a desired N-pixel image, this approach can significantly reduce, or even eliminate, the need to apply compression after the raw data is captured.
  • BRIEF SUMMARY
  • Systems and methods for compressive sense imaging are provided. In some embodiments, incident light reflecting from an object and passing through an aperture array is detected by a sensor. Intermediate compressive measurements are generated based on the output by the sensor using compressive sequence matrices that are determined based on the properties of the aperture array and the sensor. The intermediate compressive measurements are further processed to generate compressive measurements representing the compressed image of the object. An uncompressed image of the object is generated from the compressive measurements using a determined reconstruction matrix that is different from the sequence matrices used to acquire the intermediate compressive measurements.
  • In one aspect, a compressive sense imaging system and method includes generating a plurality of sequence matrices; determining a plurality of intermediate compressive measurements using the plurality of sequence matrices; and, generating a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.
  • In some aspects, the system and method includes generating an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
  • In some aspects, the system and method includes determining a kernel matrix based on properties of an aperture array of aperture elements and a sensor, and, generating a sensing matrix using the kernel matrix and a reconstruction basis matrix.
  • In some aspects, the system and method includes decomposing the sensing matrix to generate the plurality of sequence matrices.
  • In some aspects, the system and method includes determining a sensitivity function for the sensor; determining at least one characteristic function for at least one of the aperture elements of the aperture array; computing a kernel function by performing a convolution operation using the sensitivity function and the at least one characteristic function; and, determining the kernel matrix using the kernel function and an image.
  • In some aspects, the system and method includes applying a sparsifying operator to generate the uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
  • In some aspects, the system and method includes selectively enabling or disabling one or more aperture elements of an aperture array based on at least one basis in a sequence matrix to determine at least one of the plurality of intermediate compressive measurements during a time period, where the at least one of the plurality of intermediate compressive measurements is determined based on an aggregated sum of light detected by the sensor during the time period.
  • In some aspects, the aperture array is an array of micro-mirrors. In some aspects, the aperture array is an array of LCD elements.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example of a compressive sense imaging system in accordance with various aspects of the disclosure.
  • FIG. 2 illustrates an example of a camera unit for acquiring compressive measurements of an object using a sequence matrix in accordance with one aspect of the disclosure.
  • FIG. 3 illustrates an example process for compressive sense imaging in accordance with various aspects of the disclosure.
  • FIG. 4 illustrates an example apparatus for implementing aspects of the disclosure.
  • DETAILED DESCRIPTION
  • Various aspects of the disclosure are described below with reference to the accompanying drawings, in which like numerals refer to like elements in the description of the figures. The description and drawings merely illustrate the principles of the disclosure. Various structures, systems and devices are described and depicted in the drawings for purposes of explanation only, and so as not to obscure the disclosure with details that are well known to those skilled in the art, who will be able to devise various arrangements that, although not explicitly described or shown herein, embody those principles and are included within the spirit and scope of the disclosure.
  • As used herein, the term, “or” refers to a non-exclusive or, unless otherwise indicated (e.g., “or else” or “or in the alternative”). Furthermore, words used to describe a relationship between elements should be broadly construed to include a direct relationship or the presence of intervening elements unless otherwise indicated. For example, when an element is referred to as being “connected” or “coupled” to another element, the element may be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Similarly, words such as “between”, “adjacent”, and the like should be interpreted in a like fashion.
  • The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • FIG. 1 illustrates a schematic example of a compressive imaging acquisition and reconstruction system 100 (“system 100”). Incident light 105 reflecting from an object 110 is received by the camera unit 115, which generates a plurality of intermediate compressive measurements using a determined number of compressive sequence matrices 120. The intermediate compressive measurements are further processed to generate compressive measurements 125 representing the compressed image of the object 110. The compressive measurements 125 representing the compressed image of the object 110 may be stored (or transmitted) by a storage/transmission unit 130. The reconstruction unit 135 generates an uncompressed image 140 (e.g., for display on a display unit) of the object 110 from the compressive measurements 125 using a determined reconstruction matrix 150.
  • Although the units are shown separately in FIG. 1, this is merely to aid understanding of the disclosure. In other aspects, the functionality of any or all of the units described above may be implemented using a fewer or greater number of units. Furthermore, the functionality attributed to the various units may be implemented by a single processing device or distributed among multiple processing devices. Some examples of suitable processing devices include cameras, camera systems, mobile phones, personal computer systems, tablets, set-top boxes, smart phones, or any type of computing device configured to acquire, process, or output data.
  • In one embodiment, a single processing device may be configured to provide the functionality of each of the units of system 100. The single processing device may include, for example, a memory storing one or more instructions, and a processor for executing the one or more instructions, which, upon execution, may configure the processor to provide functionality ascribed to the units. The single processing device may include other components typically found in computing devices, such as one or more input/output components for inputting or outputting information to/from the processing device, including a camera, a display, a keyboard, a mouse, network adapter, etc.
  • In another embodiment, a local processing device may be provided at a first location that is communicatively interconnected with a remote processing device at a remote location via network. The local processing device may be configured with the functionality to generate and provide the compressive measurements 125 of the local object 110 to a remote processing device over the network. The remote processing device, in turn, may be configured to receive the compressive measurements from the local processing device, to generate the reconstructed image 140 from the compressive measurements 125 using the reconstruction basis matrix 150, and to display the reconstructed image to a remote user in accordance with the aspects described below. The local processing device and the remote processing device may be respectively implemented using an apparatus similar to the single processing device, and may include a memory storing one or more instructions, a processor for executing the one or more instructions, and various input/output components as in the case of the single processing device. The network may be an intranet, the Internet, or any type or combination of one or more wired or wireless networks.
  • FIG. 2 illustrates an example of a lensless camera unit 115 for acquiring compressive measurements 125 representing the compressed image of the object 110 using compressive sense imaging. Although a particular embodiment of the lensless camera unit 115 is described, this is not to be construed as a limitation, and the principles of the disclosure may be applied to other embodiments of compressive sense imaging systems.
  • Incident light 105 reflected off the object 110 is received at the camera unit 115 where the light 105 is selectively permitted to pass through an aperture array 220 of N individual aperture elements and strike a sensor 230. The camera unit 115 processes the output of the sensor 230 to produce intermediate compressive measurements using a plurality of sequence matrices that are determined based on one or more properties of the aperture array 220 and the sensor 230. The compressive measurements 125 collectively represent the compressed image of the object 110 and are determined using the intermediate compressive measurements.
  • To achieve compression, the number M of the compressive measurements 125 that are acquired as the compressed image of the object 110 is typically significantly less than the N raw data values that are acquired in a conventional camera system having an N-pixel sensor for generating an N-pixel image, thus reducing or eliminating the need for conventional compression of the raw data values after acquisition. In practice, the number of compressive measurements M may be pre-selected relative to the N aperture elements of the array 220 based upon a desired balance between the level of compression and the quality of the N-pixel image 140 that is reconstructed using the M compressive measurements.
  • The example array 220 illustrated in FIG. 2 is a two dimensional, 8×8 array of sixty-four (N=64) discrete aperture elements, which are arranged in two dimensional row and column format such that individual elements of the array 220 may be uniquely identified using a tabular notation form “[row, column]”. Thus, the first element in the first row of array 220 is exemplarily referenced as 220[1,1], and the last element in the last row of the array 220 is referenced as 220[8,8].
  • In practice, the size and format of the array 220 may have a significantly greater (or fewer) number of elements, depending on the desired resolution of the image 140. By way of example only, the array 220 may be a 640×480 (N=307,200) element array for a desired image resolution of 640×480 pixels for the image 140, or may be a 1920×1080 (N=2,073,600) element array for a correspondingly greater desired resolution of the image 140.
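  • For readers implementing the indexing, the following is a minimal sketch of one possible mapping from the [row, column] notation above to a flat index into the length-N vectors used later; row-major ordering is an assumption of this sketch, not something the disclosure specifies.

    # Hypothetical helper; row-major ordering is an assumption of this sketch.
    def flat_index(row, col, cols=8):
        """Map the 1-based [row, column] element notation to a 0-based flat index."""
        return (row - 1) * cols + (col - 1)

    assert flat_index(1, 1) == 0      # element 220[1,1]
    assert flat_index(8, 8) == 63     # element 220[8,8]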
  • The overall transmittance of light 105 passing through the array 220 and reaching the sensor 230 at a given time may be varied by setting the transmittance of one or more of the individual aperture elements of the array. For example, the overall transmittance of array 220 may be adjusted by selectively and individually changing the transmittance of one or more of the aperture elements 220[1,1] to 220[8,8] to increase or decrease the amount of light 105 passing through the array 220 and reaching the sensor 230 at a given time.
  • Aperture elements that are fully opened (e.g., fully enabled or activated) allow light 105 to pass through those opened elements and reach the sensor 230, whereas aperture elements that are fully closed (e.g., fully disabled or deactivated) prevent or block light 105 from passing through the closed elements of the array 220 and reaching the photon detector 230. The aperture elements may be partially opened (or partially closed) to pass only some, but not all, of the light 105 to reach the sensor 230 via the partially opened (or partially closed) elements. Thus, the collective state of the individual aperture elements (e.g., opened, closed, or partially opened or closed) determines the overall transmittance of the aperture array 220 and therefore determines the amount of light 105 reaching the sensor 230 at a given time.
  • In one embodiment, the aperture array 220 is a micro-mirror array of N individually selectable micro-mirrors. In another embodiment, the aperture array 220 may be an N-element LCD array. In other embodiments, the aperture array 220 may be any suitable array of electronic or optical components having selectively controllable transmittance.
  • The camera unit 115 is configured to generate intermediate compressive measurements by selectively adjusting the overall transmittance of the aperture array 220 in accordance with compressive bases information in a plurality of sequence matrices. Each of the intermediate compressive measurements may be understood as the determined sum (or aggregate) of the light 105 reaching the sensor 230 through the array 220 during a particular time when particular ones of the N aperture elements of the array 220 are selectively opened and closed (either fully or partially) in accordance with a pattern indicated by a particular compressive basis of a sequence matrix 120.
  • One feature of the present disclosure is that M intermediate compressive measurements are acquired using each of S sequence matrices that are determined as described further below. Since S≧2, at least 2M intermediate compressive measurements are determined, which are processed into the M compressive measurements 125 representing the compressed image of the object 110 as described further below. The M compressive measurements 125 are used in conjunction with the reconstruction matrix 150 to reconstruct or generate the uncompressed image 140 of the object 110. Another feature of the present disclosure is that the sequence matrices are determined based on a kernel function, where the kernel function is determined based on the properties of the array 220 and the sensor 230. These and other aspects of the present disclosure are described in detail further below.
  • In general, a determined sequence matrix 120 is a set of M compressive bases b1, b2, . . . bM, each of which is applied in turn to the array 220 to produce a respective one of M intermediate compressive measurements. Each measurement basis b1, b2, . . . bM in the sequence matrix 120 is itself an array of N values corresponding to the number N of aperture elements of the array 220, as indicated mathematically below:
  • $$\begin{bmatrix} b_1[1] & b_1[2] & \cdots & b_1[N] \\ b_2[1] & b_2[2] & \cdots & b_2[N] \\ b_3[1] & b_3[2] & \cdots & b_3[N] \\ \vdots & \vdots & & \vdots \\ b_M[1] & b_M[2] & \cdots & b_M[N] \end{bmatrix}$$
  • For example, in the embodiment illustrated in FIG. 2, each compressive basis b_k (k ∈ {1, . . . , M}) of a given sequence matrix 120 is a set of values b_k[1] to b_k[64], where each value is normalized to the set [0,1] as described later below. Accordingly, each value of a given compressive basis may be a "0", a "1", or a real value between "0" and "1", which respectively determines the corresponding state (e.g., fully closed, fully opened, or a state in-between) of a respective aperture element in the 8×8 aperture array 220.
  • A given compressive basis bk is applied to the array 220 to produce a corresponding intermediate compressive measurement for a time tk as follows. The respective values bk[1] to bk[64] are used to set the state (fully opened, fully closed or partially opened or closed) of the corresponding elements of array 220, and the detected sum or aggregate of light 105 reaching the sensor 230 is determined as the value of the corresponding intermediate compressive measurement. A total number of M×S intermediate compressive measurements are produced in this manner, where M is the number of compressive bases in each sequence matrix 120 and S is the number of sequence matrices (where S≧2).
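  • To make this acquisition step concrete, the following is a minimal simulation sketch, not code from the disclosure: it treats the optics as ideal (in effect, a kernel equal to the identity), so that the sensor reading is simply the transmittance-weighted sum of the scene intensity behind each aperture element, and all variable names are hypothetical.

    import numpy as np

    # Idealized simulation of applying one compressive basis b_k to the 8x8 array
    # and reading the single sensor; the aperture-to-sensor optics are ignored here.
    N = 64
    rng = np.random.default_rng(0)
    scene = rng.random(N)                   # unknown N-pixel scene (simulation only)

    def intermediate_measurement(basis, scene):
        """Aggregate light detected while element transmittances are set to basis (values in [0, 1])."""
        return float(np.dot(basis, scene))  # transmittance-weighted sum reaching the sensor

    b_k = rng.random(N)                     # one compressive basis (one row of a sequence matrix)
    y_k = intermediate_measurement(b_k, scene)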
  • An example operation of system 100 is now described in conjunction with the process 300 of FIG. 3. As an overview for aiding the reader, steps 302-308 describe the determination of the sequence matrices 120. Step 310 describes determination of the compressive measurements 125 representing the compressed image of the object 110 from the intermediate compressive measurements acquired using the sequence matrices 120. Step 312 describes generating the uncompressed image of the object 110 from the compressive measurements 125 using the reconstruction matrix 150.
  • It is to be understood that the steps described below are merely illustrative and that existing steps may be modified or omitted, additional steps may be added, and the order of certain steps may be altered.
  • Turning now to the process 300 of FIG. 3, the determination of the sequence matrices 120 begins in step 302 with the computation of an N×N kernel matrix K that is determined based on the geometry and properties of the array 220 and the sensor 230. The kernel matrix may be determined as follows.
  • In one embodiment, the kernel matrix is computed based on a sensitivity function for the sensor 230 and a characteristic function of the array 220. First, a sensitivity function F(x,y) of the sensor 230 is determined, where F(x,y) is the response of the sensor 230 when light strikes a point (x,y) on the sensor in Cartesian coordinates. Preferably, but not necessarily, the sensor 230 is selected such that it has a large sensing area and a uniform (or close to uniform) sensitivity function F(x,y), such that the sensor response (in other words, the sensor sensitivity) does not vary (or varies very little) based on where the light strikes the sensor.
  • Next, a characteristic function for each of the aperture elements of the array is defined, such that the characteristic function E(x,y) of a given aperture element E is E(x,y)=1 if a point (x,y) in Cartesian coordinates falls within the area of the aperture element and E(x,y)=0 if the point (x,y) lies outside the area of the aperture element.
  • Next, a kernel function k(x,y) is defined using the sensitivity function of the sensor 230 and the characteristic function of the aperture elements of the array 220 as k(x,y) = (E*F)(x,y), where the * operator denotes the two-dimensional (2D) convolution operation. A discrete kernel function k(row,column) is then determined as:

  • $$k(\mathrm{row},\mathrm{column}) = \iint_{E_{\mathrm{row},\mathrm{column}}} k(x,y)\,dx\,dy,$$
  • where E_row,column identifies a particular aperture element E of the array 220 using the [row, column] notation.
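  • As an illustration only, the two formulas above can be evaluated numerically by sampling F and a single element's characteristic function on a fine grid, approximating k = E*F by discrete convolution, and integrating the result over each element's area. The helper below is a sketch under assumed grid resolution and "same"-size boundary handling, not the disclosure's own procedure.

    import numpy as np
    from scipy.signal import fftconvolve

    def discrete_kernel(F, rows, cols, elem_size=1.0, samples_per_elem=8):
        """Numerically approximate k(row, column) for a rows x cols array of square
        elements of side elem_size, given a vectorized sensitivity function F(x, y)."""
        n = samples_per_elem
        h = elem_size / n                                  # fine-grid spacing
        E = np.ones((n, n))                                # characteristic function of one element
        xs = (np.arange(cols * n) + 0.5) * h
        ys = (np.arange(rows * n) + 0.5) * h
        Fgrid = F(xs[None, :], ys[:, None])                # F sampled over the array footprint
        kxy = fftconvolve(Fgrid, E, mode="same") * h * h   # k(x, y) = (E * F)(x, y)
        return kxy.reshape(rows, n, cols, n).sum(axis=(1, 3)) * h * h  # integral over each element

    # e.g. a perfectly uniform sensor response:
    k2d = discrete_kernel(lambda x, y: np.ones_like(x + y), 8, 8)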
  • It is noted here that alternatively, in another embodiment, the discrete kernel function may also be obtained by calibrating the camera unit 115 using a point lighting source (e.g., a laser source or another lighting source that is in effect a point lighting source with respect to the camera unit 115).
  • Finally, the N×N kernel matrix K is computed from the discrete kernel function as:

  • $$K \cdot I_{1D} = \left( k(\mathrm{row},\mathrm{column}) * I_{2D} \right)_{1D},$$
  • where 1D indicates the one-dimensional (1D) vector form of a 2D array, and I is any N-pixel image.
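  • Because convolution is linear, a matrix K satisfying this relation can be formed column by column by convolving the discrete kernel with each standard basis image. The sketch below illustrates this under an assumed "same"-size boundary handling; the helper name and the k2d argument are hypothetical.

    import numpy as np
    from scipy.signal import convolve2d

    def kernel_matrix(k2d, rows, cols):
        """Build K such that K @ vec(I) equals vec(k2d convolved with I) for any rows x cols image I."""
        N = rows * cols
        K = np.zeros((N, N))
        for j in range(N):
            e = np.zeros((rows, cols))
            e.flat[j] = 1.0                                    # j-th standard basis image
            K[:, j] = convolve2d(e, k2d, mode="same").ravel()  # its convolution is column j of K
        return K

    # e.g. K = kernel_matrix(k2d, 8, 8) for the 8x8 array of FIG. 2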
  • In step 304, the determination of the sequence matrices 120 continues by specifying the reconstruction matrix 150. The reconstruction matrix may be any M×N matrix that has a property suitable for use in compressive sense imaging, such as, for example, the Restricted Isometry Property. In one embodiment, accordingly, the reconstruction matrix 150 is an M×N matrix whose rows are selected from a randomly or pseudo-randomly permuted N×N Hadamard matrix, which has the known properties that its entries are either +1 or −1 and its rows are mutually orthogonal.
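  • A minimal sketch of one way to construct such a reconstruction matrix follows; it is my construction under the assumption that "permuted" means a pseudo-random column permutation (which preserves row orthogonality), and scipy.linalg.hadamard requires N to be a power of two.

    import numpy as np
    from scipy.linalg import hadamard

    def reconstruction_matrix(M, N, seed=0):
        """M rows drawn from a pseudo-randomly column-permuted N x N Hadamard matrix
        (entries +1/-1, rows mutually orthogonal)."""
        rng = np.random.default_rng(seed)
        H = hadamard(N)[:, rng.permutation(N)]        # permute columns
        rows = rng.choice(N, size=M, replace=False)   # select M of the N rows
        return H[rows, :].astype(float)

    # e.g. R = reconstruction_matrix(M=32, N=64)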
  • In step 306, the determination of the sequence matrices 120 continues by computing an M×N sensing matrix A, where the sensing matrix is computed as:

  • $$A = [a_{ij}] = R\,K^{-1},$$
  • where R is the M×N reconstruction matrix computed in step 304, K^-1 is the inverse of the N×N kernel matrix K that was determined in step 302 based on the properties of the sensor 230 and the array 220, and [a_ij] are the values of the sensing matrix A for i = 1, . . . , M and j = 1, . . . , N.
  • It is pointed out that while the sensing matrix A is an M×N matrix that is determined based on the properties of the array 220 and the sensor 230, it is not suitable for use as a sequence matrix 120 directly. This is because, as will be apparent at least from the negative values of the reconstruction matrix R, one or more of the values a_ij of the sensing matrix A do not satisfy 0 ≤ a_ij ≤ 1. In fact, the sensing matrix A may include large negative and positive values, which are impractical (or perhaps not possible) to use as a pattern for setting the condition of the aperture elements of the array 220.
  • As a result, in step 308, the sensing matrix A is further decomposed into the sequence matrices 120 that have values that are within the set [0,1] as follows. It is also noted that while the description below is provided for the sequence matrices to have values within the set [0,1], the disclosure below is applicable to decomposing the sensing matrix A to have values within other sets.
  • Given the sensing matrix A, define:
  • $$A^{+} = [a_{ij}^{+}], \;\; a_{ij}^{+} = \begin{cases} a_{ij}, & a_{ij} \ge 0 \\ 0, & a_{ij} < 0 \end{cases} \qquad \text{and} \qquad A^{-} = [a_{ij}^{-}], \;\; a_{ij}^{-} = \begin{cases} -a_{ij}, & a_{ij} < 0 \\ 0, & a_{ij} \ge 0 \end{cases}$$
  • for i = 1, . . . , M and j = 1, . . . , N.
  • Next, A^+ is decomposed into a P^+ number of M×N sequence matrices A_k^+ = [a_ij^(k)+], where i = 1, . . . , M, j = 1, . . . , N, and k = 1, . . . , P^+, using the following pseudo-code algorithm:

    for i = 1, . . . , M, j = 1, . . . , N
        let p = 0, a_ij^(0)+ = 0
        while a_ij^+ - Σ_{k=1..p} a_ij^(k)+ > 1
            a_ij^(p+1)+ = clip( a_ij^+ - Σ_{k=1..p} a_ij^(k)+ , 1 )
            p ← p + 1
        end
        a_ij^(p+1)+ = a_ij^+ - Σ_{k=1..p} a_ij^(k)+
        P^+(i, j) = p + 1
    end
    define P^+ = max over (i, j) of P^+(i, j)
    for k = 1, . . . , P^+
        A_k^+ = [a_ij^(k)+], where a_ij^(k)+ = 0 if k > P^+(i, j)

    where clip(x, µ) = x if 0 ≤ x ≤ µ, and µ otherwise.
  • Next, A^- may be similarly decomposed into a P^- number of M×N sequence matrices A_k^- = [a_ij^(k)-], where i = 1, . . . , M, j = 1, . . . , N, and k = 1, . . . , P^-, based on the algorithm above.
  • It is noted that all of the values of the resulting P^+ number of M×N sequence matrices A_k^+ = [a_ij^(k)+] satisfy 0 ≤ a_ij^(k)+ ≤ 1, and, similarly, all of the values of each of the resulting P^- number of M×N sequence matrices A_k^- = [a_ij^(k)-] also satisfy 0 ≤ a_ij^(k)- ≤ 1.
  • The decomposition of the sensing matrix into the sequence matrices described above leads to the equation:
  • $$A = \sum_{k=1}^{P^{+}} A_k^{+} \;-\; \sum_{k=1}^{P^{-}} A_k^{-}$$
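  • The clipping algorithm above admits a compact equivalent restatement: each nonnegative entry is split into unit-capped chunks that sum back to the entry. The following is a sketch of that restatement; the end-to-end usage in the trailing comments assumes the hypothetical helpers from the earlier sketches.

    import numpy as np

    def split_pos_neg(A):
        """Split the sensing matrix A into nonnegative parts so that A = A_plus - A_minus."""
        return np.where(A >= 0, A, 0.0), np.where(A < 0, -A, 0.0)

    def decompose_unit(B):
        """Decompose a nonnegative M x N matrix B into matrices with entries in [0, 1] that sum to B."""
        P = max(1, int(np.ceil(B.max())))         # plays the role of P+ (or P-) above
        parts, remainder = [], B.copy()
        for _ in range(P):
            chunk = np.clip(remainder, 0.0, 1.0)  # same effect as the clip(., 1) step
            parts.append(chunk)
            remainder = remainder - chunk
        return parts

    # Hypothetical end-to-end use with the earlier sketches:
    # A = reconstruction_matrix(M, N) @ np.linalg.inv(kernel_matrix(k2d, 8, 8))
    # A_plus, A_minus = split_pos_neg(A)
    # seq_plus, seq_minus = decompose_unit(A_plus), decompose_unit(A_minus)
    # np.allclose(A, sum(seq_plus) - sum(seq_minus))   # True up to rounding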
  • In step 310, each of the determined sequence matrices A_k^+ and A_k^- is applied to the array 220 to acquire the intermediate compressive measurements as described previously. For example, in one embodiment, each M×N sequence matrix A_k^+ (k = 1, . . . , P^+) is applied to the array 220 to generate a measurement vector y_k^+ of the corresponding set of M intermediate compressive measurements. Similarly, each M×N sequence matrix A_k^- (k = 1, . . . , P^-) is also applied to the array 220 to generate a measurement vector y_k^- of the corresponding set of M intermediate compressive measurements.
  • In step 312, the process includes determining the compressive measurements 125 representing the compressed image of the object 110 from the intermediate compressive measurements determined in step 310, and reconstructing the uncompressed image of the object 110 from the compressive measurements 125.
  • In particular, the M compressive measurements 125 are determined from the intermediate compressive measurements y_k⁺ and y_k⁻ as:
  • $y = \sum_{k=1}^{P^{+}} y_k^{+} - \sum_{k=1}^{P^{-}} y_k^{-}$
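  • An idealized end-to-end illustration of steps 310 and 312, continuing the sketches above (the scene vector x and the noiseless model y_k = A_k·x are simplifying assumptions; the physical aperture and sensor response is not modeled):

```python
# Idealized acquisition: applying a sequence matrix to the array and summing the
# light reaching the sensor is modeled as a matrix-vector product.
x = np.random.default_rng(2).random(8)                # stand-in for the N-valued scene
y_plus = np.array([Ak @ x for Ak in seq_plus])        # intermediate measurements y_k^+
y_minus = np.array([Ak @ x for Ak in seq_minus])      # intermediate measurements y_k^-

y = y_plus.sum(axis=0) - y_minus.sum(axis=0)          # the M compressive measurements 125
assert np.allclose(y, A @ x)                          # equivalent to sensing with A directly
```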
  • The uncompressed image I of the object 110 may be determined using the compressive measurements 125 and the reconstruction matrix 150 as:
  • $\min \lVert W \cdot I \rVert_1 \quad \text{subject to:} \quad R \cdot I = y = \sum_{k=1}^{P^{+}} y_k^{+} - \sum_{k=1}^{P^{-}} y_k^{-}$
  • where W is a sparsifying operator, I is the one-dimensional (N-valued) vector representation of the image 140, R is the reconstruction basis matrix determined in step 304, and y = [y_1, y_2, ..., y_M]^T is the column vector of the compressive measurements 125 obtained from the intermediate compressive measurements acquired using the sequence matrices. The sparsifying operator W may be generated, for example, using wavelets or total variation.
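  • A minimal reconstruction sketch of this minimization, assuming the convex-optimization package cvxpy is available and using a first-order finite-difference matrix as a stand-in sparsifying operator W; the problem sizes and test data below are hypothetical, and wavelets would be an equally valid choice for W.

```python
import numpy as np
import cvxpy as cp

# Hypothetical reconstruction problem: R is the M x N reconstruction basis matrix
# and y the column vector of M compressive measurements.
M, N = 32, 64
rng = np.random.default_rng(3)
R = rng.standard_normal((M, N))
I_true = np.repeat(rng.standard_normal(8), N // 8)    # piecewise-constant test image
y = R @ I_true

# Sparsifying operator W: first-order finite differences (a total-variation-like choice).
W = np.eye(N) - np.eye(N, k=1)

I_var = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm(W @ I_var, 1)), [R @ I_var == y])
problem.solve()
I_reconstructed = I_var.value                         # estimate of the N-valued image
```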
  • Steps 304 to 312 of the process described above may be repeated or performed once per image or video frame. Step 302 need not be repeated unless a different kernel matrix K is desired, for example, if there is a change in the array 220 or the sensor 230.
  • The present disclosure is believed to provide a number of advantages. To begin with, it describes an improved lensless camera unit suitable for compressive sense imaging that produces better low-light images with a higher signal-to-noise ratio, owing to the larger number of measurements (at least 2×M) acquired using the array 220 to produce the M compressive measurements. In addition, the measurements are acquired in a manner that takes the particular properties of the aperture array and the sensor into account. Furthermore, the present disclosure is suited to imaging across the light spectrum, including both the visible and the invisible spectrum. The present disclosure also provides for capturing images that are sharper (e.g., having a greater amount of detail) for a given sensor geometry and size, particularly for sensor and aperture arrays that are relatively large, which are otherwise known to produce soft (relatively blurrier) images.
  • It will be appreciated that one or more aspects of the disclosure may be implemented using hardware, software, or a combination thereof. FIG. 4 depicts a high-level block diagram of an example processing device or apparatus 400 suitable for implementing one or more aspects of the disclosure. Apparatus 400 comprises a processor 402 that is communicatively interconnected with various input/output devices 404 and a memory 406.
  • The processor 402 may be any type of processor, such as a general purpose central processing unit (“CPU”) or a dedicated microprocessor such as an embedded microcontroller or a digital signal processor (“DSP”). The input/output devices 404 may be any peripheral device operating under the control of the processor 402 and configured to input data into or output data from the apparatus 400 in accordance with the disclosure, such as, for example, a lens-based or lensless camera or video capture device, which may include an aperture array and a sensor. The input/output devices 404 may also include conventional network adapters, data ports, and various user interface devices such as a keyboard, a keypad, a mouse, or a display.
  • Memory 406 may be any type of memory suitable for storing electronic information, including data and instructions executable by the processor 402. Memory 406 may be implemented, for example, as one or more combinations of random access memory (RAM), read only memory (ROM), flash memory, hard disk drive memory, compact-disk memory, optical memory, etc. In addition, apparatus 400 may also include an operating system, queue managers, device drivers, or one or more network protocols, which may be stored, in one embodiment, in memory 406 and executed by the processor 402.
  • The memory 406 may include non-transitory memory storing executable instructions and data, which instructions, upon execution by the processor 402, may configure apparatus 400 to perform the functionality in accordance with the various aspects and steps described above. In some embodiments, the processor 402 may be configured, upon execution of the instructions, to communicate with, control, or implement all or a part of the functionality with respect to the acquisition or the reconstruction of the compressive measurements as described above. The processor may be configured to determine the sequence matrices, the intermediate compressive measurements, the compressive measurements, and to generate the uncompressed images or video using a determined reconstruction matrix as described above.
  • In some embodiments, the processor 402 may also be configured to communicate with and/or control another apparatus 400 to which it is interconnected via, for example, a network. In such cases, the functionality disclosed herein may be integrated into each standalone apparatus 400 or may be distributed among one or more apparatus 400. In some embodiments, the processor 402 may also be configured as a plurality of interconnected processors that are situated in different locations and communicatively interconnected with each other (e.g., in a cloud computing environment).
  • While a particular apparatus configuration is shown in FIG. 4, it will be appreciated that the present disclosure is not limited to any particular implementation. For example, in some embodiments, all or a part of the functionality disclosed herein may be implemented using one or more application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or the like.
  • Although aspects herein have been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present disclosure. It is therefore to be understood that numerous modifications can be made to the illustrative embodiments and that other arrangements can be devised without departing from the spirit and scope of the disclosure.

Claims (20)

1. A compressive sense imaging system, the system comprising:
a processing device configured to:
generate a plurality of sequence matrices;
determine a plurality of intermediate compressive measurements using the plurality of sequence matrices; and,
generate a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.
2. The compressive sense imaging system of claim 1, wherein the processing device is further configured to:
generate an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
3. The compressive sense imaging system of claim 1, wherein the processing device is further configured to:
determine a kernel matrix based on properties of an aperture array of aperture elements and a sensor, and,
generate a sensing matrix using the kernel matrix and a reconstruction basis matrix.
4. The compressive sense imaging system of claim 3, wherein the processing device is configured to:
decompose the sensing matrix to generate the plurality of sequence matrices.
5. The compressive sense imaging system of claim 3, wherein the processing device is configured to:
determine a sensitivity function for the sensor;
determine at least one characteristic function for at least one of the aperture elements of the aperture array;
compute a kernel function by performing a convolution operation using the sensitivity function and the at least one characteristic function; and,
determine the kernel matrix using the kernel function and an image.
6. The compressive sense imaging system of claim 2, wherein the processing device is further configured to:
apply a sparsifying operator to generate the uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
7. The compressive sense imaging system of claim 1, further comprising:
a lensless camera unit including an aperture array of aperture elements and a sensor for detecting light passing through the aperture elements of the aperture array.
8. The compressive sense imaging system of claim 7, wherein the processing device is further configured to:
selectively enable or disable one or more of the aperture elements of the aperture array based on at least one basis in a sequence matrix to acquire at least one of the plurality of intermediate compressive measurements during a time period, the at least one of the plurality of intermediate compressive measurements being determined based on an aggregated sum of light detected by the sensor during the time period.
9. The compressive sense imaging system of claim 7, wherein the aperture array is a micro-mirror array.
10. The compressive sense imaging system of claim 7, wherein the aperture array is an LCD array.
11. A method for compressive sense imaging, the method comprising:
generating, using a processor, a plurality of sequence matrices;
determining a plurality of intermediate compressive measurements using the plurality of sequence matrices; and,
generating a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.
12. The method of claim 11, further comprising:
generating an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
13. The method of claim 11, further comprising:
determining a kernel matrix based on properties of an aperture array of aperture elements and a sensor, and,
generating a sensing matrix using the kernel matrix and a reconstruction basis matrix.
14. The method of claim 13, further comprising:
decomposing the sensing matrix to generate the plurality of sequence matrices.
15. The method of claim 13, further comprising:
determining a sensitivity function for the sensor;
determining at least one characteristic function for at least one of the aperture elements of the aperture array;
computing a kernel function by performing a convolution operation using the sensitivity function and the at least one characteristic function; and,
determining the kernel matrix using the kernel function and an image.
16. The method of claim 12, further comprising:
applying a sparsifying operator to generate the uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
17. The method of claim 11, further comprising:
selectively enabling or disabling one or more aperture elements of an aperture array based on at least one basis in a sequence matrix to determine at least one of the plurality of intermediate compressive measurements during a time period, the at least one of the plurality of intermediate compressive measurements being determined based on an aggregated sum of light detected by a sensor during the time period.
18. A non-transitory computer-readable medium including one or more instructions for configuring a processor for:
generating a plurality of sequence matrices;
determining a plurality of intermediate compressive measurements using the plurality of sequence matrices; and,
generating a plurality of compressive measurements representing a compressed image of an object using the plurality of intermediate compressive measurements.
19. The non-transitory computer-readable medium of claim 18, including one or more instructions for further configuring the processor for:
generating an uncompressed image of the object from the plurality of compressive measurements using a reconstruction basis matrix.
20. The non-transitory computer-readable medium of claim 18, including one or more instructions for further configuring the processor for:
determining a kernel matrix based on properties of an aperture array of aperture elements and a sensor;
generating a sensing matrix using the kernel matrix and a reconstruction basis matrix; and,
decomposing the sensing matrix to generate the plurality of sequence matrices.
US14/319,142 2010-09-30 2014-06-30 Compressive Sense Imaging Abandoned US20150382026A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/319,142 US20150382026A1 (en) 2010-09-30 2014-06-30 Compressive Sense Imaging
PCT/US2015/036314 WO2016003655A1 (en) 2014-06-30 2015-06-18 Compressive sense imaging
EP15741635.5A EP3162073A1 (en) 2014-06-30 2015-06-18 Compressive sense imaging
JP2016575825A JP6475269B2 (en) 2014-06-30 2015-06-18 Compressed sensing imaging

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US12/894,855 US8644376B2 (en) 2010-09-30 2010-09-30 Apparatus and method for generating compressive measurements of video using spatial and temporal integration
US13/367,413 US20130201297A1 (en) 2012-02-07 2012-02-07 Lensless compressive image acquisition
US13/658,900 US20130201343A1 (en) 2012-02-07 2012-10-24 Lenseless compressive image acquisition
US13/658,904 US9319578B2 (en) 2012-10-24 2012-10-24 Resolution and focus enhancement
US14/319,142 US20150382026A1 (en) 2010-09-30 2014-06-30 Compressive Sense Imaging

Publications (1)

Publication Number Publication Date
US20150382026A1 true US20150382026A1 (en) 2015-12-31

Family

ID=54932000

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/315,909 Active 2034-11-12 US9344736B2 (en) 2010-09-30 2014-06-26 Systems and methods for compressive sense imaging
US14/319,142 Abandoned US20150382026A1 (en) 2010-09-30 2014-06-30 Compressive Sense Imaging

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/315,909 Active 2034-11-12 US9344736B2 (en) 2010-09-30 2014-06-26 Systems and methods for compressive sense imaging

Country Status (2)

Country Link
US (2) US9344736B2 (en)
JP (1) JP6652510B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9743024B2 (en) * 2015-07-01 2017-08-22 Massachusetts Institute Of Technology Method and apparatus for on-chip per-pixel pseudo-random time coded exposure
US10798364B2 (en) * 2016-10-20 2020-10-06 Nokia Of America Corporation 3D image reconstruction based on lensless compressive image acquisition
CN111201776B (en) * 2017-10-19 2022-06-28 索尼公司 Imaging apparatus and method, and image processing apparatus and method

Family Cites Families (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3775602A (en) 1972-06-29 1973-11-27 Us Air Force Real time walsh-hadamard transformation of two-dimensional discrete pictures
US5070403A (en) 1989-04-21 1991-12-03 Sony Corporation Video signal interpolation
US5166788A (en) 1990-06-29 1992-11-24 Samsung Electronics Co., Ltd. Motion signal processor
US5262854A (en) 1992-02-21 1993-11-16 Rca Thomson Licensing Corporation Lower resolution HDTV receivers
DE4337047B4 (en) 1993-10-29 2004-11-25 BODENSEEWERK GERÄTETECHNIK GMBH Passive image resolution detector arrangement
US5572552A (en) 1994-01-27 1996-11-05 Ericsson Ge Mobile Communications Inc. Method and system for demodulation of downlink CDMA signals
JP2816095B2 (en) 1994-04-26 1998-10-27 三洋電機株式会社 Video camera signal processing circuit
CN1253636A (en) 1995-06-22 2000-05-17 3Dv系统有限公司 Telecentric stop 3-D camera and its method
JPH0954212A (en) 1995-08-11 1997-02-25 Sharp Corp Phase difference film, its production, and liquid crystal display element
FR2753330B1 (en) 1996-09-06 1998-11-27 Thomson Multimedia Sa QUANTIFICATION METHOD FOR VIDEO CODING
US5870144A (en) 1997-03-28 1999-02-09 Adaptec, Inc. Reduced-quality resolution digital video encoder/decoder
US6271876B1 (en) 1997-05-06 2001-08-07 Eastman Kodak Company Using two different capture media to make stereo images of a scene
US20030043918A1 (en) 1999-12-20 2003-03-06 Jiang Hong H. Method and apparatus for performing video image decoding
JP2001304816A (en) 2000-04-26 2001-10-31 Kenichiro Kobayashi Travel measuring system and apparatus using granular dot pattern by laser reflected light
WO2001091461A2 (en) 2000-05-23 2001-11-29 Koninklijke Philips Electronics N.V. Watermark detection
JP4389371B2 (en) 2000-09-28 2009-12-24 株式会社ニコン Image restoration apparatus and image restoration method
US6737652B2 (en) 2000-09-29 2004-05-18 Massachusetts Institute Of Technology Coded aperture imaging
DE60114651T2 (en) 2001-12-14 2006-06-01 Stmicroelectronics S.R.L., Agrate Brianza Method of compressing digital images recorded in color filter array format (CFA)
US20040174434A1 (en) 2002-12-18 2004-09-09 Walker Jay S. Systems and methods for suggesting meta-information to a camera user
SG140441A1 (en) 2003-03-17 2008-03-28 St Microelectronics Asia Decoder and method of decoding using pseudo two pass decoding and one pass encoding
US7339170B2 (en) 2003-07-16 2008-03-04 Shrenik Deliwala Optical encoding and reconstruction
US7680356B2 (en) 2003-10-14 2010-03-16 Thomson Licensing Technique for bit-accurate comfort noise addition
EP1578134A1 (en) 2004-03-18 2005-09-21 STMicroelectronics S.r.l. Methods and systems for encoding/decoding signals, and computer program product therefor
US7532772B2 (en) 2004-07-20 2009-05-12 Duke University Coding for compressive imaging
KR20060019363A (en) 2004-08-27 2006-03-03 삼성테크윈 주식회사 Method for controlling digital photographing apparatus, and digital photographing apparatus adopting the method
WO2006041219A2 (en) 2004-10-15 2006-04-20 Matsushita Electric Industrial Co., Ltd. Enhancement of an image acquired with a multifocal lens
TWM268604U (en) 2004-12-10 2005-06-21 Innolux Display Corp Liquid crystal display device
US7767949B2 (en) 2005-01-18 2010-08-03 Rearden, Llc Apparatus and method for capturing still images and video using coded aperture techniques
TWI301953B (en) 2005-03-14 2008-10-11 Qisda Corp Methods and apparatuses for video encoding
US7830561B2 (en) 2005-03-16 2010-11-09 The Trustees Of Columbia University In The City Of New York Lensless imaging with controllable apertures
WO2006116134A2 (en) 2005-04-21 2006-11-02 William Marsh Rice University Method and apparatus for compressive imaging device
GB0510470D0 (en) 2005-05-23 2005-06-29 Qinetiq Ltd Coded aperture imaging system
US20070009169A1 (en) 2005-07-08 2007-01-11 Bhattacharjya Anoop K Constrained image deblurring for imaging devices with motion sensing
US20070285554A1 (en) 2005-10-31 2007-12-13 Dor Givon Apparatus method and system for imaging
EP2002583A4 (en) 2006-03-17 2013-03-20 Jocelyn Aulin Ofdm in fast fading channel
US8619854B2 (en) 2006-03-27 2013-12-31 Electronics And Telecommunications Research Institute Scalable video encoding and decoding method using switching pictures and apparatus thereof
US7928893B2 (en) 2006-04-12 2011-04-19 William Marsh Rice University Apparatus and method for compressive sensing radar imaging
US7639289B2 (en) 2006-05-08 2009-12-29 Mitsubishi Electric Research Laboratories, Inc. Increasing object resolutions from a motion-blurred image
JP4695557B2 (en) 2006-07-19 2011-06-08 日本放送協会 Element image group correction apparatus, element image group acquisition system, element image group correction method, and element image group correction program
JP4964541B2 (en) * 2006-09-11 2012-07-04 オリンパス株式会社 Imaging apparatus, image processing apparatus, imaging system, and image processing program
US7345603B1 (en) 2006-11-07 2008-03-18 L3 Communications Integrated Systems, L.P. Method and apparatus for compressed sensing using analog projection
US8213500B2 (en) 2006-12-21 2012-07-03 Sharp Laboratories Of America, Inc. Methods and systems for processing film grain noise
US7602183B2 (en) 2007-02-13 2009-10-13 The Board Of Trustees Of The Leland Stanford Junior University K-T sparse: high frame-rate dynamic magnetic resonance imaging exploiting spatio-temporal sparsity
JP5188205B2 (en) * 2007-06-12 2013-04-24 ミツビシ・エレクトリック・リサーチ・ラボラトリーズ・インコーポレイテッド Method for increasing the resolution of moving objects in an image acquired from a scene by a camera
FR2917872A1 (en) 2007-06-25 2008-12-26 France Telecom METHODS AND DEVICES FOR ENCODING AND DECODING AN IMAGE SEQUENCE REPRESENTED USING MOTION TUBES, COMPUTER PROGRAM PRODUCTS AND CORRESPONDING SIGNAL.
KR101399012B1 (en) 2007-09-12 2014-05-26 삼성전기주식회사 apparatus and method for restoring image
KR101412752B1 (en) 2007-11-26 2014-07-01 삼성전기주식회사 Apparatus and method for digital auto-focus
US8204126B2 (en) 2008-01-10 2012-06-19 Panasonic Corporation Video codec apparatus and method thereof
JP5419403B2 (en) 2008-09-04 2014-02-19 キヤノン株式会社 Image processing device
KR101432775B1 (en) 2008-09-08 2014-08-22 에스케이텔레콤 주식회사 Video Encoding/Decoding Method and Apparatus Using Arbitrary Pixel in Subblock
US8300113B2 (en) 2008-10-10 2012-10-30 Los Alamos National Security, Llc Hadamard multimode optical imaging transceiver
JP5185805B2 (en) 2008-12-26 2013-04-17 オリンパス株式会社 Imaging device
KR20100090961A (en) 2009-02-09 2010-08-18 삼성전자주식회사 Imaging method with variable coded aperture device and apparatus using the method
ES2623375T3 (en) 2009-10-20 2017-07-11 The Regents Of The University Of California Holography and incoherent cell microscopy without a lens on a chip
CN102388402B (en) 2010-02-10 2015-08-19 杜比国际公司 Image processing apparatus and image processing method
WO2011103601A2 (en) 2010-02-22 2011-08-25 William Marsh Rice University Improved number of pixels in detector arrays using compressive sensing
US20120044320A1 (en) 2010-03-11 2012-02-23 Trex Enterprises Corp. High resolution 3-D holographic camera
WO2012001463A1 (en) 2010-07-01 2012-01-05 Nokia Corporation A compressed sampling audio apparatus
US20120069209A1 (en) 2010-09-22 2012-03-22 Qualcomm Mems Technologies, Inc. Lensless camera controlled via mems array
US8582820B2 (en) 2010-09-24 2013-11-12 Apple Inc. Coded aperture camera with adaptive image processing
US8644376B2 (en) 2010-09-30 2014-02-04 Alcatel Lucent Apparatus and method for generating compressive measurements of video using spatial and temporal integration
US20130201343A1 (en) 2012-02-07 2013-08-08 Hong Jiang Lenseless compressive image acquisition
US20130201297A1 (en) 2012-02-07 2013-08-08 Alcatel-Lucent Usa Inc. Lensless compressive image acquisition
US9634690B2 (en) 2010-09-30 2017-04-25 Alcatel Lucent Method and apparatus for arbitrary resolution video coding using compressive sampling measurements
JP5764740B2 (en) * 2010-10-13 2015-08-19 パナソニックIpマネジメント株式会社 Imaging device
EP2633267A4 (en) 2010-10-26 2014-07-23 California Inst Of Techn Scanning projective lensless microscope system
US9020029B2 (en) 2011-01-20 2015-04-28 Alcatel Lucent Arbitrary precision multiple description coding
WO2013003485A1 (en) 2011-06-28 2013-01-03 Inview Technology Corporation Image sequence reconstruction based on overlapping measurement subsets
WO2013007272A1 (en) 2011-07-13 2013-01-17 Rayvis Gmbh Transmission image reconstruction and imaging using poissonian detector data
US20130044818A1 (en) 2011-08-19 2013-02-21 Alcatel-Lucent Usa Inc. Method And Apparatus For Video Coding Using A Special Class Of Measurement Matrices
WO2014025425A2 (en) * 2012-05-09 2014-02-13 Duke University Metamaterial devices and methods of using the same
US8842216B2 (en) 2012-08-30 2014-09-23 Raytheon Company Movable pixelated filter array
US9277139B2 (en) * 2013-01-16 2016-03-01 Inview Technology Corporation Generating modulation patterns for the acquisition of multiscale information in received signals
US9230302B1 (en) * 2013-03-13 2016-01-05 Hrl Laboratories, Llc Foveated compressive sensing system
US9681051B2 (en) 2013-08-19 2017-06-13 Massachusetts Institute Of Technology Method and apparatus for motion coded imaging

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100241378A1 (en) * 2009-03-19 2010-09-23 Baraniuk Richard G Method and Apparatus for Compressive Parameter Estimation and Tracking
US20120063641A1 (en) * 2009-04-01 2012-03-15 Curtin University Of Technology Systems and methods for detecting anomalies from data
US20140204385A1 (en) * 2010-04-19 2014-07-24 Florida Atlantic University Mems microdisplay optical imaging and sensor systems for underwater and other scattering environments
US20120203810A1 (en) * 2011-02-04 2012-08-09 Alexei Ashikhmin Method And Apparatus For Compressive Sensing With Reduced Compression Complexity
US20130011051A1 (en) * 2011-07-07 2013-01-10 Lockheed Martin Corporation Coded aperture imaging
US20130070831A1 (en) * 2011-09-16 2013-03-21 Alcatel-Lucent Usa Inc. Method And Apparatus For Low Complexity Robust Reconstruction Of Noisy Signals
US20140211039A1 (en) * 2013-01-31 2014-07-31 Inview Technology Corporation Efficient Transforms and Efficient Row Generation for Kronecker Products of Hadamard Matrices
US20140313288A1 (en) * 2013-04-18 2014-10-23 Tsinghua University Method and apparatus for coded focal stack photographing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Marcia et al. ("Compressive Coded Aperture Imaging", Volume 7246, SPIE 2009) *
Takhar et al. ("A New Compressive Imaging Camera Architecture using Optical-Domain Compression", SPIE 2006) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108886588A (en) * 2016-01-15 2018-11-23 康耐视股份有限公司 Machine vision system for forming a one-dimensional digital representation of a low information content scene
US20180035046A1 (en) * 2016-07-29 2018-02-01 Xin Yuan Block-based lensless compressive image acquisition
CN109644232A (en) * 2016-07-29 2019-04-16 诺基亚美国公司 Block-based lensless compressive image acquisition
US10462377B2 (en) 2016-07-29 2019-10-29 Nokia Of America Corporation Single-aperture multi-sensor lensless compressive image acquisition
US10397515B2 (en) * 2017-01-05 2019-08-27 Nokia Of America Corporation Protecting personal privacy in a video monitoring system
US11631708B2 (en) 2018-09-28 2023-04-18 Semiconductor Energy Laboratory Co., Ltd. Image processing method, program, and imaging device
US11089313B2 (en) * 2019-06-04 2021-08-10 Nokia Of America Corporation Protecting personal privacy in a video monitoring system

Also Published As

Publication number Publication date
JP2017527157A (en) 2017-09-14
US20150382000A1 (en) 2015-12-31
US9344736B2 (en) 2016-05-17
JP6652510B2 (en) 2020-02-26

Similar Documents

Publication Publication Date Title
US20150382026A1 (en) Compressive Sense Imaging
US11805333B2 (en) Noise aware edge enhancement
KR20210059712A (en) Artificial Intelligence Techniques for Image Enhancement
US9081731B2 (en) Efficient transforms and efficient row generation for Kronecker products of Hadamard matrices
US9380221B2 (en) Methods and apparatus for light field photography
US20150003738A1 (en) Adaptive quality image reconstruction via a compressed sensing framework
US20150317806A1 (en) Reconstructing an image of a scene captured using a compressed sensing device
Dave et al. Solving inverse computational imaging problems using deep pixel-level prior
US10089719B2 (en) Signal observation device and signal observation method
US10423829B2 (en) Signal observation device and signal observation method
JP6478579B2 (en) Imaging unit, imaging device, and image processing system
US8089534B2 (en) Multi illuminant shading correction using singular value decomposition
EP3162072A1 (en) Systems and methods for compressive sense imaging
JP2012003455A (en) Image processing apparatus, imaging device and image processing program
US8744200B1 (en) Hybrid compressive/Nyquist sampling for enhanced sensing
JP6475269B2 (en) Compressed sensing imaging
EP3162073A1 (en) Compressive sense imaging
CN108475420B (en) Multi-resolution compressed sensing image processing
JP2017521942A5 (en)
Yanagi et al. Optimal transparent wavelength and arrangement for multispectral filter array
Diaz et al. High-dynamic range compressive spectral imaging by adaptive filtering
WO2023042435A1 (en) Image processing device and method
US11928799B2 (en) Electronic device and controlling method of electronic device
Jiang et al. Noise analysis for lensless compressive imaging
JP6294751B2 (en) Image processing apparatus, image processing method, and image processing program

Legal Events

Date Code Title Description
AS Assignment

Owner name: ALCATEL-LUCENT USA INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JIANG, HONG;HUANG, GANG;WILFORD, PAUL ALBIN;REEL/FRAME:033236/0954

Effective date: 20140701

AS Assignment

Owner name: ALCATEL LUCENT, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALCATEL-LUCENT USA INC.;REEL/FRAME:036494/0594

Effective date: 20150828

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION