EP3162072A1 - Systèmes et procédés permettant une imagerie compressive - Google Patents
Systèmes et procédés permettant une imagerie compressiveInfo
- Publication number
- EP3162072A1 EP3162072A1 EP15741629.8A EP15741629A EP3162072A1 EP 3162072 A1 EP3162072 A1 EP 3162072A1 EP 15741629 A EP15741629 A EP 15741629A EP 3162072 A1 EP3162072 A1 EP 3162072A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- compressive
- matrix
- measurements
- processing device
- imaging system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
- 238000003384 imaging method Methods 0.000 title claims abstract description 21
- 238000000034 method Methods 0.000 title abstract description 21
- 239000011159 matrix material Substances 0.000 claims abstract description 84
- 238000005259 measurement Methods 0.000 claims abstract description 83
- 230000033001 locomotion Effects 0.000 claims abstract description 48
- 238000012545 processing Methods 0.000 claims description 30
- 230000006835 compression Effects 0.000 description 9
- 238000007906 compression Methods 0.000 description 9
- 230000005540 biological transmission Effects 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- PEDCQBHIVMGVHV-UHFFFAOYSA-N Glycerine Chemical compound OCC(O)CO PEDCQBHIVMGVHV-UHFFFAOYSA-N 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 229940050561 matrix product Drugs 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000004091 panning Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3059—Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
- H03M7/3062—Compressive sampling or sensing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
Definitions
- the present disclosure is directed to systems and methods for compressive sense image processing.
- Digital image/video cameras acquire and process a significant amount of raw data.
- the raw pixel data for each of the N pixels of an N-pixel image is first captured and then typically compressed using a suitable compression algorithm for storage and/or transmission.
- While compression after capturing the raw data for each of the N pixels of the image is generally useful for reducing the size of the image (or video) captured by the camera, it requires significant computational resources and time.
- compression of the raw pixel data does not always meaningfully reduce the size of the captured images.
- a more recent approach, known as compressive sense imaging, acquires compressed image (or video) data using random projections without first collecting the raw data for all of the N pixels of an N-pixel image. For example, a compressive measurement basis is applied to obtain a series of compressive measurements which represent the encoded (i.e., compressed) image. Since a reduced number of compressive measurements are acquired in comparison to the raw data for each of the N pixel values of a desired N-pixel image, this approach can significantly reduce or even eliminate the need for applying compression after the raw data is captured.
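- For illustration only (not part of the patent text), the acquisition step can be modeled as a matrix product y = B·x, where B is an M x N measurement matrix with M much smaller than N; all names and sizes in the following Python sketch are hypothetical.

```python
import numpy as np

# Minimal sketch of compressive acquisition: instead of capturing all N raw
# pixel values, only M << N random projections of the scene are measured.
rng = np.random.default_rng(0)

N = 64 * 64          # pixels in the desired N-pixel image
M = 512              # number of compressive measurements (M << N)

x = rng.random(N)                      # hypothetical flattened N-pixel scene
B = rng.integers(0, 2, size=(M, N))    # random binary measurement matrix

y = B @ x            # M compressive measurements: the "compressed" image
print(y.shape)       # (512,) values stored/transmitted instead of 4096
```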
- a system and method includes determining at least one kernel matrix based on relative motion of an object during a time period corresponding to acquisition of at least one of a plurality of compressive measurements representing a compressed image of the object; determining a reconstruction basis matrix to compensate for the relative motion of the object during the time period based on the at least one kernel matrix; and generating an uncompressed image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
- the reconstruction basis matrix is determined by applying the at least one kernel matrix to at least one basis of a compressive basis matrix used to acquire the compressive measurements.
- the at least one kernel matrix is determined such that a matrix operation between the at least one kernel matrix and a one-dimensional representation of the uncompressed image represents shifting the position of the object in the one-dimensional representation of the uncompressed image to a previous position of the object based on the relative motion of the object.
- motion data from one or more sensors is used to determine the relative motion of the object during the time period.
- a degree of the relative motion of the object is determined for the time period using the motion data.
- the determined degree of relative motion of the object is used to determine at least one kernel matrix.
- a sparsifying operator is used for generating the image of the object from the plurality of compressive measurements using the reconstruction basis matrix.
- compressive measurements are acquired using a compressive basis matrix during the time period.
- an aggregated sum of light output by a detector is determined to acquire at least one of the compressive measurements.
- the aggregated sum of light is determined by selectively enabling or disabling one or more aperture elements of an aperture array based on at least one basis in the compressive basis matrix during the time period.
- FIG. 1 illustrates an example of a compressive sense imaging system in accordance with various aspects of the disclosure.
- FIG. 2 illustrates an example of a camera unit for acquiring compressive measurements of an object using a compressive basis matrix in accordance with one aspect of the disclosure.
- FIG. 3 illustrates an example process for reconstructing an image of the object from the compressive measurements using a reconstruction basis matrix in accordance with various aspects of the disclosure.
- FIG. 4 illustrates an example apparatus for implementing various aspects of the disclosure.
- words used to describe a relationship between elements should be broadly construed to include a direct relationship or the presence of intervening elements unless otherwise indicated. For example, when an element is referred to as being “connected” or “coupled” to another element, the element may be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Similarly, words such as “between”, “adjacent”, and the like should be interpreted in a like fashion.
- FIG. 1 illustrates a schematic example of a compressive imaging acquisition and reconstruction system 100 ("system 100") in accordance with an aspect of the present disclosure.
- Incident light 105 reflecting from an object 110 is received by the camera unit 115, which uses a predetermined compressive measurement basis matrix (referenced hereinafter as “compressive basis matrix”) 120, also sometimes referenced as a sensing or measurement matrix, to generate compressive measurements 125 representing the compressed image of the object 110.
- the compressive measurements 125 representing the compressed image of the object 110 may be stored (or transmitted) by a storage/transmission unit 130.
- the reconstruction unit 135 generates an uncompressed image 140 (e.g., for display on a display unit) of the object 110 from the compressive measurements 125.
- the reconstructed or uncompressed image 140 is generated from the compressive measurements 125 while taking into account motion data that represents the motion of the object 110 relative to the camera unit 115.
- the reconstructed image 140 is generated using a reconstruction basis matrix 150 that is different from the compressive basis matrix 120 that is used to generate the compressive measurements 125.
- the reconstruction basis matrix 150 is determined (e.g., generated or updated) based on motion data 155 that represents motion of the object 110 relative to the camera unit 115.
- although the units are shown separately in FIG. 1, this is merely to aid understanding of the disclosure. In other aspects the functionality of any or all of the units described above may be implemented using a fewer or greater number of units. Furthermore, the functionality attributed to the various units may be implemented in a single processing device or distributed amongst multiple processing devices. Some examples of suitable processing devices include cameras, camera systems, mobile phones, personal computer systems, tablets, set-top boxes, smart phones, or any type of computing device configured to acquire, process, or display image data.
- a single processing device may be configured to provide the functionality of each of the units of system 100.
- the single processing device may include, for example, a memory storing one or more instructions, and a processor for executing the one or more instructions, which, upon execution, may configure the processor to provide functionality ascribed to the units.
- the single processing device may include other components typically found in computing devices, such as one or more input/output components for inputting or outputting information to/from the processing device, including a camera, a display, a keyboard, a mouse, network adapter, etc.
- a local processing device may be provided at a first location that is communicatively interconnected with a remote processing device at a remote location via a network.
- the local processing device may be configured with the functionality to generate and provide the compressive measurements 125 of the local object 110 to a remote processing device over the network.
- the remote processing device may be configured to receive the compressive measurements from the local processing device, to generate the reconstructed image 140 from the compressive measurements 125 using the reconstruction basis matrix 150, and to display the reconstructed image to a remote user in accordance with the aspects described below.
- the local processing device and the remote processing device may be respectively implemented using an apparatus similar to the single processing device, and may include a memory storing one or more instructions, a processor for executing the one or more instructions, and various input/output components as in the case of the single processing device.
- the network may be an intranet, the Internet, or any type or combination of one or more wired or wireless networks.
- FIG. 2 illustrates an example of a camera unit 115 for acquiring compressive measurements 125 representing the compressed image of the object 110 using compressive sense imaging.
- Incident light 105 reflected off the object 110 is received (e.g., with or without an optical lens) at the camera unit 115, where the light 105 is selectively permitted to pass through an aperture array 220 of N individual aperture elements and strike a photon detector 230.
- the camera unit 115 processes the output of the photon detector 230 in conjunction with the predetermined compressive basis matrix 120 to produce M compressive measurements 125 using compressive sense imaging.
- the M compressive measurements 125 collectively represent the compressed image of the object 110. More particularly, in compressive sense imaging, the number M of the compressive measurements that are acquired is typically significantly less than the N raw data values that are acquired in a conventional camera system having an N-pixel sensor for generating an N-pixel image, thus reducing or eliminating the need for further compression of the raw data values after acquisition.
- the number of compressive measurements M may be pre-selected relative to the N aperture elements of the array 220 based upon the predetermined (e.g., desired) balance between the level of compression and the quality of the N-pixel image 140 that is reconstructed using the M compressive measurements.
- the first element in the first row of the array 220 is exemplarily referenced as 220[1,1]
- the last element in the last row of the array 220 is referenced as 220[8,8].
- the size and format of the array 220 are exemplary; the array may have a significantly greater (or fewer) number of elements, depending on the desired resolution of the image 140.
- Each of the sixty-four aperture elements 220[1,1] to 220[8,8] of the array 220 illustrated in FIG. 2 may be selectively and individually opened or closed (or partially opened or partially closed) to respectively allow or block portions of the light 105 from passing through those elements and reaching the photon detector 230.
- Aperture elements that are fully or partially opened may be referred to as enabled or activated, while aperture elements that are fully or partially closed may be referred to as disabled or deactivated.
- the aperture array 220 may be implemented as a micro-mirror array of N individually selectable micro-mirrors.
- the aperture array 220 may be implemented as an LCD array.
- the camera unit 115 is configured to selectively enable or disable (e.g., partially or fully) one or more of the N aperture elements of the array 220 in accordance with compressive basis information of the compressive basis matrix 120 and to determine the number M of compressive measurements Y_1, Y_2, ..., Y_M.
- the number M of the compressive measurements Y_1, Y_2, ..., Y_M is fewer than the number N of aperture elements of the array 220.
- Each of the compressive measurements Y_k (k ∈ [1...M]) may be understood as the detected sum (or aggregate) of the light 105 reaching the detector 230 through the array 220 during a respective measurement time t_k when particular ones of the N aperture elements of the array 220 are selectively opened (or enabled) and closed (or disabled) in accordance with the corresponding basis b_k (k ∈ [1...M]) in the compressive basis matrix 120.
- the compressive measurements Y_1, Y_2, ..., Y_M may thus be generated during respective times t_1, t_2, ..., t_M using respective ones of the compressive bases b_1, b_2, ..., b_M of the compressive basis matrix 120.
- the compressive basis matrix 120 is the set of M compressive bases b_1, b_2, ..., b_M, each of which is respectively applied to the array 220 to produce the corresponding ones of the compressive measurements Y_1, Y_2, ..., Y_M. Furthermore, each measurement basis b_1, b_2, ..., b_M in the compressive basis matrix 120 is itself an array of N values corresponding to the number N of aperture elements of the array 220.
- each compressive basis b_k (k ∈ [1...M]) of the compressive basis matrix 120 is a set of values b_k[1] to b_k[64], where each value may be "0" or "1", or a real value between "0" and "1", which corresponds to and determines the state (e.g., fully closed, fully opened, or a state in-between) of a respective aperture element in the 8x8 aperture array 220.
- b_k[1] may positionally correspond to and determine the state (e.g., opened or closed) of the first element 220[1,1] of the array 220, while b_k[64] may positionally correspond to and determine the state of the last element 220[8,8] of the array 220.
- a given compressive basis b_k is used to produce a corresponding compressive measurement Y_k for a time t_k as follows.
- the respective values b_k[1] to b_k[64] are used to set the state (fully opened, fully closed, or partially opened or closed) of the corresponding elements of the array 220 in FIG. 2 to acquire the compressive measurement Y_k corresponding to time t_k.
- a binary value "1" in the basis b_k may indicate fully opening (or enabling) the corresponding element in the array 220, whereas the value "0" in the basis b_k may indicate fully closing (or disabling) the corresponding element in the array 220 (or vice versa).
- a real value between "0" and "1" in the basis b_k may also indicate partially opening or closing the corresponding element, where only a portion of the light is allowed to pass through that corresponding element while another portion of the light is prevented from passing through the corresponding element.
- the sum or aggregate of the light reaching the detector 230 via the array 220 may be detected as the determined value of the compressive measurement Y_k corresponding to the time t_k.
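- As a hedged illustration (hypothetical names, not the patent's implementation), one such measurement can be simulated by masking the scene with the basis b_k and summing the light that reaches the detector:

```python
import numpy as np

def acquire_measurement(scene, basis_row):
    """Simulate one compressive measurement Y_k (illustrative sketch only).

    scene     -- flattened light intensities arriving at the N aperture elements
    basis_row -- b_k: N values in [0, 1]; 1 = fully open, 0 = fully closed,
                 values in between = partially open aperture elements
    The detector only records the aggregate (sum) of the transmitted light.
    """
    return float(np.sum(basis_row * scene))

rng = np.random.default_rng(1)
scene = rng.random(8 * 8)             # hypothetical scene behind an 8x8 array (N = 64)
b_k = rng.integers(0, 2, size=64)     # one binary compressive basis
Y_k = acquire_measurement(scene, b_k)
print(Y_k)
```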
- the compressive basis matrix 120 may be determined using any desired compressive scheme. For example, in one aspect, at least one, or each, measurement basis b_k (k ∈ [1...M]) may be a row of the compressive basis matrix 120 constructed as a randomly (or pseudo-randomly) permutated Walsh-Hadamard matrix.
- other types of compression schemes/matrices may be used as will be understood by those of ordinary skill in the art.
- at least one, or each, measurement basis b_k (k ∈ [1...M]) may be constructed from randomly or pseudo-randomly generated real numbers between 0 and 1.
- the values of the measurement basis may be binary values, numeric values, text values, or other types of values, which are used for partially or fully enabling or disabling corresponding elements of the array 220.
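- As a hedged sketch of one such construction (hypothetical sizes and names; the patent does not prescribe this particular recipe), a 0/1-valued compressive basis matrix can be derived from a randomly permuted Walsh-Hadamard matrix:

```python
import numpy as np
from scipy.linalg import hadamard

# Illustrative sketch: build an M x N compressive basis matrix from a randomly
# permuted Walsh-Hadamard matrix, rescaled so every entry is 0 or 1 and can
# directly set an aperture element to fully closed / fully open.
rng = np.random.default_rng(2)

N = 64                                           # aperture elements (8x8 array)
M = 16                                           # number of compressive measurements

H = hadamard(N)                                  # Walsh-Hadamard matrix, entries +1 / -1
H01 = (H + 1) // 2                               # rescale entries to 0 / 1

col_perm = rng.permutation(N)                    # randomly permute the columns
row_sel = rng.choice(N, size=M, replace=False)   # keep M of the N rows as bases

B = H01[np.ix_(row_sel, col_perm)]               # M x N compressive basis matrix
print(B.shape)                                   # (16, 64)
```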
- the compressive measurements 125 include the number M of compressive measurements Y_1, Y_2, ..., Y_M that are generated, for example, for respective times t_1, t_2, ..., t_M using respective ones of the compressive bases b_1, b_2, ..., b_M of a predetermined compressive basis matrix 120 as described above.
- the reconstruction unit 135 generates (or reconstructs) an N-pixel image 140 (e.g., for display on a display unit) of the object 110 from the M compressive measurements Y_1, Y_2, Y_3, ..., Y_M.
- each compressive measurement Y_k is acquired for a corresponding time t_k
- a set of compressive measurements may take a certain duration of time to be captured (e.g., from time t_1 to time t_M for compressive measurements Y_1 to Y_M).
- although all compressive measurements may represent the same scene, if the object moves during the acquisition of the measurements, the image or video reconstructed from these measurements may not accurately capture the object, and the resulting image or video may have undesirable artifacts such as blurring.
- the reconstruction unit 135 determines the N-pixel image 140 using a reconstruction basis matrix 150 that is generated and/or updated based on motion data representing the motion of the object 110 relative to the camera unit 115 during the time period from t_1 to t_M in which the compressive measurements 125 were acquired.
- aspects of the disclosure advantageously may provide a better reconstructed image, especially when the object moves relative to the camera unit during the time the set of compressive measurements are acquired.
- the reconstruction basis matrix 150 is generated by updating or modifying one or more of the compressive bases b_1, b_2, ..., b_M of the compressive basis matrix 120 based on the relative motion of the object 110 to the camera unit 115.
- the reconstruction basis matrix 150 that is generated or updated compensates for the relative motion of the object 110 to the camera unit 115 during one or more of the time periods from time t_1 to t_M when the compressive measurements Y_1, Y_2, Y_3, ..., Y_M are acquired.
- the reconstruction basis matrix 150, which takes the object motion into account, is used by the reconstruction unit to uncompress the M compressive measurements into the N pixel values I_1, I_2, I_3, ..., I_N of the reconstructed image 140, which may then be converted, for example, into a two-dimensional format to realize a two-dimensional image of the object 110 that is suitable for viewing on a display, printing, or further processing.
- at step 302, the process includes determining motion during the one or more time periods for the acquisition of the compressive measurements.
- the motion may be determined based on a change in position of the object 110 relative to the camera unit 115 from an initial position(s) to a new position(s) during any one or more time instances t_1 to t_M in which the compressive measurements Y_1, Y_2, Y_3, ..., Y_M were generated.
- the determination that the position of the object 110 changed relative to the camera unit 115 in one or more time periods when the compressive measurements 125 were acquired using the compressive basis matrix 120 may be made in several ways.
- the determination of the motion may be based on panning/tilting motion data received from the camera unit 115, where the camera unit pans (and/or tilts) during the acquisition of the compressive measurements.
- the determination of the motion may be made based on motion data indicating the rotation of the earth (e.g., where the object 110 is an astronomical object such as the moon, or a satellite with a known trajectory with respect to the earth or another body) .
- the determination of the motion may be made based on data provided by one or more sensors (e.g., gyroscopes, accelerometers , or other passive or active sensors) that are located on or in proximity to the object 110 or the camera unit 115, or based on other predetermined or manually provided information.
- the motion data may be stored (or transmitted) by a storage/transmission unit 130 for further or later processing.
- at step 304, the process includes determining the degree of the motion determined in step 302 during the one or more of the time periods for the acquisition of the compressive measurements.
- the degree of motion may be determined from the motion data as the magnitude of the change in the position of the object 110 in one or more directions relative to the camera unit 115 during a given period of time (e.g., between times t_(k-1) and t_k, where k ∈ [1...M]).
- the motion data may indicate, for example, that the position of the object 110 changed relative to the camera unit 115 from an initial position i(k-1), j(k-1) at time t_(k-1) to a new position i(k), j(k) at time t_k, where k ∈ [1...M].
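- As a small hedged illustration (the data layout below is hypothetical, not the patent's sensor interface), the degree of motion can be computed from per-measurement position estimates as the displacement between consecutive measurement times:

```python
import numpy as np

# Illustrative sketch: object position (i(k), j(k)) in aperture-array
# coordinates at each measurement time t_1 ... t_M, e.g. derived from
# pan/tilt data, a known trajectory, or gyroscope/accelerometer readings.
positions = np.array([(0, 0), (0, 0), (1, 0), (1, 1), (2, 1)])

# Degree of motion for each interval: displacement d_i(k), d_j(k)
# between t_(k-1) and t_k.
displacements = np.diff(positions, axis=0)
for k, (di, dj) in enumerate(displacements, start=2):
    print(f"between t_{k-1} and t_{k}: d_i = {di}, d_j = {dj}")
```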
- the process includes determining a kernel matrix based on the degree of motion determined in step 304.
- a NxN kernel matrix K(k) may be defined such that, if the one-dimensional representation of the image 140 is denoted as an array I having N values, then K(k)·I represents shifting the position of the object 110 by d_i(k), d_j(k) to compensate for the motion of the object 110 between the given times t_(k-1) and t_k.
- the kernel matrix K(k) is determined such that the position of the object 110 is shifted to the initial (or previous) position i(k-1), j(k-1) in K(k)·I.
- Step 306 may be reiterated to determine a series of kernel matrices K(k) for different times t_(k-1) and t_k during which the object 110 moves (or continues to move) relative to the camera unit 115 when the respective compressive measurements Y_(k-1) and Y_k are acquired.
- a first kernel matrix K(1) may be determined based on the determined change of the position of the object 110 between time t_1 when Y_1 was acquired and time t_2 when Y_2 was acquired.
- a second kernel matrix K(2) may be determined based on the determined change of the position of the object 110 between time t_2 when Y_2 was acquired and time t_3 when Y_3 was acquired, and so on.
- a series of kernel matrices K(k) may be determined in step 306 for k = 2, 3, 4, ..., M.
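- The sketch below (illustrative only; the wrap-around boundary handling and row-by-row flattening are assumptions, not taken from the patent) constructs such a NxN shift kernel for a flattened image:

```python
import numpy as np

def shift_kernel(height, width, di, dj):
    """Illustrative NxN shift kernel (N = height * width).

    For a row-by-row flattened image I, K @ I moves the content at pixel
    (r, c) to pixel (r - di, c - dj), i.e. it shifts the object back by the
    measured displacement (di, dj) so that it appears at its previous
    position.  Wrap-around boundaries are assumed purely for simplicity;
    the patent does not specify boundary handling.
    """
    n = height * width
    K = np.zeros((n, n))
    for r in range(height):
        for c in range(width):
            src = r * width + c                                     # pixel after the motion
            dst = ((r - di) % height) * width + ((c - dj) % width)  # previous position
            K[dst, src] = 1.0
    return K

# Example: 8x8 image, object moved down one row and right one column
# between t_(k-1) and t_k, so K(k) shifts it back up and to the left.
K = shift_kernel(8, 8, di=1, dj=1)
I = np.arange(64, dtype=float)        # stand-in for the flattened image 140
I_compensated = K @ I
```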
- the process includes generating the reconstruction basis matrix 150 using the series of kernel matrices that are determined in step 306 based on the determined motion of the object 110 relative to the camera unit 115.
- the reconstruction basis matrix 150 is generated based on the compressive basis matrix 120 that was used to generate the compressive measurements 125 as follows.
- the reconstruction basis matrix R may be generated (or updated) based on the compressive basis matrix B.
- for a measurement acquired while there was no relative motion, r_k may be the same as b_k.
- the reconstruction basis matrix R differs from the compressive basis matrix B with respect to those compressive bases that were applied during the duration of the object's movement relative to the camera unit 115, while remaining the same with respect to the other compressive bases.
- the kernel matrix that is constructed for a corresponding time duration for which it is determined that there was no relative motion of the object may be an NxN identity matrix.
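- The exact formula is not reproduced in this text; the following hedged sketch shows one construction consistent with the description above (each row r_k is the basis b_k with the kernel matrices accumulated up to time t_k applied to it, and r_k = b_k when every kernel so far is an identity). The composition order and the kernel indexing K(2)...K(M) are assumptions, and all names are hypothetical.

```python
import numpy as np

def build_reconstruction_basis(B, kernels):
    """Hedged sketch of generating R from B and the kernel matrices.

    B       -- M x N compressive basis matrix with rows b_1 ... b_M
    kernels -- [K(2), ..., K(M)], each N x N; an identity matrix is expected
               for any interval with no relative motion of the object
    Returns an M x N reconstruction basis matrix R whose row r_k is b_k with
    the accumulated kernels applied; r_1 = b_1, and r_k = b_k whenever all
    kernels up to t_k are identities.  The composition order used here is an
    assumption rather than a quote of the patent.
    """
    M, N = B.shape
    R = np.empty((M, N), dtype=float)
    R[0] = B[0]
    S = np.eye(N)                     # accumulated kernel matrices so far
    for k in range(1, M):
        S = S @ kernels[k - 1]        # fold in the kernel for this interval
        R[k] = S @ B[k]               # apply the accumulated kernels to this basis row
    return R
```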
- the reconstruction basis matrix 150 that is generated in step 308 is used to generate an image 140 of the object 110 from the compressive measurements 125.
- the image 140 may be determined in matrix form from the following quantities:
- W is a sparsifying operator
- I is the one-dimensional matrix representation of the N-valued image 140
- R is the reconstruction basis matrix generated in step 308
- Y_1, Y_2, Y_3, ..., Y_M are the compressive measurements 125 acquired using the compressive basis matrix 120.
- the sparsifying operator may be generated, for example, by using wavelets, or by using total variations.
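- The matrix-form expression itself is not reproduced here; a standard compressive-sensing formulation that is consistent with these definitions (a hedged sketch, not a quote of the patent) is:

```latex
% Hedged sketch of a typical sparse reconstruction using the quantities
% defined above; the patent's exact matrix-form expression is not shown here.
\hat{I} \;=\; \arg\min_{I} \,\lVert W I \rVert_{1}
\quad \text{subject to} \quad
R\,I \;=\; \begin{bmatrix} Y_{1} & Y_{2} & Y_{3} & \cdots & Y_{M} \end{bmatrix}^{T}
```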
- the process described above may be repeated to generate or update the reconstruction basis matrix to compensate for relative motion of the object to the camera for one or more of a series of images (or frames of a video) that are reconstructed from different sets of compressive measurements over a period of time. It is to be understood that the steps described above are merely illustrative and that existing steps may be modified or omitted, additional steps may be added (e.g., the step of determining a compressive basis matrix and the step of determining compressive measurements using the compressive basis matrix), and the order of certain steps may be altered.
- FIG. 4 depicts a high-level block diagram of an example processing device or apparatus 400 suitable for implementing one or more aspects of the disclosure.
- Apparatus 400 comprises a processor 402 that is communicatively interconnected with various input/output devices 404 and a memory 406.
- the processor 402 may be any type of processor such as a general purpose central processing unit ("CPU") or a dedicated microprocessor such as an embedded microcontroller or a digital signal processor ("DSP").
- the input/output devices 404 may be any peripheral device operating under the control of the processor 402 and configured to input data into or output data from the apparatus 400 in accordance with the disclosure, such as, for example, a lens or lensless camera or video capture device.
- the input/output devices 404 may also include conventional network adapters, data ports, and various user interface devices such as a keyboard, a keypad, a mouse, or a display.
- Memory 406 may be any type of memory suitable for storing electronic information, including data and instructions executable by the processor 402.
- Memory 406 may be implemented, for example, as one or more combinations of a random access memory (RAM), read only memory (ROM), flash memory, hard disk drive memory, compact-disk memory, optical memory, etc.
- apparatus 400 may also include an operating system, queue managers, device drivers, or one or more network protocols which may be stored, in one embodiment, in memory 406 and executed by the processor 402.
- the memory 406 may include non-transitory memory storing executable instructions and data, which instructions, upon execution by the processor 402, may configure apparatus 400 to perform the functionality in accordance with the various aspects and steps described above.
- the processor 402 may be configured, upon execution of the instructions, to communicate with, control, or implement all or a part of the functionality with respect to the acquisition or the reconstruction of the compressive measurements as described above.
- the processor may be configured to determine or receive motion data, to process the motion data to generate one or more kernel matrices, and to generate the reconstruction basis matrix based on the kernel matrices as described above.
- the processor 402 may also be configured to communicate with and/or control another apparatus 400 to which it is interconnected via, for example, a network. In such cases, the functionality disclosed herein may be integrated into each standalone apparatus 400 or may be distributed between one or more apparatus 400. In some embodiments, the processor 402 may also be configured as a plurality of interconnected processors that are situated in different locations and communicatively interconnected with each other.
- While a particular apparatus configuration is shown in FIG. 4, it will be appreciated that the present disclosure is not limited to any particular implementation. For example, in some embodiments, all or a part of the functionality disclosed herein may be implemented using one or more application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/315,909 US9344736B2 (en) | 2010-09-30 | 2014-06-26 | Systems and methods for compressive sense imaging |
PCT/US2015/035979 WO2015200038A1 (fr) | 2014-06-26 | 2015-06-16 | Systèmes et procédés permettant une imagerie compressive |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3162072A1 true EP3162072A1 (fr) | 2017-05-03 |
Family
ID=53719907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP15741629.8A Ceased EP3162072A1 (fr) | 2014-06-26 | 2015-06-16 | Systèmes et procédés permettant une imagerie compressive |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3162072A1 (fr) |
WO (1) | WO2015200038A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020065442A1 (fr) | 2018-09-28 | 2020-04-02 | 株式会社半導体エネルギー研究所 | Procédé de traitement d'image, programme et dispositif d'image |
CN109297925B (zh) * | 2018-10-09 | 2024-07-19 | 天津大学 | 一种基于分块压缩感知的太赫兹高分辨率快速成像装置 |
CN112616050B (zh) * | 2021-01-05 | 2022-09-27 | 清华大学深圳国际研究生院 | 一种压缩成像分类方法及系统 |
WO2023100660A1 (fr) * | 2021-12-03 | 2023-06-08 | パナソニックIpマネジメント株式会社 | Système d'imagerie et procédé d'imagerie |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9681051B2 (en) * | 2013-08-19 | 2017-06-13 | Massachusetts Institute Of Technology | Method and apparatus for motion coded imaging |
-
2015
- 2015-06-16 EP EP15741629.8A patent/EP3162072A1/fr not_active Ceased
- 2015-06-16 WO PCT/US2015/035979 patent/WO2015200038A1/fr active Application Filing
Non-Patent Citations (2)
Title |
---|
None * |
See also references of WO2015200038A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO2015200038A1 (fr) | 2015-12-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9344736B2 (en) | Systems and methods for compressive sense imaging | |
US20220222776A1 (en) | Multi-Stage Multi-Reference Bootstrapping for Video Super-Resolution | |
JP5909540B2 (ja) | 画像処理表示装置 | |
AU2014233518B2 (en) | Noise aware edge enhancement | |
US9025883B2 (en) | Adaptive quality image reconstruction via a compressed sensing framework | |
EP3162072A1 (fr) | Systèmes et procédés permettant une imagerie compressive | |
US20240007608A1 (en) | Multi-Processor Support for Array Imagers | |
WO2017112086A1 (fr) | Super-résolution d'image à multiples étapes avec fusion de référence en utilisant des dictionnaires personnalisés | |
JP6689379B2 (ja) | マルチ解像度圧縮センシング画像処理 | |
JP2004274724A (ja) | 高解像度画像を再構成する方法および装置 | |
US8744200B1 (en) | Hybrid compressive/Nyquist sampling for enhanced sensing | |
EP3162073A1 (fr) | Imagerie de détection de compression | |
JP6475269B2 (ja) | 圧縮センシング撮像 | |
JP2017521942A5 (fr) | ||
US20200184615A1 (en) | Image processing device, image processing method, and image processing program | |
US10326950B1 (en) | Image capture at multiple resolutions | |
JP6310417B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP6114228B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
Li et al. | Coded-exposure camera and its circuits design | |
WO2024137154A1 (fr) | Capture d'images pour un lieu sphérique | |
JP2016100700A (ja) | 撮像装置、撮像方法、撮像プログラム、画像処理装置、画像処理方法および画像処理プログラム | |
WO2024137155A1 (fr) | Stockage redondant de données d'image dans un système d'enregistrement d'image | |
WO2024137156A1 (fr) | Projection d'images sur un lieu sphérique | |
JP2015210596A (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
KR20160061869A (ko) | 가우시안 필터링 장치 및 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20170126 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ALCATEL LUCENT |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20191108 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20230915 |