WO2014122506A1 - Image processing of sub-images of a plenoptic image - Google Patents

Image processing of sub-images of a plenoptic image

Info

Publication number
WO2014122506A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
image
data
plenoptic
particular point
Prior art date
Application number
PCT/IB2013/052353
Other languages
English (en)
Inventor
Mithun Uliyar
Gururaj PUTRAYA
Basavaraja SHANTHAPPA VANDROTTI
Krishna Govindarao
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Publication of WO2014122506A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/957Light-field or plenoptic cameras or camera modules

Definitions

  • Embodiments of the present invention relate to image processing. In particular, they relate to processing plenoptic images.
  • a plenoptic camera is a camera that captures four-dimensional light field information/radiance of a scene.
  • a method comprising: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
  • an apparatus comprising: processing circuitry; and at least one memory storing computer program code configured, working with the processing circuitry, to cause at least the following to be performed: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
  • an apparatus comprising means for identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and means for combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
  • computer program code that, when performed by at least one processor, causes at least the following to be performed: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
  • Fig. 1 illustrates a first schematic of a focused plenoptic camera
  • Fig. 2 illustrates an apparatus
  • Fig. 3 illustrates a further apparatus
  • Fig. 4 illustrates an array of micro-lenses
  • Fig. 5 illustrates a plenoptic image
  • Fig. 6 illustrates a second schematic of a focused plenoptic camera
  • Fig. 7 illustrates a flow chart of a first method
  • Fig. 8 illustrates a flow chart of a second method
  • Fig. 9 illustrates a flow chart of a third method.
  • Embodiments of the invention relate to processing a plenoptic image to obtain an output image with a high signal to noise ratio (SNR).
  • a higher signal to noise ratio may be obtained by binning pixels in the plenoptic image that correspond with a particular point in a scene.
  • the figures illustrate an apparatus 10 comprising: processing circuitry 12; and at least one memory 14 storing computer program code 18 configured, working with the processing circuitry 12, to cause at least the following to be performed: identifying, in each of a plurality of sub-images 32a-32d of a plenoptic image 31, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
  • Fig. 1 illustrates a schematic of a focused plenoptic camera (otherwise known as Plenoptic Camera 2.0).
  • the focused plenoptic camera comprises a main lens 22 and an array 24 of micro-lenses.
  • the main lens 22 images a real-life scene.
  • the array 24 of micro-lenses is focused on the image plane 23 of the main lens 22.
  • Each micro-lens conveys a portion of the image produced by the main lens 22 onto an image sensor 26, effectively acting as a relaying system.
  • Each micro-lens satisfies the lens equation 1/v + 1/b = 1/f, where:
  • v is the distance from the micro-lens to the main lens image plane 23
  • b is the distance from the micro-lens to the image sensor 26
  • f is the focal length of the micro-lens.
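The relationship above is the standard thin-lens form, reconstructed from the variable definitions given. As a hedged numerical illustration (the values below are hypothetical examples, not taken from the patent):

```python
# Minimal sketch of the micro-lens thin-lens relationship 1/v + 1/b = 1/f.
# All numeric values are hypothetical examples.

def sensor_distance(v_mm: float, f_mm: float) -> float:
    """Solve 1/v + 1/b = 1/f for b, the micro-lens-to-sensor distance."""
    return 1.0 / (1.0 / f_mm - 1.0 / v_mm)

# A micro-lens with a 0.5 mm focal length, focused on a main-lens image
# plane 2.0 mm away, must sit about 0.667 mm from the image sensor.
print(sensor_distance(v_mm=2.0, f_mm=0.5))  # ~0.6667
```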
  • the focused plenoptic camera illustrated in Fig. 1 is a Keplerian focused plenoptic camera in which the image plane 23 of the main lens 22 is positioned between the main lens 22 and the micro-lens array 24.
  • in a Galilean focused plenoptic camera, by contrast, the micro-lens array 24 is placed between the main lens 22 and the image plane 23 of the main lens 22.
  • Each micro-lens forms a sub-image on the image sensor 26.
  • the sub-images collectively form a plenoptic image (otherwise known as a "light-field image"). The number of times a particular point in a scene appears in the plenoptic image will depend upon its proximity to the plenoptic camera.
  • depending upon the camera arrangement, a scene point that is closer to the plenoptic camera may be imaged by fewer of the micro-lenses in the array 24 than a scene point that is further away, and will thus appear less frequently in the plenoptic image.
  • in other arrangements, a scene point that is closer to the plenoptic camera will be imaged by more of the micro-lenses in the array 24 than a scene point that is further away, and will thus appear more frequently in the plenoptic image.
  • since each micro-lens occupies a different position from the others, a disparity exists when comparing the location of a particular scene point in a sub-image formed by one micro-lens with the location of the same scene point in another sub-image formed by another micro-lens. That is, there will be an offset in the location of a particular scene point in one sub-image relative to the location of the same scene point in another sub-image. Furthermore, since each micro-lens conveys only part of the image formed by the main lens 22 onto the image sensor, individual points in the scene will be imaged by some micro-lenses and not others. This means that each point in the scene will be present in only a subset of the sub-images.
  • Fig. 2 illustrates a first apparatus 10 comprising processing circuitry 12 and a memory 14.
  • the apparatus 10 may, for example, be a chip or a chipset.
  • the processing circuitry 12 is configured to read from and write to the memory 14.
  • the processing circuitry 12 may comprise an output interface via which data and/or commands are output by the processing circuitry 12 and an input interface via which data and/or commands are input to the processing circuitry 12.
  • the processing circuitry 12 may be or comprise one or more processors.
  • the processing circuitry 12 may include an analog-to-digital converter.
  • the memory 14 stores a computer program 17 comprising computer program instructions/code 18 that control the operation of the apparatus 10 when loaded into the processing circuitry 12.
  • the computer program code 18 provides the logic and routines that enable the apparatus 10 to perform the methods illustrated in Figs 7, 8 and 9.
  • the processing circuitry 12, by reading the memory 14, is able to load and execute the computer program 17.
  • although the memory 14 is illustrated as a single component, it may be implemented as one or more separate components, some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.
  • the computer program 17 may arrive at the apparatus 10 via any suitable delivery mechanism 30.
  • the delivery mechanism 30 may be, for example, a non-transitory computer-readable storage medium such as a compact disc read-only memory (CD-ROM) or digital versatile disc (DVD).
  • the delivery mechanism 30 may be a signal configured to reliably transfer the computer program 17.
  • the apparatus 10 may cause the propagation or transmission of the computer program 17 as a computer data signal.
  • Fig. 3 illustrates a second apparatus 20.
  • the second apparatus 20 is a plenoptic camera.
  • the second apparatus 20 includes a housing 21, the first apparatus 10 illustrated in Fig. 2, and the main lens 22, micro-lens array 24 and image sensor 26 of the plenoptic camera illustrated in Fig. 1.
  • the housing 21 houses the processing circuitry 12, the memory 14, the main lens 22, the micro-lens array 24 and the image sensor 26.
  • the apparatus 20 may also comprise a display.
  • the memory 14 is illustrated in Fig. 3 as storing a plenoptic image 31, a further plenoptic image 33 and a depth map 35. These items will be described in further detail later.
  • the image sensor 26 may be any type of image sensor, such as a charge-coupled device (CCD) sensor or a complementary metal-oxide-semiconductor (CMOS) sensor.
  • the array 24 of micro-lenses may include any number of micro-lenses.
  • the elements 12, 14, 22, 24, 26 are operationally coupled and any number or combination of intervening elements can exist between them (including no intervening elements).
  • An aperture 27 in the housing 21 enables light to enter the housing 21.
  • the arrow labeled with the reference numeral 40 in Fig. 3 illustrates light entering the housing 21 .
  • the arrow labeled with the reference numeral 41 illustrates light being conveyed from the main lens 22 to the micro-lens array 24.
  • the arrow labeled with the reference numeral 42 illustrates light being conveyed from the micro-lens array 24 to the image sensor 26, which obtains and stores electronic image data.
  • the processing circuitry 12 is configured to read image data from the image sensor.
  • An analog-to-digital converter of the processing circuitry 12 may convert analog voltage/charge data stored by the image sensor 26 (and forming a plenoptic image) into digital data.
  • the processing circuitry 12 may store one or more digital plenoptic images 31, 33 in the memory 14.
  • Fig. 4 illustrates an example of an array 24 of micro-lenses.
  • the micro-lenses may have a different shape from that illustrated in Fig. 4.
  • each micro-lens may be rectangular or hexagonal in shape.
  • the array 24 shown in Fig. 4 has one hundred micro-lenses for illustrative purposes. In practice, the array 24 may have many more micro-lenses, such as hundreds or thousands of micro-lenses.
  • the micro-lens labeled with the reference numeral 25a in Fig. 4 is considered to have four neighboring micro-lenses 25b-25e: two vertically neighboring micro-lenses 25b, 25d and two horizontally neighboring micro-lenses 25c, 25e.
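This neighborhood relation is straightforward to express in code. The sketch below is illustrative only; the rectangular grid indexing scheme is an assumption, not part of the patent:

```python
# Hypothetical helper: the vertically and horizontally neighboring
# micro-lenses of the lens at grid position (row, col) in a rows x cols
# rectangular array, mirroring the neighborhood described for micro-lens 25a.

def neighbours(row: int, col: int, rows: int, cols: int) -> list[tuple[int, int]]:
    candidates = [(row - 1, col), (row + 1, col),   # vertical neighbours
                  (row, col - 1), (row, col + 1)]   # horizontal neighbours
    return [(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols]

print(neighbours(0, 0, 10, 10))  # a corner lens has only two neighbours
print(neighbours(5, 5, 10, 10))  # an interior lens has four neighbours
```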
  • Fig. 5 illustrates an example of a plenoptic image 31.
  • the plenoptic image 31 comprises a plurality of sub-images. A sub-image is generated by each micro-lens in the array.
  • the sub-image labeled with the reference numeral 32a in Fig. 5 is formed by the corresponding micro-lens labeled with the reference numeral 25a.
  • the sub-images labeled with the reference numerals 32b, 32c, 32d and 32e are formed by the micro-lenses labeled with the reference numerals 25b, 25c, 25d and 25e respectively.
  • Fig. 6 is a schematic illustrating the main lens 22, the micro-lens array 24 and the image sensor 26 in a Galilean arrangement.
  • the main lens 22, the micro-lens array 24 and the image sensor 26 might instead be in a Keplerian arrangement.
  • Point C is a point on a (virtual) image plane 23 at which a real-life scene point is imaged by the main lens 22.
  • the point labeled P is a point on the image sensor 26 at which the same point in the real-life scene is imaged by the micro-lens 25a in a sub-image 32a.
  • the point P' is a point at which the same point in a real-life scene is imaged by a neighboring micro-lens 25c in a different sub-image 32c.
  • since the micro-lenses are equivalent to pin-hole cameras, and the triangle formed by points A, B and C in Fig. 6 is similar to the triangle formed by points P, P' and C in Fig. 6, we can show that: d = D(v - B)/v ... (3)
  • where d is the disparity between the pixel locations P and P'; v is the distance from the micro-lens array 24 to a virtual image corresponding with the real-life scene point imaged at points P and P' and formed by the main lens 22; B is the distance between the micro-lens array 24 and the image sensor 26; and D is the distance between the micro-lenses 25a, 25c.
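A hedged numerical sketch of equation (3) follows. The equation itself is reconstructed here from the similar-triangles argument above, and all numeric values are hypothetical:

```python
# Sketch of equation (3): the disparity d between the pixel locations P and
# P' of the same scene point in two neighbouring sub-images, given the
# virtual depth v, the array-to-sensor distance B and the micro-lens pitch D.
# Equation reconstructed from the similar-triangles argument; all numeric
# values are hypothetical.

def disparity(v_mm: float, B_mm: float, D_mm: float) -> float:
    return D_mm * (v_mm - B_mm) / v_mm

# Sanity checks: a point whose virtual image lies exactly on the sensor
# (v == B) has zero disparity, while a very distant virtual image (v >> B)
# approaches the full micro-lens pitch D.
print(disparity(v_mm=2.0, B_mm=0.5, D_mm=0.1))  # 0.075 mm
```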
  • the processing circuitry 12 identifies, in each of a plurality of sub-images of a plenoptic image 31, a pixel location corresponding with a particular real-life point in a scene. Since the real-life scene point may only have been imaged in a subset of the sub-images, the pixel locations may only be identified in a subset of the sub-images.
  • a depth may be determined for the particular point in the scene and used to identify pixel locations, in multiple sub-images, which correspond with the particular point in the scene.
  • equation (3) may be used to determine the corresponding pixel locations that are present in a subset of the sub-images in the plenoptic image 31.
  • the processing circuitry 12 combines the data from the identified pixel locations to form a single pixel for the particular point in the scene (for example, by binning the data from the identified pixel locations).
  • the data that is combined may be analog or digital data.
  • the processing circuitry 12 may repeat the process in blocks 701 and 702 for all of the points in the scene that are imaged by the main lens 22 and relayed to the image sensor 26 by the micro-lens array.
  • this enables an output image to be produced that has a high signal to noise ratio, resulting in improved camera performance in low light.
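A minimal sketch of this identify-and-bin step is given below. It assumes the pixel locations for a scene point have already been identified (for example via equation (3)); the array layout and function names are illustrative assumptions, not from the patent:

```python
import numpy as np

# Hedged sketch of blocks 701-702 of Fig. 7: for one scene point, gather the
# pixel at the identified location in each sub-image where the point is
# visible, then bin the values into a single output pixel.

def bin_scene_point(plenoptic: np.ndarray,
                    locations: list[tuple[int, int]]) -> float:
    """locations: (row, col) sensor coordinates identified for one scene
    point; returns the single combined (binned) pixel value."""
    samples = [float(plenoptic[r, c]) for r, c in locations]
    # Averaging N noisy samples of the same scene point improves SNR by
    # roughly sqrt(N), which is the benefit described above.
    return sum(samples) / len(samples)

# Toy usage: three identified locations for the same scene point.
img = np.random.default_rng(0).normal(100.0, 5.0, size=(480, 640))
print(bin_scene_point(img, [(10, 12), (10, 76), (10, 140)]))
```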
  • the processing circuitry 12 may verify that the data from each of the identified pixel locations relates to the same point in the scene. This may be done by comparing the depths of the identified pixel locations with one another. Alternatively or additionally, it may be done by comparing the pixel values of the identified pixel locations with one another.
  • an additional/further plenoptic image may be captured prior to the capture of the plenoptic image that is used for data combination/binning.
  • the earlier-captured plenoptic image may be used to determine a depth that is used to identify the pixel locations corresponding with a particular point in a scene. The identified pixel locations are then applied to the later-captured plenoptic image in order to combine/bin data and form a single pixel for the particular point in the scene.
  • Fig. 8 illustrates a more detailed description of some embodiments of the method illustrated in Fig. 7, in which the data that is combined at various pixel locations is digital data.
  • a plenoptic image 31 is captured in an analog format by the apparatus 20.
  • the plenoptic image 31 formed on the image sensor 26 is converted from the analog format to a digital format by the processing circuitry 12.
  • the processing circuitry 12 analyses the (digital) plenoptic image 31 to produce a depth map 35 for each pixel in the plenoptic image 31.
  • a depth map 35 for a particular portion of a scene may be produced by comparing a portion of one sub-image with a portion of another sub-image to identify a matching content portion (that is, a matching set of pixels).
  • the processing circuitry 12 may use the offset/disparity in the position of that content portion from one sub-image relative to another to determine the depth of that content portion. This process may be repeated for each and every portion in an imaged scene to generate a depth map 35 for the whole of the plenoptic image 31 .
  • the depth map 35 may, for example, include a depth value for each pixel in the plenoptic image 31 .
  • Each depth value may be a virtual depth (that is, a distance from the micro-lens array 24 to the image formed by the main lens 22) or a real depth value (that is, a distance from the micro-lens array 24 to the real-life scene point).
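One common way to obtain such disparities (and hence depths, via equation (3)) is block matching between neighbouring sub-images. The sketch below uses a sum-of-absolute-differences (SAD) search; the patch size, search range and horizontal-only search are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hedged sketch of the matching behind the depth-map step (block 803):
# estimate the disparity of a patch centred at (r, c) in one sub-image by
# searching along the same row of a horizontally neighbouring sub-image.

def patch_disparity(sub_a: np.ndarray, sub_b: np.ndarray,
                    r: int, c: int, patch: int = 7, search: int = 15) -> int:
    half = patch // 2          # caller must pass an interior (r, c)
    ref = sub_a[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    best_sad, best_d = np.inf, 0
    for d in range(search + 1):
        if c + d + half >= sub_b.shape[1]:
            break              # candidate window would leave the sub-image
        cand = sub_b[r - half:r + half + 1, c + d - half:c + d + half + 1]
        sad = np.abs(ref - cand).sum()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d              # disparity in pixels
```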
  • the processing circuitry 12 applies equation (3) to identify, for each point in a scene, pixel locations in different sub-images of the plenoptic image 31 where the scene point has been imaged.
  • each scene point may appear in multiple sub-images in the plenoptic image, and the number of sub-images in which a particular scene point appears will depend upon its proximity to the plenoptic camera when the image was captured. Let us consider a situation where a particular point in a scene has been imaged at a pixel location P on the image sensor 26, in a sub-image formed by a first micro-lens.
  • the processing circuitry 12 may compare the depth of one pixel location P with the depth of the other pixel location P' using the depth map 35. In the event that the depths of the pixel locations P, P' are the same or similar, the processing circuitry 12 determines that the pixel locations P, P' relate to the same scene point. Alternatively or additionally, the processing circuitry 12 may compare the values of the pixels at each pixel location P, P' with one another. In the event that they are the same or similar, the processing circuitry 12 determines that the pixel locations P, P' relate to the same scene point. At block 805 in Fig. 8, the processing circuitry 12 combines the data from the pixel locations that were identified in block 804 as relating to the same scene point (for example, by binning the data from the identified pixel locations).
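A hedged sketch of this verify-then-bin logic (blocks 804-805) follows; the tolerances and the choice of averaging are illustrative assumptions:

```python
import numpy as np

# Verify that candidate pixel locations show the same scene point as a
# reference location, by comparing depths (from the depth map 35) and/or
# pixel values, then bin the accepted values into one output pixel.

def verify_and_bin(plenoptic: np.ndarray, depth_map: np.ndarray,
                   ref: tuple[int, int], candidates: list[tuple[int, int]],
                   depth_tol: float = 0.05, value_tol: float = 10.0) -> float:
    accepted = [float(plenoptic[ref])]
    for loc in candidates:
        same_depth = abs(depth_map[loc] - depth_map[ref]) <= depth_tol
        same_value = abs(plenoptic[loc] - plenoptic[ref]) <= value_tol
        if same_depth and same_value:  # either test alone is also possible
            accepted.append(float(plenoptic[loc]))
    return float(np.mean(accepted))   # single pixel for the scene point
```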
  • an output image with a high signal to noise ratio is produced by the processing circuitry 12 and stored in the memory 14.
  • the output image is of a conventional/standard format (that is, as opposed to in a plenoptic format) in which there is a single pixel for each individual point in a real-life scene.
  • the processing circuitry 12 may be configured to produce an output image in a standard/conventional format from the plenoptic image 31 , and generate a depth map 35 for each pixel in that output image.
  • the processing circuitry 12 may then apply equation (3) to identify, for each scene point imaged in the output image, multiple pixel locations in the plenoptic image 31 where the scene point has been reproduced.
  • the processing circuitry 12 may compare the values of the pixels at the identified pixel locations with the value of the corresponding pixel in the output image to verify that they relate to the same real-life scene point. Depth values cannot be compared in these embodiments because the depth map 35 only includes depth values for the pixels in the output image.
  • in these embodiments, a depth map 35 is produced only for the pixels in an output image generated from the plenoptic image 31, rather than for each of the pixels in the plenoptic image 31 itself, because the plenoptic image 31 contains many more pixels. For example, if a plenoptic image includes 10 megapixels, the output image might only include around 2 megapixels.
  • Fig. 9 illustrates a more detailed description of some embodiments of the method illustrated in Fig. 7, in which the data that is combined at various pixel locations is analog data.
  • the analog data that is combined may, for example, be voltage or charge values.
  • the image sensor 26 is a destructive readout image sensor 26. That is, when analog data is read from the sensor 26, it is destroyed and cannot be recovered.
  • a first plenoptic image 31 is captured by the apparatus 20.
  • the processing circuitry 12 digitizes the first plenoptic image 31 as described above in relation to block 802 of Fig. 8.
  • a depth map 35 is generated for each of the pixels in the first plenoptic image 31 as described above in relation to block 803 of Fig. 8.
  • the processing circuitry 12 applies equation (3) to identify, for each point in a scene, multiple pixel locations in the first plenoptic image 31 where the scene point has been imaged, as described above in relation to block 804 of Fig. 8.
  • the apparatus 20 captures a second plenoptic image 33.
  • the processing circuitry 12 uses the second plenoptic image 33 to combine analog data from the pixel locations that were identified in block 904 (from the first plenoptic image 31) as relating to the same scene point.
  • the processing circuitry 12 may, for example, bin the data from the identified pixel locations.
  • an output image with a high signal to noise ratio is produced by the processing circuitry 12 and stored in the memory 14.
  • the output image is of a conventional/standard format (as opposed to in a plenoptic format) in which there is a single pixel for each individual point in a real-life scene.
  • combining/binning analog data often produces an output image with a higher signal to noise ratio than combining/binning digital data.
  • the second plenoptic image 33 may be captured before, at the same time as, or after the generation of the depth map 35 in block 903 and the identification of the pixel locations in block 904.
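Because the analog combination happens on the image sensor itself, before destructive readout, the Fig. 9 flow can only be sketched against an assumed hardware interface. The sensor and processing methods below (capture_viewfinder, digitize, bin_on_sensor, read_out, and so on) are entirely hypothetical, and the block numbering beyond 903/904 is inferred:

```python
# Hedged orchestration sketch of the Fig. 9 flow, against a hypothetical
# sensor/processing API. Not a real driver interface.

def capture_high_snr_image(sensor, processing):
    first = sensor.capture_viewfinder()              # first plenoptic image
    digital_first = processing.digitize(first)       # analog -> digital (cf. block 902)
    depth_map = processing.depth_map(digital_first)  # per-pixel depth map (block 903)
    groups = processing.identify_pixel_groups(depth_map)  # via equation (3) (block 904)
    sensor.expose()                                  # second plenoptic image
    sensor.bin_on_sensor(groups)                     # combine analog charge/voltage
    return sensor.read_out()                         # destructive readout of binned pixels
```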
  • the first and second plenoptic images 31, 33 may be full resolution plenoptic images. That is, the first and second plenoptic images 31, 33 may use the full resolution of the image sensor 26 for a particular image aspect ratio (such as 4:3, 16:9, etc.).
  • the second plenoptic image 33 may be a full resolution image and the first plenoptic image 31 may be a lower resolution image, such as a viewfinder image.
  • a viewfinder image is an image that is captured and output to a display, enabling a user to see a scene on the display in real-time. Use of a viewfinder image may reduce the time between capturing the first and second plenoptic images 31, 33, advantageously minimizing any differences in content between the two plenoptic images 31, 33. Another potential benefit is that it may take less time for the processing circuitry 12 to generate a depth map 35 for a viewfinder image than for a full resolution image.
  • the first and second plenoptic images 31, 33 are different frames of a video. They may be consecutive frames of a video. In such embodiments, it may not be necessary to generate a depth map 35 for every frame in the video in the manner described above. Instead, depth tracking may be performed in which the processing circuitry 12 adjusts the depth map 35 from frame to frame by analyzing how the content in the video changes from one frame to the next. New depth values may be determined for "new content" that appears in a particular frame, whereas old depth values (determined in relation to a prior frame) may be maintained for "old content" that was present in one or more prior frames.
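A hedged sketch of that depth-tracking idea follows. The per-pixel change test and its threshold are illustrative assumptions, and estimate_depth stands in for whatever depth-estimation routine is used:

```python
import numpy as np

# Keep the previous frame's depth values where the content is unchanged
# ("old content") and recompute depth only where the frame differs
# ("new content").

def track_depth(prev_frame: np.ndarray, cur_frame: np.ndarray,
                prev_depth: np.ndarray, estimate_depth,
                change_tol: float = 8.0) -> np.ndarray:
    changed = np.abs(cur_frame.astype(float)
                     - prev_frame.astype(float)) > change_tol
    depth = prev_depth.copy()
    if changed.any():
        depth[changed] = estimate_depth(cur_frame)[changed]
    return depth
```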
  • the image sensor 26 is not a destructive readout sensor.
  • references to 'computer-readable storage medium', 'processing circuitry', 'processor' etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures, but also specialized circuits such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), signal processing devices and other processing circuitry.
  • References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
  • the term 'circuitry' refers to all of the following:
  • circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
  • the blocks illustrated in Figs 6, 7, 8 and 9 may represent steps in a method and/or sections of code in the computer program 17. The illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks, and the order and arrangement of the blocks may be varied. Furthermore, it may be possible for some blocks to be omitted.
  • the apparatus 10 illustrated in Fig. 2 may form part of a computer rather than a plenoptic camera such as that illustrated in Fig. 3.
  • the apparatus that performs the image processing to produce an output image having a high signal to noise ratio need not be or form part of the apparatus that was used to capture the original plenoptic image.
  • the method(s) described above may also be applied to a plenoptic image captured by an array of cameras.
  • generation of the depth map, identification of pixels corresponding with individual points in the scene, and binning of the identified pixels can be performed as described above in relation to a Plenoptic Camera 2.0.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to a method, an apparatus and a computer program. The method comprises: identifying, in each of a plurality of sub-images of a plenoptic image, a pixel location corresponding with a particular point in a scene; and combining data from the identified pixel locations to form a single pixel for the particular point in the scene.
PCT/IB2013/052353 2013-02-07 2013-03-25 Image processing of sub-images of a plenoptic image WO2014122506A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN529/CHE/2013 2013-02-07
IN529CH2013 2013-02-07

Publications (1)

Publication Number Publication Date
WO2014122506A1 (fr) 2014-08-14

Family

ID=48446418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/052353 WO2014122506A1 (fr) 2013-02-07 2013-03-25 Image processing of sub-images of a plenoptic image

Country Status (1)

Country Link
WO (1) WO2014122506A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016149438A1 (fr) * 2015-03-17 2016-09-22 Cornell University Depth field imaging apparatus, methods, and applications

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020061131A1 (en) * 2000-10-18 2002-05-23 Sawhney Harpreet Singh Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery
US20100003024A1 (en) * 2007-12-10 2010-01-07 Amit Kumar Agrawal Cameras with Varying Spatio-Angular-Temporal Resolutions
EP2244484A1 (fr) * 2009-04-22 2010-10-27 Raytrix GmbH Digital imaging system for synthesizing an image using data recorded with a plenoptic camera
US8345144B1 (en) * 2009-07-15 2013-01-01 Adobe Systems Incorporated Methods and apparatus for rich image capture with focused plenoptic cameras
US20120242855A1 (en) * 2011-03-24 2012-09-27 Casio Computer Co., Ltd. Device and method including function for reconstituting an image, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ADELSON E H ET AL: "SINGLE LENS STEREO WITH A PLENOPTIC CAMERA", TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE, PISCATAWAY, USA, vol. 14, no. 2, 28 February 1992 (1992-02-28), pages 99 - 106, XP000248474, ISSN: 0162-8828, DOI: 10.1109/34.121783 *
CHRISTIAN PERWASS ET AL: "Single lens 3D-camera with extended depth-of-field", PROCEEDINGS OF SPIE, vol. 8291, 5 February 2012 (2012-02-05), pages 829108 - 1, XP055072572, ISSN: 0277-786X, DOI: 10.1117/12.909882 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016149438A1 (fr) * 2015-03-17 2016-09-22 Cornell University Depth field imaging apparatus, methods, and applications
US10605916B2 (en) 2015-03-17 2020-03-31 Cornell University Depth field imaging apparatus, methods, and applications
US10983216B2 (en) 2015-03-17 2021-04-20 Cornell University Depth field imaging apparatus, methods, and applications

Similar Documents

Publication Publication Date Title
CN110428366B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN106412214B (zh) Terminal and photographing method for terminal
US9524556B2 (en) Method, apparatus and computer program product for depth estimation
Georgiev et al. Lytro camera technology: theory, algorithms, performance analysis
US8401316B2 (en) Method and apparatus for block-based compression of light-field images
US9390530B2 (en) Image stitching
CN109474780B (zh) Method and apparatus for image processing
Villalba et al. Smartphone image clustering
TWI615027B (zh) High dynamic range image generation method, photographing device, terminal device and imaging method
EP2786556B1 (fr) Controlling image capture and/or controlling image processing
US20130021504A1 (en) Multiple image processing
US9342875B2 (en) Method for generating image bokeh effect and image capturing device
US10284770B2 (en) Dual-camera focusing method and apparatus, and terminal device
WO2019056527A1 (fr) Capture method and device
JP2015231220A (ja) Image processing device, imaging device, image processing method, imaging method and program
TW201501533A (zh) Method for adjusting focus position and electronic device
US7796806B2 (en) Removing singlet and couplet defects from images
KR102069269B1 (ko) Image stabilization apparatus and method
WO2022160857A1 (fr) Image processing method and apparatus, computer-readable storage medium, and electronic device
JP2014120122A (ja) Region extraction device, region extraction method, and computer program
Gao et al. Camera model identification based on the characteristic of CFA and interpolation
Zhou et al. Unmodnet: Learning to unwrap a modulo image for high dynamic range imaging
TW201607296A (zh) Method for rapidly generating an image depth-of-field map and image processing device
TW201911853A (zh) Dual-camera image capturing device and image capturing method thereof
WO2014122506A1 (fr) Image processing of sub-images of a plenoptic image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13723218

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13723218

Country of ref document: EP

Kind code of ref document: A1