WO2015055892A1 - Method, apparatus and computer program product for detection and correction of image defects - Google Patents

Method, apparatus and computer program product for detection and correction of image defects

Info

Publication number
WO2015055892A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
micro
object point
disparity value
corresponding object
Prior art date
Application number
PCT/FI2014/050776
Other languages
English (en)
Inventor
Gururaj Gopal Putraya
Basavaraja S V
Mithun Uliyar
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of WO2015055892A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/21 Indexing scheme for image data processing or generation, in general, involving computational photography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10052 Images from lightfield camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Definitions

  • Various implementations relate generally to a method, apparatus, and computer program product for detection and correction of an image defect.
  • In image sensors, such as complementary metal oxide semiconductor (CMOS) sensors, each row starts exposure at a small time difference compared to the neighboring row, thereby resulting in an image defect phenomenon known as rolling shutter artifact. Due to the rolling shutter artifact, scenes comprising fast-moving objects or fast-changing flashes may be captured with various undesirable visual effects.
  • a method comprising: accessing an image comprising a plurality of object points associated with a scene, the image comprising a plurality of micro-images arranged in a plurality of rows; determining a first disparity value based on an object point of the plurality of object points in a first micro-image and a corresponding object point in at least one second micro-image, the first micro-image and the at least one second micro-image being associated with a row of the plurality of rows; determining a second disparity value based on the object point in the first micro-image and a corresponding object point in at least one third micro-image, the at least one third micro-image being associated with at least one neighboring row of the row; and comparing the first disparity value and the second disparity value to determine a presence of an image defect in the image.
  • an apparatus comprising at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least: access an image comprising a plurality of object points associated with a scene, the image comprising a plurality of micro-images arranged in a plurality of rows; determine a first disparity value based on an object point of the plurality of object points in a first micro-image and a corresponding object point in at least one second micro-image, the first micro-image and the at least one second micro-image being associated with a row of the plurality of rows; determine a second disparity value based on the object point in the first micro-image and a corresponding object point in at least one third micro-image, the at least one third micro-image being associated with at least one neighboring row of the row; and compare the first disparity value and the second disparity value to determine a presence of an image defect in the image.
  • a computer program product comprising at least one computer-readable storage medium, the computer-readable storage medium comprising a set of instructions, which, when executed by one or more processors, cause an apparatus to perform at least: access an image comprising a plurality of object points associated with a scene, the image comprising a plurality of micro-images arranged in a plurality of rows; determine a first disparity value based on an object point of the plurality of object points in a first micro-image and a corresponding object point in at least one second micro-image, the first micro-image and the at least one second micro-image being associated with a row of the plurality of rows; determine a second disparity value based on the object point in the first micro-image and a corresponding object point in at least one third micro-image, the at least one third micro-image being associated with at least one neighboring row of the row; and compare the first disparity value and the second disparity value to determine a presence of an image defect in the image.
  • an apparatus comprising: means for accessing an image comprising a plurality of object points associated with a scene, the image comprising a plurality of micro-images arranged in a plurality of rows; means for determining a first disparity value based on an object point of the plurality of object points in a first micro-image and a corresponding object point in at least one second micro-image, the first micro-image and the at least one second micro-image being associated with a row of the plurality of rows; means for determining a second disparity value based on the object point in the first micro-image and a corresponding object point in at least one third micro-image, the at least one third micro-image being associated with at least one neighboring row of the row; and means for comparing the first disparity value and the second disparity value to determine a presence of an image defect in the image.
  • a computer program comprising program instructions which when executed by an apparatus, cause the apparatus to: access an image comprising a plurality of object points associated with a scene, the image comprising a plurality of micro-images arranged in a plurality of rows; determine a first disparity value based on an object point of the plurality of object points in a first micro-image and a corresponding object point in at least one second micro-image, the first micro-image and the at least one second micro-image being associated with a row of the plurality of rows; determine a second disparity value based on the object point in the first micro-image and a corresponding object point in at least one third micro-image, the at least one third micro-image being associated with at least one neighboring row of the row; and compare the first disparity value and the second disparity value to determine a presence of an image defect in the image.
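As an illustration (not part of the claims), the comparison of within-row and cross-row disparity values may be sketched as follows; the block-matching scheme, the function names, the patch size, and the tolerance below are illustrative assumptions rather than the patent's own implementation:

```python
import numpy as np

def block_disparity(patch, strip):
    """Locate `patch` inside the horizontal `strip` by sum-of-absolute-
    differences and return the column offset of the best match."""
    h, w = patch.shape
    scores = [np.abs(strip[:, x:x + w] - patch).sum()
              for x in range(strip.shape[1] - w + 1)]
    return int(np.argmin(scores))

def defect_at_point(first_mi, second_mi, third_mi, patch_xy, size=8, tol=1):
    """Compare the within-row disparity (first vs. second micro-image)
    with the cross-row disparity (first vs. third micro-image).
    A difference larger than `tol` pixels suggests an image defect
    such as a rolling shutter artifact."""
    x, y = patch_xy
    patch = first_mi[y:y + size, x:x + size]
    d_row = block_disparity(patch, second_mi[y:y + size, :])       # same row
    d_neighbor = block_disparity(patch, third_mi[y:y + size, :])   # neighboring row
    return abs(d_row - d_neighbor) > tol, d_row, d_neighbor
```

When the scene is static, the two disparity values agree (up to noise); a fast-moving object captured under a rolling shutter shifts between row exposures, so the cross-row disparity deviates and the point is flagged.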
  • FIGURES 1A and 1B illustrate an example image and an example image capturing device configuration, respectively, in accordance with an example embodiment;
  • FIGURE 2 illustrates a device, in accordance with an example embodiment;
  • FIGURE 3 illustrates an example block diagram of an apparatus, in accordance with an example embodiment;
  • FIGURE 4 illustrates an example representation of an image portion devoid of an image defect, in accordance with an example embodiment;
  • FIGURE 5 illustrates an example representation of an image portion comprising an image defect, in accordance with an example embodiment;
  • FIGURE 6 illustrates an example representation of correction of an image defect in an image, in accordance with an example embodiment;
  • FIGURE 7 is a flowchart depicting an example method, in accordance with an example embodiment.
  • FIGURE 8 is a flowchart depicting an example method for detection and correction of an image defect, in accordance with another example embodiment.
  • the image may be a light-field image.
  • the term 'light-field image' may refer to an infinite collection of vectors representative of the light converging at a point from all possible angles in three dimensions (3D).
  • a light-field image is a complete representation of a visual scene and contains all possible views of the scene.
  • the light-field image comprises angular information, for example, four-dimensional (4D) information of all the light rays associated with the scene in 3D.
  • An exemplary light-field image is illustrated with reference to FIGURE 1A.
  • the image may be captured by utilizing a light-field image capturing device, such as a plenoptic camera.
  • FIGURE 1A illustrates an example of a light-field image 102 in accordance with an embodiment.
  • the light-field image 102 comprises a 2D image that includes a plurality of small images associated with a scene.
  • the plurality of small images may be termed as an array of "micro-images".
  • each of the micro-images associated with the scene may comprise depth information associated with the scene.
  • a device configured to capture the light-field image may include an array of micro-lenses that enables the light-field camera to record not only image intensity, but also the distribution of intensity in different directions at each point. For generating an image from the light-field image, pixels from multiple micro-images may be selected.
  • An example configuration of micro-lenses in a device configured to capture a light-field image is illustrated and described with reference to FIGURE 1B.
  • An example device configured for capturing a light-field image, along with various components thereof, is described with reference to FIGURE 2.
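As an illustrative sketch of the pixel-selection step described above, extracting the same pixel from under every micro-lens yields one sub-aperture view of the scene; the 4-D array layout and dimensions assumed below are for illustration only:

```python
import numpy as np

def sub_aperture_view(lightfield, u, v):
    """Extract one sub-aperture view from a light-field image.

    `lightfield` is assumed to be laid out as a 4-D array:
    (micro-image rows, micro-image columns,
     pixels per micro-image vertically, pixels per micro-image horizontally).
    Picking the same (u, v) pixel under every micro-lens produces an
    image of the scene as seen from a single direction."""
    return lightfield[:, :, u, v]

# A 10x12 grid of 5x5-pixel micro-images:
lf = np.arange(10 * 12 * 5 * 5, dtype=float).reshape(10, 12, 5, 5)
center_view = sub_aperture_view(lf, 2, 2)   # shape (10, 12)
```

Selecting different (u, v) offsets yields views from slightly different directions, which is what makes the disparity measurements between micro-images possible.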
  • FIGURE 1B illustrates an example configuration of a device 120, in accordance with an embodiment.
  • the device 120 is configured to capture a light-field image.
  • the device 120 may include an array of micro-lenses comprising, for example, a micro-lens 122 and a micro-lens 124, and an image sensor 126.
  • the array of micro-lenses is configured to create a map of light intensity for an object, for example, an object located at point 128 in the image at an image plane of the main lens.
  • the array of micro-lenses may be configured at a distance (represented as 130) from the image sensor 126.
  • the image sensor 126 may be a charge-coupled device (CCD).
  • the rays of light may be incident at the optical element, thereby generating an image, for example, images 132, 134, at an image plane at a focal distance from the optical element.
  • Each micro-lens may split a beam coming towards it from the optical element into rays coming from different "pinhole" locations on the aperture of the optical element.
  • Each of the rays may be recorded as a pixel on the image sensor 126, and the pixels under each micro-lens may collectively form an n-pixel image.
  • the n-pixel image under each micro-lens may be referred to as a macro-pixel, and the device may generate a micro-image at each macro-pixel.
  • the light-field image captured by the device may comprise a plurality of micro-images of a scene.
  • the light-field image may be processed for generating an image that is devoid of image defects such as the rolling shutter artifact.
  • the depth offset associated with the micro-lens may be computed based on the following expression (assuming pin-hole imaging through adjacent micro-lenses): |P − P'| = D(1 + B/v), where:
  • D is the distance between adjacent micro-lenses, for example, micro-lenses 122, 124;
  • p is the object point (shown as 128 in FIGURE 1B) imaged by the main lens in front of the micro-lenses;
  • v is the distance (shown as 136 in FIGURE 1B) between the imaged object point 128 and the micro-lens array (comprising micro-lenses 122, 124);
  • the position of the imaged object point depends on the depth at which the point is present in front of the image capturing device; accordingly, the distance v depends on the depth of the scene (v ∝ depth);
  • B is the distance (shown as 130 in FIGURE 1B) between the micro-lens array and the sensor;
  • P is the pixel location (shown as 132 in FIGURE 1B) where the object point p is imaged for the top micro-lens 122 (assuming pin-hole imaging);
  • P' is the pixel location (shown as 134 in FIGURE 1B) where the object point p is imaged for the bottom micro-lens 124 (assuming pin-hole imaging).
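Under the pin-hole geometry defined by D, B and v above, the separation of the two images of an object point on the sensor is D(1 + B/v), which can be inverted to recover v (and hence depth). A small numeric check, with illustrative values for D and B:

```python
def separation(D, B, v):
    """Sensor-plane separation |P - P'| of the two images of an object
    point at distance v in front of the micro-lens array, for micro-lens
    pitch D and array-to-sensor distance B (pin-hole model)."""
    return D * (1.0 + B / v)

def depth_from_separation(D, B, s):
    """Invert the relation: v = B / (s / D - 1)."""
    return B / (s / D - 1.0)

D, B = 0.2, 1.0   # illustrative values (e.g. mm), not from the patent
for v in (5.0, 10.0, 20.0):
    s = separation(D, B, v)
    assert abs(depth_from_separation(D, B, s) - v) < 1e-9
```

Note that the separation decreases toward the micro-lens pitch D as v grows, which is why the measured disparity between neighboring micro-images encodes the scene depth.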
  • FIGURE 2 illustrates a device 200 in accordance with an example embodiment. It should be understood, however, that the device 200 as illustrated and hereinafter described is merely illustrative of one type of device that may benefit from various embodiments and, therefore, should not be taken to limit the scope of the embodiments. As such, it should be appreciated that at least some of the components described below in connection with the device 200 may be optional, and thus in an example embodiment the device 200 may include more, fewer or different components than those described in connection with the example embodiment of FIGURE 2.
  • the device 200 could be any of a number of types of electronic devices, for example, portable digital assistants (PDAs), pagers, mobile televisions, gaming devices, cellular phones, all types of computers (for example, laptops, mobile computers or desktops), cameras, audio/video players, radios, global positioning system (GPS) devices, media players, mobile digital assistants, or any combination of the aforementioned, and other types of communications devices.
  • the device 200 may include an antenna 202 (or multiple antennas) in operable communication with a transmitter 204 and a receiver 206.
  • the device 200 may further include an apparatus, such as a controller 208 or other processing device that provides signals to and receives signals from the transmitter 204 and receiver 206, respectively.
  • the signals may include signaling information in accordance with the air interface standard of the applicable cellular system, and/or may also include data corresponding to user speech, received data and/or user generated data.
  • the device 200 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types.
  • the device 200 may be capable of operating in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like.
  • the device 200 may be capable of operating in accordance with second-generation (2G) wireless communication protocols IS-136 (time division multiple access (TDMA)), GSM (global system for mobile communication), and IS-95 (code division multiple access (CDMA)), or with third-generation (3G) wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA), with 3.9G wireless communication protocols such as evolved universal terrestrial radio access network (E-UTRAN), with fourth-generation (4G) wireless communication protocols, or the like.
  • The device 200 may also be capable of operating in accordance with non-cellular communication mechanisms, for example, computer networks such as the Internet, local area networks, wide area networks, and the like; short range wireless communication networks such as Bluetooth® networks, Zigbee® networks, Institute of Electrical and Electronics Engineers (IEEE) 802.11x networks, and the like; and wireline telecommunication networks such as the public switched telephone network (PSTN).
  • the controller 208 may include circuitry implementing, among others, audio and logic functions of the device 200.
  • the controller 208 may include, but is not limited to, one or more digital signal processor devices, one or more microprocessor devices, one or more processor(s) with accompanying digital signal processor(s), one or more processor(s) without accompanying digital signal processor(s), one or more special-purpose computer chips, one or more field-programmable gate arrays (FPGAs), one or more controllers, one or more application-specific integrated circuits (ASICs), one or more computer(s), various analog to digital converters, digital to analog converters, and/or other support circuits. Control and signal processing functions of the device 200 are allocated between these devices according to their respective capabilities.
  • the controller 208 thus may also include the functionality to convolutionally encode and interleave messages and data prior to modulation and transmission.
  • the controller 208 may additionally include an internal voice coder, and may include an internal data modem.
  • the controller 208 may include functionality to operate one or more software programs, which may be stored in a memory.
  • the controller 208 may be capable of operating a connectivity program, such as a conventional Web browser.
  • the connectivity program may then allow the device 200 to transmit and receive Web content, such as location-based content and/or other web page content, according to a Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP) and/or the like.
  • the controller 208 may be embodied as a multi-core processor such as a dual or quad core processor. However, any number of processors may be included in the controller 208.
  • the device 200 may also comprise a user interface including an output device such as a ringer 210, an earphone or speaker 212, a microphone 214, a display 216, and a user input interface, which may be coupled to the controller 208.
  • the user input interface which allows the device 200 to receive data, may include any of a number of devices allowing the device 200 to receive data, such as a keypad 218, a touch display, a microphone or other input device.
  • the keypad 218 may include numeric (0-9) and related keys (#, *), and other hard and soft keys used for operating the device 200.
  • the keypad 218 may include a conventional QWERTY keypad arrangement.
  • the keypad 218 may also include various soft keys with associated functions.
  • the device 200 may include an interface device such as a joystick or other user input interface.
  • the device 200 further includes a battery 220, such as a vibrating battery pack, for powering various circuits that are used to operate the device 200, as well as optionally providing mechanical vibration as a detectable output.
  • the device 200 includes a media-capturing element, such as a camera, video and/or audio module, in communication with the controller 208.
  • the media-capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission.
  • the camera module 222 may include a digital camera (or array of multiple cameras) capable of forming a digital image file from a captured image.
  • the camera module 222 includes all hardware, such as a lens or other optical component(s), and software for creating a digital image file from a captured image.
  • the camera module 222 may include the hardware needed to view an image, while a memory device of the device 200 stores instructions for execution by the controller 208 in the form of software to create a digital image file from a captured image.
  • the camera module 222 may further include a processing element such as a co-processor, which assists the controller 208 in processing image data and an encoder and/or decoder for compressing and/or decompressing image data.
  • the encoder and/or decoder may encode and/or decode according to a JPEG standard format or another like format.
  • the encoder and/or decoder may employ any of a plurality of standard formats such as, for example, standards associated with H.261, H.262/MPEG-2, H.263, H.264, H.264/MPEG-4, MPEG-4, and the like.
  • the camera module 222 may provide live image data to the display 216.
  • the display 216 may be located on one side of the device 200 and the camera module 222 may include a lens positioned on the opposite side of the device 200 with respect to the display 216 to enable the camera module 222 to capture images on one side of the device 200 and present a view of such images to the user positioned on the other side of the device 200.
  • the camera module(s) can also be on any side, but normally on the opposite side of the display 216 or on the same side of the display 216 (for example, video call cameras).
  • the device 200 may further include a user identity module (UIM) 224.
  • the UIM 224 may be a memory device having a processor built in.
  • the UIM 224 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), or any other smart card.
  • the UIM 224 typically stores information elements related to a mobile subscriber.
  • the device 200 may be equipped with memory.
  • the device 200 may include volatile memory 226, such as volatile random access memory (RAM) including a cache area for the temporary storage of data.
  • the device 200 may also include other non-volatile memory 228, which may be embedded and/or may be removable.
  • the non-volatile memory 228 may additionally or alternatively comprise an electrically erasable programmable read only memory (EEPROM), flash memory, hard drive, or the like.
  • The memories may store any number of pieces of information, and data, used by the device 200 to implement the functions of the device 200.
  • FIGURE 3 illustrates an apparatus 300 for detection and correction of image defects in an image of a scene, in accordance with an example embodiment.
  • the apparatus 300 may be employed, for example, in the device 200 of FIGURE 2.
  • the apparatus 300 may also be employed on a variety of other devices both mobile and fixed, and therefore, embodiments should not be limited to application on devices such as the device 200 of FIGURE 2.
  • embodiments may be employed on a combination of devices including, for example, those listed above.
  • various embodiments may be embodied wholly at a single device (for example, the device 200) or in a combination of devices.
  • the devices or elements described below may not be mandatory and thus some may be omitted in certain embodiments.
  • the apparatus 300 includes or otherwise is in communication with at least one processor 302 and at least one memory 304.
  • Examples of the at least one memory 304 include, but are not limited to, volatile and/or non-volatile memories.
  • Some examples of the volatile memory include, but are not limited to, random access memory, dynamic random access memory, static random access memory, and the like.
  • Some examples of the non-volatile memory include, but are not limited to, hard disks, magnetic tapes, optical disks, programmable read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, flash memory, and the like.
  • the memory 304 may be configured to store information, data, applications, instructions or the like for enabling the apparatus 300 to carry out various functions in accordance with various example embodiments.
  • the memory 304 may be configured to buffer input data comprising media content for processing by the processor 302.
  • the memory 304 may be configured to store instructions for execution by the processor 302.
  • An example of the processor 302 may include the controller 208.
  • the processor 302 may be embodied in a number of different ways.
  • the processor 302 may be embodied as a multi-core processor, a single-core processor, or a combination of multi-core processors and single-core processors.
  • the processor 302 may be embodied as one or more of various processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like.
  • the multi-core processor may be configured to execute instructions stored in the memory 304 or otherwise accessible to the processor 302.
  • the processor 302 may be configured to execute hard coded functionality.
  • the processor 302 may represent an entity, for example, physically embodied in circuitry, capable of performing operations according to various embodiments while configured accordingly.
  • the processor 302 may be specifically configured hardware for conducting the operations described herein.
  • the processor 302 is embodied as an executor of software instructions, the instructions may specifically configure the processor 302 to perform the algorithms and/or operations described herein when the instructions are executed.
  • the processor 302 may be a processor of a specific device, for example, a mobile terminal or network device adapted for employing embodiments by further configuration of the processor 302 by instructions for performing the algorithms and/or operations described herein.
  • the processor 302 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor 302.
  • a user interface 306 may be in communication with the processor 302. Examples of the user interface 306 include, but are not limited to, input interface and/or output user interface.
  • the input interface is configured to receive an indication of a user input.
  • the output user interface provides an audible, visual, mechanical or other output and/or feedback to the user.
  • Examples of the input interface may include, but are not limited to, a keyboard, a mouse, a joystick, a keypad, a touch screen, soft keys, and the like.
  • Examples of the output interface include, but are not limited to, a display such as a light emitting diode display, a thin-film transistor (TFT) display, a liquid crystal display, an active-matrix organic light-emitting diode (AMOLED) display, a microphone, a speaker, ringers, vibrators, and the like.
  • the user interface 306 may include, among other devices or elements, any or all of a speaker, a microphone, a display, and a keyboard, touch screen, or the like.
  • the processor 302 may comprise user interface circuitry configured to control at least some functions of one or more elements of the user interface 306, such as, for example, a speaker, ringer, microphone, display, and/or the like.
  • the processor 302 and/or user interface circuitry comprising the processor 302 may be configured to control one or more functions of one or more elements of the user interface 306 through computer program instructions, for example, software and/or firmware, stored on a memory, for example, the at least one memory 304, and/or the like, accessible to the processor 302.
  • the apparatus 300 may include an electronic device.
  • Examples of the electronic device include a communication device, a media capturing device with communication capabilities, computing devices, and the like.
  • Some examples of the electronic device may include a mobile phone, a personal digital assistant (PDA), and the like.
  • Some examples of computing device may include a laptop, a personal computer, and the like.
  • Some examples of electronic device may include a camera.
  • the electronic device may include a user interface, for example, the UI 306, having user interface circuitry and user interface software configured to facilitate a user to control at least one function of the electronic device through use of a display and further configured to respond to user inputs.
  • the electronic device may include a display circuitry configured to display at least a portion of the user interface of the electronic device. The display and display circuitry may be configured to facilitate the user to control at least one function of the electronic device.
  • the electronic device may be embodied as to include a transceiver.
  • the transceiver may be any device operating or circuitry operating in accordance with software or otherwise embodied in hardware or a combination of hardware and software.
  • the transceiver may be configured to receive media content. Examples of media content may include audio content, video content, data, and a combination thereof.
  • the electronic device may be embodied as to include a plurality of image sensors, such as an image sensor 308 and an image sensor 310. Though only two image sensors 308 and 310 are shown in the example representation of FIGURE 3, the electronic device may include more than two image sensors.
  • the image sensors 308 and 310 may be in communication with the processor 302 and/or other components of the apparatus 300.
  • the image sensors 308 and 310 may be in communication with other imaging circuitries and/or software, and are configured to capture digital images or to capture video or other graphic media.
  • the image sensors 308 and 310 and other circuitries, in combination, may be example of at least one camera module such as the camera module 222 of the device 200.
  • the image sensors 308 and 310 may also be configured to capture a plurality of micro-images depicting a scene from different positions (or different angles).
  • the image sensors 308 and 310 may be accompanied with corresponding lenses to capture two views of the scene, such as stereoscopic views.
  • These components (302-310) may communicate with each other via a centralized circuit system 312 to perform detection and correction of the rolling shutter artifact in an image of the scene.
  • the centralized circuit system 312 may be configured to, among other things, provide or enable communication between the components (302-310) of the apparatus 300.
  • the centralized circuit system 312 may be a central printed circuit board (PCB) such as a motherboard, main board, system board, or logic board.
  • the centralized circuit system 312 may also, or alternatively, include other printed circuit assemblies (PCAs) or communication channel media.
• the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to perform processing of images associated with image defects such as a rolling shutter artifact.
  • a 'rolling shutter condition' may cause a rolling shutter artifact in the captured images.
• the rolling shutter condition occurs in image sensors associated with the image capturing device, wherein each row of the micro-images begins its exposure at a small time difference compared to the neighboring row; the shutter is thus called a rolling shutter.
• the image is displayed as if captured at a single time instant; however, the capture is such that each row has a different starting time.
  • Rolling shutter artifact in the captured images is more prominent when imaging fast moving objects or fast changing flashes, and the like.
  • rolling shutter artifacts may include artifacts such as wobble, skew, smear, partial exposure, and the like.
  • the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to facilitate access of an image associated with a scene.
  • the image may include a plurality of micro-images associated with slightly different views of the scene comprising one or more objects.
  • the image may include a plurality of object points (for example, pixels) associated with the one or more objects.
• the plurality of micro-images of the scene may be captured such that there may exist a disparity in at least one object of the scene among the plurality of micro-images.
• the plurality of micro-images may comprise stereoscopic images of the scene.
• a stereo camera may capture a first micro-image and at least one second micro-image, such that the first micro-image may include a slight parallax with the at least one second micro-image representing the same scene.
• the first micro-image and the at least one second micro-image may also be received from a camera capable of capturing multiple views of the scene, for example, a multi-baseline camera, an array camera, a plenoptic camera or a light-field camera.
  • the plurality of micro-images may be prerecorded or stored in an apparatus 300, or may be received from sources external to the apparatus 300.
• the apparatus 300 is caused to receive the plurality of micro-images from an external storage medium such as a DVD, a Compact Disk (CD), a flash drive, or a memory card, or from external storage locations through the Internet, Bluetooth®, and the like.
  • a processing means may be configured to facilitate access of the plurality of micro-images of the scene comprising the one or more objects, wherein there exists a disparity in the at least one object of the scene between the plurality of micro-images, for example, the first micro-image and the at least one second micro-image.
  • An example of the processing means may include the processor 302, which may be an example of the controller 208, and/or the image sensors 308 and 310.
  • the plurality of micro-images may be arranged in a plurality of rows in the image.
• An example of arrangement of the plurality of micro-images in the plurality of rows is illustrated and explained with reference to FIGURE 4.
  • the micro-images of the plurality of micro-images may be associated with one or more disparity values.
  • the disparity values may be indicative of a depth (or distance from a reference location, such as the image capturing device) of an object point.
• the term 'disparity' may describe an offset of the object point (for example, a pixel) in a micro-image relative to a corresponding object point in another micro-image.
  • the disparity value for an object point estimated by using the two or more micro-images of a row may be equal to the disparity value for the object point estimated between the two or more micro-images associated with a neighboring row.
  • An example of computation of disparity values for two or more micro-images of a row being equal is illustrated and explained with reference to FIGURE 4.
  • the disparity values estimated by using the two or more micro-images of the same row may not be equal to the disparity values estimated between the two or more micro-images associated with neighboring rows.
  • An example rolling shutter condition being detected based on unequal disparity values for micro-images of various neighboring rows is illustrated and explained with reference to FIGURE 5.
  • the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to determine a first disparity value based on an object point of the plurality of object points in a first micro-image and a corresponding object point in at least one second micro-image associated with the plurality of micro-images.
  • the first micro-image and the at least one second micro-image are associated with a row of the plurality of rows.
  • an object point may have a single disparity value between the first micro-image and the at least one second micro-image.
  • the object point may have a single disparity value, as is illustrated and explained further with reference to FIGURE 4.
• the object point may have multiple disparity values between the first micro-image and the at least one second micro-image.
  • the object point may have disparity values associated with multiple directions, for instance an offset along x-axis and an offset along y-axis.
• the disparity values between the micro-images associated with a same row may be referred to as first disparity values, and the disparity values between the micro-images associated with neighboring rows may be referred to as second disparity values.
  • a processing means may be configured to determine the first disparity value between the at least one object point in the first micro-image and the at least one second micro-image associated with the plurality of micro-images.
  • An example of the processing means may include the processor 302, which may be an example of the controller 208, and/or the image sensors 308 and 310.
  • the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to determine the first disparity value based at least on a detected position of the object point in the first micro-image and a detected position of the corresponding object point in the at least one second micro-image.
• a difference in the locations of the object point in the first micro-image and the second micro-image may be a first disparity of the object point between the first micro-image and the second micro-image.
• a processing means may be configured to determine at least one first disparity value between the object points of the one or more objects in the first micro-image and corresponding object points in the at least one second micro-image, wherein the first disparity value for an object point between the first micro-image and the second micro-image is determined based at least on the detected position of the object point in the first micro-image and the detected position of the object point in the second micro-image.
  • An example of the processing means may include the processor 302, which may be an example of the controller 208.
  • the first disparity value may be estimated based on patch matching along an epipolar line of the first micro-image and the at least one second micro-image.
• the patch matching performed along the neighboring micro-images, for example, the first micro-image and the at least one second micro-image located along the y-axis, may provide a y-shift.
• the patch matching performed along the neighboring micro-images located along the x-axis may provide an x-shift.
  • the shift along the x-direction and the shift along the y-direction may be estimated disparity along the x-direction and the y-direction, respectively.
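For illustration, the patch matching along the epipolar line described above may be sketched as a one-dimensional sum-of-absolute-differences (SAD) search; the function name, patch size and search range below are illustrative assumptions rather than part of the disclosed embodiment.

```python
import numpy as np

def estimate_shift_1d(row_a, row_b, patch_center, patch_half=2, max_disp=5):
    """Slide a patch taken from row_a along row_b and return the shift
    (disparity) that minimises the sum of absolute differences (SAD)."""
    patch = row_a[patch_center - patch_half:patch_center + patch_half + 1]
    best_disp, best_cost = 0, float("inf")
    for d in range(-max_disp, max_disp + 1):
        start = patch_center + d - patch_half
        end = patch_center + d + patch_half + 1
        if start < 0 or end > len(row_b):
            continue  # candidate window falls outside the scanline
        cost = np.abs(patch - row_b[start:end]).sum()
        if cost < best_cost:
            best_cost, best_disp = cost, d
    return best_disp

# Scanlines of two neighboring micro-images: row_b is row_a shifted by 3 pixels.
row_a = np.array([0, 0, 0, 9, 8, 7, 0, 0, 0, 0, 0, 0], dtype=float)
row_b = np.array([0, 0, 0, 0, 0, 0, 9, 8, 7, 0, 0, 0], dtype=float)
print(estimate_shift_1d(row_a, row_b, patch_center=4))  # 3
```

Running the same search along vertically adjacent micro-images would yield the y-shift in the same manner.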
  • the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to determine a second disparity value based on the object point in the first micro-image and the at least one third micro-image.
  • the at least one third micro-image may be associated with a neighboring row of the first micro-image.
  • the second disparity value for an object point between the first micro-image and the at least one third micro-image is determined based at least on a detected position of the object point in the first micro-image and a detected position of the object point in the at least one third micro-image.
  • a difference in the locations of the object point in the first micro-image and the at least one third micro-image may be a disparity of the object point between the first micro-image and the at least one third micro-image.
  • a processing means may be configured to determine the second disparity value between the object points of the one or more objects in the first micro-image and the corresponding object points in the at least one third microimage.
  • a processing means may be configured to determine the second disparity value for the object point between the first micro-image and the at least one third micro-image based at least on the detected position of the object point in the first micro-image and the detected position of the object point in the at least one third microimage.
  • An example of the processing means may include the processor 302, which may be an example of the controller 208.
• the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to detect the object points of the one or more objects in the at least one third micro-image based on the detection of the object point of the one or more objects in the first micro-image. For instance, for every object point detected in the first micro-image, a corresponding object point is detected in the at least one third micro-image. In an example embodiment, detecting the corresponding object point comprises searching for the object point in an entire block associated with the at least one third micro-image.
  • the second disparity value may be determined based on a shift in a position of the matching object point with respect to a position of the object point in the first micro-image.
  • the second disparity value may include a shift of the matching object point in multiple directions for example, in x-direction and in y-direction.
  • the disparity value associated with an object point may include disparity in multiple directions.
  • a processing means may be configured to determine the corresponding object points in at least one third micro-image based on the detection of the object point in the first micro-image.
  • An example of the processing means may include the processor 302, which may be an example of the controller 208.
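A minimal sketch of the search for a corresponding object point across an entire block of a neighboring-row micro-image, assuming a SAD cost on small synthetic micro-images (all names and sizes are illustrative):

```python
import numpy as np

def find_corresponding_point(micro_a, micro_b, point, patch_half=1):
    """Search the entire block of micro_b for the patch around `point` in
    micro_a and return the (dy, dx) shift of the best match."""
    y, x = point
    patch = micro_a[y - patch_half:y + patch_half + 1,
                    x - patch_half:x + patch_half + 1]
    h, w = micro_b.shape
    best_shift, best_cost = (0, 0), float("inf")
    for yy in range(patch_half, h - patch_half):
        for xx in range(patch_half, w - patch_half):
            cand = micro_b[yy - patch_half:yy + patch_half + 1,
                           xx - patch_half:xx + patch_half + 1]
            cost = np.abs(patch - cand).sum()
            if cost < best_cost:
                best_cost, best_shift = cost, (yy - y, xx - x)
    return best_shift

micro_a = np.zeros((8, 8)); micro_a[2:5, 2:5] = np.arange(9).reshape(3, 3) + 1
micro_b = np.zeros((8, 8)); micro_b[3:6, 4:7] = np.arange(9).reshape(3, 3) + 1
print(find_corresponding_point(micro_a, micro_b, (3, 3)))  # (1, 2)
```

The returned (dy, dx) shift corresponds to the second disparity value for that object point when micro_b belongs to a neighboring row.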
  • the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to compare the first disparity value and the second disparity value to determine presence of an image defect in the image.
  • the image defect comprises the rolling shutter artifact.
  • a processing means may be configured to compare the first disparity value and the second disparity value to determine presence of an image defect in the image.
  • An example of the processing means may include the processor 302, which may be an example of the controller 208.
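In the simplest case, the comparison reduces to an equality test between the two disparity values; the tuple representation of (y-shift, x-shift) below is an illustrative assumption:

```python
def has_rolling_shutter(first_disparity, second_disparity):
    """A mismatch between the same-row and neighboring-row disparities
    indicates a rolling shutter artifact."""
    return first_disparity != second_disparity

# Same-row disparity is 3 pixels; the neighboring-row disparity picked up
# an extra 2-pixel x-shift, so the artifact is flagged.
print(has_rolling_shutter((3, 0), (3, 2)))  # True
print(has_rolling_shutter((3, 0), (3, 0)))  # False
```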
  • the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to correct the rolling shutter artifact in the image.
  • a patch-based method may be utilized for reconstructing the image such that the rolling shutter artifact is corrected.
  • the rolling shutter artifact may be corrected based on the first disparity value and the second disparity value.
  • a processing means may be configured to correct the rolling shutter artifact in the image.
  • An example of the processing means may include the processor 302, which may be an example of the controller 208.
  • the processor 302 is configured to, with the content of the memory 304, and optionally with other components described herein, to cause the apparatus 300 to correct the rolling shutter artifact by selecting shifted patches from the at least one third micro-image.
  • the patches may be shifted by an amount equal to the shift in the at least one third micro-image. For example, if the disparity between the first micro-image and the at least one second micro-image is 'x' number of pixels, then while reconstructing the image, the patch may be selected from a distance that is shifted by 'x' pixels from the center of the at least one second micro-image.
  • the patch may be selected from a distance that is shifted by 'y' pixels from the center of the at least one third micro-image.
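Selecting a patch displaced from a micro-image center by the estimated shift might be sketched as follows (the helper name, patch size and shift convention are assumptions for illustration):

```python
import numpy as np

def select_shifted_patch(micro_image, shift, patch_half=1):
    """Cut a patch whose centre is displaced from the micro-image centre
    by the estimated disparity shift (dy, dx)."""
    h, w = micro_image.shape
    cy, cx = h // 2 + shift[0], w // 2 + shift[1]
    return micro_image[cy - patch_half:cy + patch_half + 1,
                       cx - patch_half:cx + patch_half + 1]

micro = np.arange(49).reshape(7, 7)
centre_patch = select_shifted_patch(micro, (0, 0))
shifted_patch = select_shifted_patch(micro, (0, 2))  # 2-pixel x-shift
print(centre_patch[1, 1], shifted_patch[1, 1])  # 24 26
```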
  • FIGURE 4 illustrates an example representation of disparity determination in an image 400, in accordance with an example embodiment.
• the image 400 may be a portion of a stereoscopic image, for example, the image 102 (FIGURE 1A).
• It may be assumed that the image 102 is not associated with an image defect, such as a rolling shutter artifact.
  • the image portion 400 may include a plurality of micro-images such as micro-images 402, 404, 406, 408, 410, 412, 414, 416 and 418 that may be arranged in a plurality of rows.
  • the micro-images 402, 404, and 406 may be arranged in a first row, the micro-images 408, 410, 412 in a second row and the micro-images 414, 416 and 418 in a third row.
  • the micro-images may comprise a plurality of object points associated with one or more objects of the image.
  • the plurality of object points are represented by the plurality of small boxes in FIGURE 4.
• the micro-image 402 may include object points such as object points 420, 422, 424.
• the object points 420, 422, 424 may be associated with distinct portions of an object in the image, and are thus depicted in different shades/colors.
• each of the micro-images 402, 404, 406, 408, 410, 412, 414, 416 and 418 may include copies of object points, such as the object points 420, 422, 424 (shown in the micro-image 402).
  • the object points corresponding to the object points 420, 422, 424 in the micro-images 404, 406, 408, 410, 412, 414, 416 and 418 are shown in same color as that of the object points 420, 422, 424.
• 'D11' may represent the first disparity value calculated for an object point in the micro-image 404 using the micro-images 402 and 406; 'D12' may represent the second disparity value for the same object point in the micro-image 404 estimated using the micro-images 408, 410 and 412; 'D13' may represent the second disparity value for the same object point in the micro-image 404 estimated using the micro-images 414, 416 and 418; 'D22' may represent the first disparity value estimated for the object point in the micro-image 410 using the micro-images 408 and 412; 'D33' may represent the first disparity value estimated for the object point in the micro-image 416 using the micro-images 414 and 418; and 'D23' may represent the second disparity value estimated for the object point in the micro-image 410 using the micro-images 414, 416 and 418.
• the first disparity value (D11) estimated using the first row may be equal to 3 pixels.
• the first disparity value (D22) estimated using the second row and the first disparity value (D33) estimated using the third row are also equal to 3 pixels.
• the second disparity value D12 estimated for the object points associated with the micro-image 404 using the micro-images 408, 410, 412 of the second row is also 3 pixels.
  • the disparity value (for example, the first disparity value and/or the second disparity value) of an object point in the plurality of micro-images may be determined by determining a shift in the pixels associated with the same object points in the neighboring micro-images.
• the object points (or pixels) corresponding to the object points (or pixels) 420, 422, and 424 respectively of the micro-image 402 have shifted in other micro-images such as the micro-images 404, 406, 408, 410, 412, 414, 416 and 418.
  • the relative shift of the corresponding object points (or pixels) in the neighboring micro-images may be utilized in determining the disparity values in the image.
  • FIGURE 5 illustrates an example representation of disparity determination in an image portion 500, in accordance with an example embodiment.
  • the image portion 500 may include a plurality of micro-images such as micro-images 502, 504, 506, 508, 510, 512, 514, 516 and 518 that may be arranged in a plurality of rows.
  • the micro-images 502, 504, and 506 may be arranged in a first row, the micro-images 508, 510, 512 in a second row and the micro-images 514, 516 and 518 in a third row of the image 500.
  • Each of the micro-images may comprise a plurality of object points associated with one or more objects of the image 500.
  • the plurality of object points are represented by the plurality of small boxes in FIGURE 5.
  • the micro-image 502 may include object points such as object points 520, 522, 524.
• the object points 520, 522, 524 may be associated with distinct portions of an object in the image, and are thus depicted in different shades/colors.
  • the object points corresponding to the object points 520, 522, 524 in the micro-images 504, 506, 508, 510, 512, 514, 516 and 518 are shown in same color as that of the object points 520, 522, 524.
  • the image portion 500 may be a portion of the image 102 (FIGURE 1 ).
  • the image portion 500 of the image 102 comprises a rolling shutter artifact.
  • the object points in the neighboring microimages may be shifted by unequal distances.
  • the shift (or the disparity) between the corresponding object points in the micro-images of the first row and the second row may be different from the shift between the corresponding object points in the micro-images of the first row and the third row.
• the first disparity value D11 is 3 pixels
  • the first disparity value D22 is 3 pixels
  • the second disparity value D12 is 3 pixels in y-direction and 2 pixels in x-direction.
  • the first disparity value D33 is 3 pixels
• the second disparity value D13 is 3 pixels in the y-direction and 4 pixels in the x-direction.
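Applying the comparison of the first and second disparity values to the FIGURE 5 numbers above yields the expected detection (a sketch; the per-direction representation of the shifts is an assumption):

```python
# Disparity values read off the FIGURE 5 example.
D11 = 3                   # first disparity (same row), pixels
D12 = {"y": 3, "x": 2}    # second disparity, first row vs second row
D13 = {"y": 3, "x": 4}    # second disparity, first row vs third row

# In an artifact-free image (as in FIGURE 4) the extra x-shift would be zero;
# here it grows by 2 pixels per row, the rolling shutter signature.
extra_x_shift = [D12["x"], D13["x"]]
rolling_shutter_present = any(s != 0 for s in extra_x_shift)
print(rolling_shutter_present, extra_x_shift)  # True [2, 4]
```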
  • the rolling shutter artifact may be corrected by utilizing a patch-based method.
  • the patch-based method for correcting the rolling shutter artifact is described in detail with reference to FIGURE 6.
  • FIGURE 6 illustrates an example representation of image defect determination and correction in an image, for example, the image 500 (FIGURE 5), in accordance with an example embodiment.
  • the image 500 may be associated with an image defect such as rolling shutter artifact.
• the image defect may be corrected by selecting patches in the micro-images associated with the plurality of micro-images of the image 500, and concatenating the selected patches to generate the reconstructed image, for example a reconstructed image 600 (illustrated with reference to FIGURE 6).
  • corresponding patches may be selected from the corresponding micro-images of the image 500.
  • patches 602, 604, 606, 608, 610, 612, 614, 616, and 618 in the reconstructed image 600 may be selected from the micro-images 502, 504, 506, 508, 510, 512, 514, 516 and 518, respectively.
  • the patches may be selected from the plurality of micro-images based on the first disparity value and the second disparity value of the object points.
  • the patches selected for reconstructing the first row of the image 600 may be selected from the micro-images 502, 504, and 506 of the first row of the image 500 based on the first disparity value.
• the patches that may be shifted by a value equal to the first disparity value may be selected from the first micro-image to form the first row of the reconstructed image 600.
  • the patches that may be shifted by a value equal to the second disparity value in the second row may form the second row of the reconstructed image 600.
• the patches 608, 610, 612 in the neighboring rows are determined to be shifted from the corresponding patches in the first row by an amount equal to the second disparity values. For example, if the second disparity value between the micro-image 502 associated with the first row and the at least one third micro-image 508, 510, 512 associated with the neighboring row is 'x1' number of pixels, then while reconstructing the image, the patch may be selected from the second row at a distance that is shifted by 'x1' pixels from the center of the at least one third micro-image.
• the patch may be selected from a distance that is shifted by 'x2' pixels from the center of the at least one third micro-image.
  • the reconstructed image 600 accounts for the x-shift in the object points.
• performing patch-based correction in an image having both static and moving portions may result in misalignment of static portions of the image in the reconstructed image.
  • the reconstruction of the image may be performed based on a detection of matching pixels in the plurality of micro-images instead of the matching of the patches.
• the detection of matching pixels and thereafter performing correction based on matching the corresponding pixels facilitates precluding the misalignment of static portions of the image in the reconstructed images.
  • selection of one of the patch-based reconstruction and the pixel-based reconstruction may be performed based on a depth information of the plurality of pixels associated with the image.
  • the depth information of the plurality of pixels associated with the image may be determined.
  • the patch-based method may be utilized for generating the reconstructed image.
  • the image reconstruction may be performed based on the pixel-based method.
• the pixel-based method may include selecting a pixel associated with a nil x-shift (e.g., a pixel with zero x-shift) and utilizing the selected pixel in reconstructing the image.
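Both choices may be sketched as below, assuming a simple depth-variance heuristic for selecting between the patch-based and pixel-based methods (the threshold and the candidate representation are illustrative assumptions):

```python
import numpy as np

def choose_reconstruction(depth_map, variance_threshold=1.0):
    """Pick patch-based reconstruction when the local depth is roughly
    uniform, otherwise fall back to the pixel-based method (heuristic)."""
    return "patch" if np.var(depth_map) < variance_threshold else "pixel"

def select_nil_x_shift_pixel(candidates):
    """Pixel-based method: from (x_shift, value) candidates across the
    micro-images, keep the pixel whose x-shift is closest to zero."""
    return min(candidates, key=lambda c: abs(c[0]))[1]

flat_depth = np.full((4, 4), 5.0)                 # uniform depth region
stepped_depth = np.array([[1.0, 1.0], [9.0, 9.0]])  # strong depth edge
print(choose_reconstruction(flat_depth))     # patch
print(choose_reconstruction(stepped_depth))  # pixel
print(select_nil_x_shift_pixel([(2, 50), (0, 42), (-1, 47)]))  # 42
```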
• FIGURE 7 is a flowchart depicting an example method 700 for detecting an image defect in an image, in accordance with an example embodiment.
  • the method 700 includes detecting the image defect in the image based on disparity values associated with one or more object points in the image.
  • the method 700 depicted in the flow chart may be executed by, for example, the apparatus 300 of FIGURE 3.
  • the method 700 includes facilitating receipt of an image such as an image 102 (FIGURE 1 ) of a scene.
• the image 102 may be received from a media capturing device including two sensors and related components, or from external sources such as a DVD, a Compact Disk (CD), a flash drive, or a memory card, or received from external storage locations through the Internet, Bluetooth®, and the like.
  • the image 102 may include a plurality of micro-images associated with the scene.
  • the plurality of micro-images may be associated with a plurality of views of the scene.
  • the plurality of micro-images may be arranged in a plurality of rows in the image.
  • the image may include a plurality of object points associated with one or more objects of the scene.
  • the plurality of object points may comprise pixels.
  • the object points detected in a first micro-image may correspond to similar object points in the plurality of micro-images.
  • the method 700 includes determining a first disparity value between an object point in a first micro-image and a corresponding object point in at least one second micro-image.
• the first micro-image may include three object points I1, I2, and I3, and a neighboring micro-image in the same row may include corresponding object points I1', I2', and I3', respectively.
  • the first micro-image and the at least one second micro-image may be associated with a row of the plurality of rows.
  • each of the plurality of micro-images may be scanned by suitable object detection techniques to detect corresponding object points in the remaining micro-images.
  • the first disparity value may be determined by determining a shift in a position of the corresponding object point associated with the at least one second micro-image with respect to a position of the object point in the first micro-image.
  • the first disparity value may include a plurality of disparity values, for example, the disparity values associated with an object point in a micro-image in the row and the corresponding object point in other microimages associated with the same row.
  • the method 700 includes determining a second disparity value between the object point in the first micro-image and a corresponding object point in at least one third micro-image.
  • the second disparity value may be determined by detecting a shift in a position of the corresponding object point associated with the at least one third micro-image with respect to a position of the object point in the first micro-image.
• the at least one third micro-image is associated with at least one neighboring row of the row comprising the first micro-image. For example, if the first micro-image belongs to the first row of the plurality of rows, then the at least one third micro-image may be associated with one of the plurality of neighboring rows, for example the second row, the third row, and the like.
  • the second disparity value may include a plurality of disparity values, for example, the disparity values associated with an object point in a micro-image in the first row and the corresponding object point in the micro-images associated with the neighboring rows.
  • the method 700 includes comparing the first disparity value and the second disparity value to determine presence of an image defect in the image.
• if the first disparity value is determined to be equal to the second disparity value, an absence of the rolling shutter artifact may be determined in the image. If, however, the first disparity value is not determined to be equal to the second disparity value, then the rolling shutter artifact may be determined to be present in the image.
• An example embodiment describing equal values for the first disparity and the second disparity is explained with reference to FIGURE 4, and an example embodiment describing unequal values is explained with reference to FIGURE 5.
  • the rolling shutter artifact may be corrected in the image based on the first disparity value and the second disparity value. An example embodiment for the correction of the rolling shutter artifact in the image is explained with reference to FIGURE 6.
  • FIGURE 8 is a flowchart depicting an example method 800, in accordance with another example embodiment.
  • the method 800 depicted in the flow chart may be executed by, for example, the apparatus 300 of FIGURE 3.
  • the method 800 includes detecting a presence of image defect such as rolling shutter artifact in images, and correcting the same.
• Various operations described in the method 800 may be performed on an image of a scene captured by a multi-baseline camera, an array camera, a plenoptic camera or a light-field camera.
  • the method 800 includes facilitating access of an image, for example, a light-field image.
  • the image may be captured by a stereo camera.
  • the image may be captured by a multi-baseline camera, an array camera, a plenoptic camera or a light-field camera.
  • an example of the image may be the image 102 (FIGURE 1 ).
  • the image may be received by the apparatus 300 or otherwise captured by the sensors.
  • the image may include a plurality of micro-images of the scene.
  • the plurality of micro-images may correspond to images of the scene captured from different viewpoints.
  • the micro-images may be arranged in a plurality of rows, for example, a first row, a second row and so on.
  • each of the plurality of micro-images may include a plurality of object points associated with one or more object of the scene.
  • the objects points may comprise pixels associated with the one or more objects.
  • the method 800 includes detecting, corresponding to a first object point associated with a first micro-image, an object point in the at least one second micro-image and the at least one third micro-image.
• the first micro-image and the second micro-image may belong to the same row.
• each of the object points belonging to a micro-image may have copies thereof in the rest of the micro-images.
  • the micro-images belonging to the same row as well as neighboring rows may be scanned to determine the corresponding object points.
  • a first disparity value may be determined between the object point in the first micro-image and the corresponding object point in the at least one second micro-image.
  • the first disparity value may correspond to the disparity value between the corresponding object points belonging to the micro-images of the same row.
• a similar object point in at least one third micro-image is detected.
• the first micro-image and the third micro-image may belong to neighboring rows of micro-images.
  • a second disparity value may be determined between the at least one object point in the first micro-image and the corresponding object point in the at least one third micro-image.
  • the second disparity value may correspond to the disparity value between the corresponding object points belonging to the micro-images of the neighboring rows.
  • the first disparity value may be compared with the second disparity value.
  • comparison of the first disparity value with the second disparity value may correspond to a comparison of a shift between the corresponding object points in the micro-images belonging to the same row and neighboring rows, respectively.
• the comparison between the first disparity value and the second disparity value may refer to the shift between the corresponding object points of the micro-images 402 and 404 (FIGURE 4).
  • the rolling shutter artifact may be corrected at block 818.
  • the rolling shutter artifact may be corrected based on the first disparity value and the second disparity value. For example, for reconstructing an image devoid of the rolling shutter artifact, a patch comprising the object point may be selected in a micro-image, and a similar patch may be searched in other micro-images. The searched patches belonging to the other micro-images may be shifted by an amount equal to one of the first disparity value and the second disparity value due to the presence of the rolling shutter artifact in the image.
  • the patches from the micro-images belonging to the same row may be determined by the first disparity value, and the patches from the micro-images belonging to the neighboring row may be determined by the second disparity value.
  • the selected patches may be concatenated to generate a reconstructed image.
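The concatenation of the selected patches into a reconstructed image may be sketched as follows (the grid layout and patch size are illustrative assumptions):

```python
import numpy as np

def reconstruct_from_patches(patch_rows):
    """Concatenate the selected per-micro-image patches row by row
    to form the corrected image."""
    return np.vstack([np.hstack(row) for row in patch_rows])

# A 3x3 grid of 2x2 patches, each filled with its micro-image index.
patches = [[np.full((2, 2), 3 * r + c) for c in range(3)] for r in range(3)]
image = reconstruct_from_patches(patches)
print(image.shape)  # (6, 6)
```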
  • the reconstruction of the image may be performed based on a detection of matching pixels in the plurality of micro-images instead of matching the patches.
• performing patch-based correction in an image having both still and moving portions may result in misalignment of static portions of the image in the reconstructed image.
• the detection of matching pixels and thereafter performing correction based on matching the corresponding pixels facilitates precluding the misalignment of the static portions of the image in the reconstructed images.
  • a depth information of the plurality of pixels associated with the image may be determined.
  • the patch-based method may be utilized for generating the reconstructed image.
  • the reconstruction may be performed based on the pixel-based method.
  • the pixel-based method may include selecting at least one first corresponding object point from the at least one second micro-image based on the first disparity value, and at least one second corresponding object point from the at least one third micro-image based on the second disparity value.
  • the selected at least one first corresponding object point and the at least one second corresponding object point may be concatenated to generate a corrected (or reconstructed) image.
  • the correction of the rolling shutter artifact and the generation of the reconstructed image are explained in detail with reference to FIGURE 6.
  • the methods depicted in these flow charts may be executed by, for example, the apparatus 300 of FIGURE 3.
  • Operations of the flowchart, and combinations of operations in the flowcharts, may be implemented by various means, such as hardware, firmware, processor, circuitry and/or other device associated with execution of software including one or more computer program instructions.
  • one or more of the procedures described in various embodiments may be embodied by computer program instructions.
  • the computer program instructions, which embody the procedures, described in various embodiments may be stored by at least one memory device of an apparatus and executed by at least one processor in the apparatus.
  • Any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus embodies means for implementing the operations specified in the flowchart.
  • These computer program instructions may also be stored in a computer-readable storage memory (as opposed to a transmission medium such as a carrier wave or electromagnetic signal) that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the operations specified in the flowchart.
  • the computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions, which execute on the computer or other programmable apparatus provide operations for implementing the operations in the flowchart.
  • the operations of the methods are described with help of apparatus 300. However, the operations of the methods can be described and/or practiced by using any other apparatus.
  • a technical effect of one or more of the example embodiments disclosed herein is to detect image defects such as rolling shutter artifact (for example, in stereoscopic images) of a scene, where there exists a disparity between the objects in the images.
  • Various embodiments provide techniques for correcting the rolling shutter artifact in the image. For instance, a first disparity value between various object points belonging to a same row of micro-images associated with the image may be estimated. Similarly, a second disparity value between various object points belonging to neighboring rows of micro-images may be estimated. The first disparity value may be compared with the second disparity value to determine presence of the rolling shutter artifact.
  • the rolling shutter artifact may be corrected by selecting patches/pixels associated with the object point shifted by a distance equal to the first disparity value (for reconstructing the same row of the image) and the second disparity value (for reconstructing the neighboring row of the image).
  • the selected patches/pixels may be concatenated to generate the reconstructed image that is devoid of rolling shutter artifact.
  • Various embodiments described above may be implemented in software, hardware, application logic or a combination of software, hardware and application logic.
  • the software, application logic and/or hardware may reside on at least one memory, at least one processor, an apparatus, or a computer program product.
  • the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media.
  • a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of an apparatus described and depicted in FIGURES 2 and/or 3.
  • a computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
  • the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.
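The detection logic described above, comparing the first disparity value (between corresponding object points of micro-images in the same row) with the second disparity value (between corresponding object points of micro-images in neighboring rows), can be sketched as follows. This is an illustrative sketch only: the 1-D micro-image representation, the SAD-based matching, and the function names are assumptions for clarity, not the patent's implementation.

```python
def estimate_disparity(reference, candidate, max_shift):
    """Estimate the shift (in pixels) that best aligns `candidate`
    with `reference`, by minimising the mean sum of absolute
    differences (SAD) over a window of candidate shifts."""
    best_shift, best_cost = 0, float("inf")
    n = len(reference)
    for shift in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                cost += abs(reference[i] - candidate[j])
                count += 1
        cost /= max(count, 1)  # normalise by the size of the overlap
        if cost < best_cost:
            best_cost, best_shift = cost, shift
    return best_shift


def rolling_shutter_detected(d_same_row, d_neighbor_row, tolerance=0):
    """The artifact manifests as a mismatch between the disparity
    observed within a row of micro-images and the disparity observed
    across neighboring rows, which are exposed at different times."""
    return abs(d_same_row - d_neighbor_row) > tolerance
```

For a static scene the two disparity values agree and no artifact is flagged; a moving object shifts between row exposures, so the second disparity value deviates from the first and the artifact is detected.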
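The reconstruction step, selecting from each micro-image the patch (or single pixel) around the corresponding object point, shifted by the appropriate disparity value, and concatenating the selections, can be sketched as below. The 1-D micro-images, the uniform per-micro-image shift, and the function names are simplifying assumptions for illustration.

```python
def reconstruct_row(micro_images, patch_start, disparity, patch_width):
    """Patch-based reconstruction of one output row: from the k-th
    micro-image, take the patch around the object point shifted by
    k * disparity, then concatenate the patches."""
    row = []
    for k, micro in enumerate(micro_images):
        start = patch_start + k * disparity
        row.extend(micro[start:start + patch_width])
    return row


def reconstruct_row_pixelwise(micro_images, pixel_index, disparity):
    """Pixel-based variant: select the single corresponding object
    point from each micro-image (a patch width of one pixel), which
    avoids the static-portion misalignment that patch-based
    correction can introduce in scenes with mixed still and moving
    content."""
    return reconstruct_row(micro_images, pixel_index, disparity, 1)
```

In the patent's scheme the disparity passed here would be the first disparity value when concatenating micro-images of the same row and the second disparity value when concatenating micro-images of neighboring rows.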

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

In an example embodiment, a method, apparatus and computer program product are provided. The method comprises accessing an image comprising a plurality of micro-images arranged in rows and a plurality of object points associated with a scene. A first disparity value is determined based on an object point of the plurality of object points in a first micro-image and a corresponding object point in at least one second micro-image, and a second disparity value is determined based on the object point in the first micro-image and a corresponding object point in at least one third micro-image. The first micro-image and the at least one second micro-image belong to the same row. The first micro-image and the at least one third micro-image belong to neighboring rows of the plurality of rows. The first disparity value is compared with the second disparity value to determine whether defects are present in the image.
PCT/FI2014/050776 2013-10-15 2014-10-14 Procédé, appareil et produit-programme informatique pour la détection et la correction de défauts d'image WO2015055892A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN4640/CHE/2013 2013-10-15
IN4640CH2013 IN2013CH04640A (fr) 2013-10-15 2014-10-14

Publications (1)

Publication Number Publication Date
WO2015055892A1 true WO2015055892A1 (fr) 2015-04-23

Family

ID=52827708

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2014/050776 WO2015055892A1 (fr) 2013-10-15 2014-10-14 Procédé, appareil et produit-programme informatique pour la détection et la correction de défauts d'image

Country Status (2)

Country Link
IN (1) IN2013CH04640A (fr)
WO (1) WO2015055892A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130128081A1 (en) * 2009-01-20 2013-05-23 Todor G. Georgiev Methods and Apparatus for Reducing Plenoptic Camera Artifacts
US20130120356A1 (en) * 2010-03-03 2013-05-16 Todor G. Georgiev Methods, Apparatus, and Computer-Readable Storage Media for Depth-Based Rendering of Focused Plenoptic Camera Data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BISHOP, T. ET AL.: "The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 34, August 2011 (2011-08-01) *

Also Published As

Publication number Publication date
IN2013CH04640A (fr) 2015-04-24

Similar Documents

Publication Publication Date Title
US9524556B2 (en) Method, apparatus and computer program product for depth estimation
US11210799B2 (en) Estimating depth using a single camera
US9542750B2 (en) Method, apparatus and computer program product for depth estimation of stereo images
US9443130B2 (en) Method, apparatus and computer program product for object detection and segmentation
US10003743B2 (en) Method, apparatus and computer program product for image refocusing for light-field images
US9478036B2 (en) Method, apparatus and computer program product for disparity estimation of plenoptic images
US9892522B2 (en) Method, apparatus and computer program product for image-driven cost volume aggregation
US10176558B2 (en) Method, apparatus and computer program product for motion deblurring of images
US9245315B2 (en) Method, apparatus and computer program product for generating super-resolved images
US20150170370A1 (en) Method, apparatus and computer program product for disparity estimation
EP3061071A1 (fr) Procédé, appareil et produit programme informatique permettant de modifier l'éclairage dans une image
US9400937B2 (en) Method and apparatus for segmentation of foreground objects in images and processing thereof
US20130083002A1 (en) Methods and apparatus for conditional display of a stereoscopic image pair
EP2750391B1 (fr) Procédé, appareil et produit de programme informatique pour le traitement d'images
US20140028878A1 (en) Method, apparatus and computer program product for processing of multimedia content
US10097807B2 (en) Method, apparatus and computer program product for blending multimedia content
WO2015055892A1 (fr) Procédé, appareil et produit-programme informatique pour la détection et la correction de défauts d'image
US9691127B2 (en) Method, apparatus and computer program product for alignment of images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14853939

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14853939

Country of ref document: EP

Kind code of ref document: A1