EP3834170B1 - Apparatus and methods for generating high dynamic range media, based on multi-stage compensation of motion


Info

Publication number
EP3834170B1
Authority
EP
European Patent Office
Prior art keywords
exposure
media
level
media frames
map
Prior art date
Legal status
Active
Application number
EP19860846.5A
Other languages
German (de)
French (fr)
Other versions
EP3834170A1 (en)
EP3834170A4 (en)
Inventor
Mandakinee Singh PATEL
Green Rosh K S
Anmol BISWAS
Bindigan Hariprasanna PAWAN PRASAD
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Publication of EP3834170A1
Publication of EP3834170A4
Application granted
Publication of EP3834170B1

Classifications

    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • H04N23/6811 Motion detection based on the image signal
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G06T2207/10144 Varying exposure
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • the present disclosure relates to the field of media processing devices suitable for processing two or more media of different exposures and more particularly to apparatus and methods for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene.
  • conventional methods disclose image sensors that can be used to capture images having rows of long exposure image pixel values interleaved with rows of short exposure image pixel values.
  • a combined long exposure image and a combined short exposure image may be generated using the long exposure and the short exposure values from interleaved image frames and the interpolated values from a selected one of the interleaved image frames.
  • the High Dynamic Range (HDR) images may be generated using the combined long exposure and short exposure images.
  • apparatus for obtaining a motion adaptive High Dynamic Range (HDR) image may be disclosed. Further, a motion degree of a first image and a second image taken using different exposure times may be calculated, and the motion compensation intensity may be adjusted based on the calculated motion degree.
  • the motion compensation intensity involves global motion compensation and/or local motion compensation.
  • the images subjected to compensation may be synthesized and output, so that the image having High Dynamic Range (HDR) may be obtained.
  • a ghost map comprises three entities: true ghosts caused by local object motion, false ghosts caused by improper image registration, and false ghosts caused by improper exposure alignment.
  • the conventional methods may consider the three entities together, thereby degrading the final output image quality.
  • the complexity of simultaneous correction of ghosts and halos may increase exponentially as the number of image frames increases.
  • de-ghosting methods may estimate a ghost map by first aligning the exposures of input images followed by a photometric difference. The conventional exposure alignment method may not be able to handle very bright and dark regions of the image that may result in false ghosts (i.e. regions detected erroneously as ghosts), which can in turn lead to reduced dynamic range in the output image.
  • FIGs. 1a and 1b illustrate a schematic diagram of an example scenario, where darker and saturated regions in an HDR image are reproduced with less detail using conventional methods. As depicted in FIG. 1b, the darker regions and the saturated regions are not reproduced with increased exposure and saturation. Further, the conventional methods may require more time to capture image frames of the scene. Further, as depicted in FIG. 1b, the ghost artefacts in the saturated region of the captured image are not rectified or removed in the HDR image.
  • hence, the conventional methods may not disclose methods to handle a plurality of parameters such as noise, halos, dynamic range, and ghosting for generating High Dynamic Range (HDR) media.
  • U.S. Patent Application 2013/028509 A1 discloses an apparatus and method for generating a High Dynamic Range (HDR) image from which a ghost blur is removed based on a multi-exposure fusion.
  • the apparatus may include an HDR weight map calculation unit to calculate an HDR weight map for multiple exposure frames that are received, a ghost probability calculation unit to calculate a ghost probability for each image by verifying a ghost blur for the multiple exposure frames, an HDR weight map updating unit to update the calculated HDR weight map based on the calculated ghost probability, and a multi-scale blending unit to generate an HDR image by reflecting the updated HDR weight map to the multiple exposure frames.
  • the principal object of the embodiments herein is to disclose apparatus and methods for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene.
  • Another object of the embodiments herein is to disclose apparatus and methods for handling a large number of input media with minimal Image Quality (IQ) tuning.
  • Another object of the embodiments herein is to disclose apparatus and methods for ghost modeling of media to model improper exposure alignment and image registration errors for avoiding false ghosts.
  • Another object of the embodiments herein is to disclose apparatus and methods for enhancing dynamic range of the media, particularly in saturated and dark regions of the scene.
  • the embodiments herein achieve apparatus and methods for generating a High Dynamic Range (HDR) media, based on multi-stage compensation of motion in a captured scene.
  • FIG. 2 illustrates a block diagram of apparatus 200 for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene, according to embodiments as disclosed herein.
  • the apparatus 200 includes a memory unit 202, a storage unit 206, a database 208, a display unit 210, and a processor 212. Further, the apparatus 200 includes a processing module 204, residing in the memory unit 202. The apparatus 200 can also be referred to hereinafter as an electronic device 200. When the machine readable instructions are executed by the processing module 204, the processing module 204 causes the electronic device 200 to acquire data associated with the electronic device 200 commissioned in the computing environment. Further, the processing module 204 causes the electronic device 200 to generate a High Dynamic Range (HDR) media, based on multi-stage compensation of motion in a captured scene.
  • Examples of the apparatus 200/electronic device 200 can be at least one of, but not limited to, a mobile phone, a smart phone, a tablet, a handheld device, a phablet, a laptop, a computer, a wearable computing device, a server, an Internet of Things (IoT) device, a vehicle infotainment system, a camera, a web camera, a digital single-lens reflex (DSLR) camera, a video camera, a digital camera, a mirror-less camera, a still camera, or any other device that comprises at least one camera.
  • the apparatus 200 may comprise other components such as input/output interface(s), communication interface(s) and so on (not shown).
  • the apparatus 200 may comprise a user application interface (not shown) and an application management framework (not shown) and an application framework (not shown) for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene.
  • the application framework can be a software library that provides a fundamental structure to support the development of applications for a specific environment.
  • the application framework may also be used in developing graphical user interface (GUI) and web-based applications.
  • an application management framework may be responsible for the management and maintenance of the application and definition of the data structures used in databases and data files.
  • the apparatus 200 can operate as a standalone device or as a connected (e.g., networked) device that connects to other computer systems/devices over a wired or wireless communication network. Further, the methods and apparatus described herein may be implemented on different computing devices that comprise at least one camera.
  • the apparatus 200 may detect/identify a scene and capture a plurality of frames.
  • the optical media comprising the captured plurality of frames is converted into an electric signal.
  • the structure of the apparatus 200 may include an optical system (i.e. a lens or image sensor), a photoelectric conversion system (i.e. a charge-coupled device (CCD), camera tube sensors, and so on) and circuitry (such as a video processing circuit).
  • the image sensor may output a lux value, which is a unit of illuminance of the reflected light intensity.
  • a color difference signal (U, V) may carry two components, hue and saturation, represented by Cr and Cb. Cr reflects the difference between the red part of the RGB input signal and the luminance signal.
  • the apparatus 200 may include a dual camera that may comprise two different image sensors such as at least one of, but not limited to, a Charge-Coupled Device (CCD) sensor, an active pixel sensor, a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an N-type Metal-Oxide-Semiconductor (NMOS, Live MOS) sensor, a Bayer filter sensor, a quadra sensor, a tetra sensor, a Foveon sensor, a 3CCD sensor, an RGB (Red Green Blue) sensor, and so on.
  • each image sensor may capture still image snapshots and/or video sequences.
  • each image sensor may include color filter arrays (CFAs) arranged on a surface of individual sensors or sensor elements.
  • the image sensors may be arranged in a line, a triangle, a circle, or another pattern.
  • the apparatus 200 may activate certain sensors and deactivate other sensors without moving any sensor.
  • the camera residing in the apparatus 200 may include functions such as automatic focus (autofocus or AF), automatic white balance (AWB), and automatic exposure control (AEC) to produce pictures or video that are in focus, spectrally balanced, and exposed properly.
  • AWB, AEC and AF are sometimes referred to herein as 3A convergence.
  • An optimal exposure period may be estimated using a light meter (not shown), and/or capturing one or more images by the image sensor.
  • the apparatus 200 is configured to align exposure of a plurality of media frames with a high exposure (Ih) level to a low exposure (Il) level, by selecting from a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene.
  • the exposures are aligned using a pixel based intensity correspondence method, such as at least one of, but not limited to, polynomial curve fitting and histogram matching, between the high exposure image and the low exposure image.
  • the registered plurality of media frames aligned to the low exposure (Il) level is compared with a media frame of low exposure (Il) level of a non-registered plurality of media frames comprising the plurality of exposure levels, to generate a first photometric difference map.
  • the comparison with the media frame of low exposure (Il) level is performed by taking a per pixel intensity difference between the two images, which in turn generates the photometric difference map.
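  • as an illustrative sketch (not part of the patent text), the histogram-matching exposure alignment and the per pixel photometric difference described above could look as follows in Python/NumPy, assuming 8-bit single-channel frames; the helper names align_exposure_histogram and photometric_difference are assumptions introduced here.

```python
import numpy as np

def align_exposure_histogram(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Map the intensities of `src` so that its histogram matches `ref` (8-bit, 1-channel)."""
    src_hist, _ = np.histogram(src.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(ref.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / src.size   # normalised CDF of the source image
    ref_cdf = np.cumsum(ref_hist) / ref.size   # normalised CDF of the reference image
    # For each source level, pick the reference level whose CDF value is closest.
    lut = np.interp(src_cdf, ref_cdf, np.arange(256))
    return lut[src].astype(np.uint8)

def photometric_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per-pixel absolute intensity difference between two frames of equal size."""
    return np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(np.uint8)

# Example use (frame names are hypothetical):
# aligned_h = align_exposure_histogram(registered_high, nonregistered_low)  # Ih -> Il level
# first_diff_map = photometric_difference(aligned_h, nonregistered_low)
```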
  • the apparatus 200 is configured to generate a first ghost map, based on the generated photometric difference map of the aligned plurality of media frames to the low exposure (I l ) level.
  • the generated first ghost map is similar to the photometric difference map between two images.
  • the apparatus 200 is configured to correct an exposure alignment error of a registered plurality of media frames with the low exposure (I l ) level, by performing a forward exposure alignment of the registered plurality of media frames with a low exposure (I l ) level to a high exposure (I h ) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (I l ) level.
  • the first ghost map is the difference between the exposure aligned image and the original image.
  • if there is an error in the exposure alignment, the difference/error is depicted as a ghost.
  • however, such a difference does not correspond to actual motion in the scene; it is an error (a false ghost). Accordingly, the error may need to be corrected.
  • the registered plurality of media frames aligned back to the low exposure (I l ) level is compared with the media frame of low exposure (I l ) level of the non-registered plurality of media frames comprising the plurality of exposure levels, to generate a second photometric difference map.
  • the generated second photometric difference map captures the error due to photometric (exposure) alignment.
  • the apparatus 200 is configured to correct a media registration error of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames comprising the plurality of exposure levels.
  • the media registration error is corrected by first generating edge maps of the registered inputs using methods such as a Canny edge detector, and then taking a photometric difference between the edge maps. This map is the error due to media registration.
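  • a minimal sketch of this registration-error step, using OpenCV's Canny detector on two registered 8-bit frames and differencing the edge maps; the thresholds and the helper name registration_error_map are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def registration_error_map(frame_a: np.ndarray, frame_b: np.ndarray,
                           low_thresh: int = 50, high_thresh: int = 150) -> np.ndarray:
    """Photometric difference between Canny edge maps of two registered frames.

    Large values indicate structure that did not line up after registration,
    i.e. the media registration error.
    """
    edges_a = cv2.Canny(frame_a, low_thresh, high_thresh)
    edges_b = cv2.Canny(frame_b, low_thresh, high_thresh)
    return cv2.absdiff(edges_a, edges_b)
```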
  • the apparatus 200 is configured to remove a plurality of false ghost artefacts in the at least one of, the corrected exposure alignment error, the corrected media registration error, and the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map.
  • the apparatus 200 is configured to generate a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts.
  • the plurality of false ghost artefacts is removed by setting the pixels detected as false ghosts to zero, which in turn effectively removes the false ghosts.
  • the apparatus 200 is configured to generate the HDR media, based on the generated second ghost map, wherein the HDR media is generated based on generating a weight map corresponding to the second ghost map associated with the corrected plurality of media frames and blending at least two corrected media frames using the weight maps.
  • the weight map is a function of the input image pixel intensities as well as the second ghost map. The weight map may be suppressed depending on the value of the ghost map, so that image blending does not take place in regions with motion and no ghost appears in the final image.
  • the exposure alignment of the media frame is performed using a histogram matching method.
  • aligning the exposure of the plurality of media frames with the high exposure (Ih) level to the low exposure (Il) level is performed using a polynomial fitting method.
  • correcting the media registration error comprises analyzing edge information of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames.
  • the apparatus 200 is configured to estimate high frequency noise present in the registered plurality of media frames after removing the false ghost artefacts.
  • the apparatus 200 is configured to enhance the registered plurality of media frames after removing the high frequency noise, to generate the second ghost map, by applying a morphological operation.
  • the morphological operation includes erosion for removing unwanted pixels and dilation to fill holes in the ghost map.
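  • a minimal sketch of the morphological clean-up, assuming an 8-bit ghost map and a 3x3 structuring element (the kernel size and iteration counts are illustrative choices, not taken from the patent):

```python
import cv2
import numpy as np

def refine_ghost_map(ghost_map: np.ndarray) -> np.ndarray:
    """Erode to remove unwanted isolated ghost pixels, then dilate to fill holes."""
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(ghost_map, kernel, iterations=1)   # remove isolated/noisy pixels
    dilated = cv2.dilate(eroded, kernel, iterations=1)    # fill small holes in the ghost map
    return dilated
```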
  • the apparatus 200 is configured to identify the contribution of pixels in the enhanced plurality of media frames by estimating the weight map using a pre-defined lookup table and the generated second ghost map.
  • the weight map can be a function of input image intensity and ghost map. The function is pre-determined as a look-up-table of pixel intensity values.
  • the weight maps are used to blend at least two corrected media frames.
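  • the lookup-table based weight estimation and two-frame blending can be sketched as below, assuming 8-bit frames and a ghost map normalised to [0, 1]; the triangular intensity lookup table is only an illustrative choice of the pre-determined function.

```python
import numpy as np

# Hypothetical lookup table of pixel intensity values: favour well-exposed mid-tones.
INTENSITY_LUT = 1.0 - np.abs((np.arange(256) / 255.0) - 0.5) * 2.0

def weight_map(frame: np.ndarray, ghost_map: np.ndarray) -> np.ndarray:
    """Weight each pixel by the intensity LUT, suppressed where the ghost map is high."""
    w = INTENSITY_LUT[frame]          # contribution from pixel intensity (LUT)
    return w * (1.0 - ghost_map)      # suppress blending in motion/ghost regions

def blend_two(frame_a, frame_b, ghost_a, ghost_b):
    """Blend two corrected media frames using their normalised weight maps."""
    wa = weight_map(frame_a, ghost_a)
    wb = weight_map(frame_b, ghost_b)
    total = wa + wb + 1e-6            # avoid division by zero
    out = (wa * frame_a + wb * frame_b) / total
    return out.astype(np.uint8)
```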
  • the apparatus 200 is configured to enhance contrast of the blended media frames by adjusting a media curve to a higher dynamic range.
  • the apparatus 200 is configured to provide a first set of consecutive exposure frames of the non-registered plurality of media frames, to generate a first set of intermediate media frames from a plurality of first stage basis HDR modules, for processing the HDR media.
  • the apparatus 200 is configured to provide the first set of intermediate media frames from the plurality of first stage basis HDR modules to a plurality of second stage basis HDR modules to generate a second set of intermediate media frames.
  • the apparatus 200 is configured to iteratively provide the intermediate media frames from the first stage basis HDR modules up to the (N-1)th stage basis HDR modules to generate the HDR media.
  • the basis HDR modules are connected to each other in series, to operate on media frames that are captured consecutively, to remove ghost and halo artefacts.
  • the basis HDR modules comprise enhancing the dynamic range of the media frames over the multiple stages of basis HDR modules.
  • the basis HDR modules comprise tone mapping of the final HDR media suitable for the display and image compression stage.
  • the apparatus 200 is configured to boost exposure of the plurality of media frames with the low exposure (Il) level to obtain an aligned exposure (I'l) level. In an embodiment, the apparatus 200 is configured to suppress exposure of the plurality of media frames with the aligned exposure (I'l) level to obtain a suppressed exposure (I''l) level. In an embodiment, the apparatus 200 is configured to estimate the exposure alignment error by determining the difference between the media frames with the low exposure (Il) level and the suppressed exposure (I''l) level.
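  • the forward/backward error estimation above can be sketched as follows; treating the error as a per pixel absolute difference, and the generic align(src, ref) operator (e.g. the histogram matching helper sketched earlier), are assumptions made for illustration.

```python
from typing import Callable
import numpy as np

def exposure_alignment_error(I_l: np.ndarray, I_h: np.ndarray,
                             align: Callable[[np.ndarray, np.ndarray], np.ndarray]) -> np.ndarray:
    """Estimate the error introduced by exposure alignment of a low-exposure frame.

    The low-exposure frame is boosted to the high-exposure level (Il -> I'l),
    suppressed back to the low-exposure level (I'l -> I''l), and the per-pixel
    difference between Il and I''l is returned as the alignment error.
    """
    boosted = align(I_l, I_h)        # forward alignment (exposure boosting)
    roundtrip = align(boosted, I_l)  # backward alignment (exposure suppression)
    return np.abs(I_l.astype(np.int16) - roundtrip.astype(np.int16)).astype(np.uint8)
```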
  • FIG. 2 illustrates functional components of the computer implemented system.
  • the component may be a hardware component, a software component, or a combination of hardware and software. Some of the components may be application level software, while other components may be operating system level components.
  • the connection of one component to another may be a close connection where two or more components are operating on a single hardware platform. In other cases, the connections may be made over network connections spanning long distances.
  • Each embodiment may use different hardware, software, and interconnection architectures to achieve the functions described.
  • FIG. 3 illustrates a detailed view of a processing module as shown in FIG. 2 , comprising various modules, according to embodiments as disclosed herein.
  • the apparatus 200 comprises a processing module 204 stored in the memory unit 202 (depicted in FIG. 2 ).
  • the processing module 204 may comprise a plurality of sub modules.
  • the plurality of sub modules can comprise an exposure alignment module 302, a ghost map generation module 304, an exposure alignment error correction module 306, a media registration error correction module 308, a false ghost removal module 310, and an HDR media generation module 312.
  • the exposure alignment module 302 is configured to align exposure of a plurality of media frames with a high exposure (I h ) level to a low exposure (I l ) level, by selecting from a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene.
  • the plurality of exposure levels corresponding to the captured scene can be consecutive exposure levels.
  • the plurality of exposure levels corresponding to the captured scene can be non-consecutive exposure levels.
  • the registered plurality of media frames aligned to the low exposure (I l ) level is compared with a media frame of low exposure (I l ) level of a non-registered plurality of media frames comprising the plurality of exposure levels, to generate a first photometric difference map.
  • the ghost map generation module 304 is configured to generate a first ghost map, based on the generated photometric difference map of the aligned plurality of media frames to the low exposure (I l ) level.
  • the exposure alignment error correction module 306 is configured to correct an exposure alignment error of a registered plurality of media frames with the low exposure (I l ) level, by performing a forward exposure alignment of the registered plurality of media frames with a low exposure (I l ) level to a high exposure (I h ) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (I l ) level.
  • the registered plurality of media frames aligned back to the low exposure (I l ) level is compared with the media frame of low exposure (I l ) level of the non-registered plurality of media frames comprising the plurality of exposure levels, to generate a second photometric difference map.
  • the registration error correction module 308 is configured to correct a media registration error of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames comprising the plurality of exposure levels.
  • the false ghost removal module 310 is configured to remove a plurality of false ghost artefacts in the at least one of, the corrected exposure alignment error, the corrected media registration error, and the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map.
  • the ghost map generation module 304 is configured to generate a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts.
  • the HDR media generation module 312 is configured to generate the HDR media, based on the generated second ghost map.
  • the HDR media is generated based on generating a weight map corresponding to the second ghost map associated with the corrected plurality of media frames and blending at least two corrected media frames using the weight maps.
  • the embodiments herein can comprise hardware and software elements.
  • the embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc.
  • the functions performed by various modules described herein may be implemented in other modules or combinations of other modules.
  • a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • FIG. 4 illustrates a block diagram of basis HDR modules for generating multi-stage motion compensated High Dynamic Range (HDR) media, according to embodiments as disclosed herein.
  • a first set of consecutive exposure frames of the non-registered plurality of media frames is used, to generate a first set of intermediate media frames from a plurality of first stage basis HDR modules, for processing the HDR media.
  • the first set of intermediate media frames is provided by the plurality of first stage basis HDR modules to a plurality of second stage basis HDR modules to generate a second set of intermediate media frames.
  • the intermediate media frames from the first stage basis HDR modules are provided iteratively up to the (N-1)th stage basis HDR modules to generate the HDR media.
  • tone mapping is performed to convert the final HDR media output from the (N-1)th stage, to a form that may be suitable for displaying on the display unit 210 of the electronic device 200 or for image compression scenario in the electronic device 200.
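  • the patent does not fix a particular tone mapping operator; as a hedged placeholder, the sketch below applies the well-known Reinhard global operator to bring the fused HDR radiance into a displayable 8-bit range.

```python
import numpy as np

def reinhard_tonemap(hdr: np.ndarray, key: float = 0.18) -> np.ndarray:
    """Map linear HDR radiance to [0, 255] with the Reinhard global operator."""
    hdr = hdr.astype(np.float64)
    log_avg = np.exp(np.mean(np.log(hdr + 1e-6)))   # log-average luminance of the scene
    scaled = key * hdr / log_avg                    # scale the scene to the chosen key
    ldr = scaled / (1.0 + scaled)                   # compress highlights smoothly
    return np.clip(ldr * 255.0, 0, 255).astype(np.uint8)
```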
  • FIG. 5 illustrates a detailed view of a ghost modeling as shown in FIG. 4 , comprising various modules, according to embodiments as disclosed herein.
  • the apparatus 200 is configured to perform high quality ghost modeling within each basis HDR module.
  • the apparatus is configured to perform forward exposure alignment and backward exposure alignment of consecutive exposure media frames for estimation of an exposure alignment error.
  • the apparatus is configured to compute a first ghost map from the exposure aligned media frame and a captured media frame.
  • the apparatus is configured to compute edge information for estimation of a media registration error.
  • the apparatus is configured to estimate the exposure alignment error and the media registration error using the computed edge information and the exposure aligned media frames.
  • the apparatus is configured to generate a second ghost map using an estimated first ghost map and the false ghost artefacts, based on estimation of high frequency noise and false ghosts in the estimated media frame.
  • FIG. 6 illustrates a scalable architecture for generating multi-stage motion compensated High Dynamic Range (HDR) media, according to an example.
  • Embodiments herein comprise basis-HDR modules, which take a plurality of input exposures and provide intermediate outputs.
  • the multi-frame HDR media processing may be expediently divided into multiple stages that may have multiple basis HDR modules/blocks.
  • the multi-stage scalable architecture comprises basis-HDR modules connected in such a way that each module operates on media frames that may be captured consecutively, to minimize the amount of ghosting and halo artefacts. Further, a progressive improvement in dynamic range over multiple stages is achieved.
  • the common functionality of basis HDR may ensure scalability without the need for extensive IQ tuning.
  • the image/media I1, I2, I3, up to IN are consecutive exposure input image frames captured for HDR media generation.
  • the 'Bk' represents the basis HDR at the 'kth' stage for exposure input image 'I'.
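  • the scalable cascade can be sketched as follows, assuming a basis_hdr(frame_a, frame_b) callable that implements one basis HDR block (registration, exposure alignment, ghost modeling, weight map generation and blending); the function name and structure are illustrative, not taken from the patent.

```python
from typing import Callable, List
import numpy as np

def multi_stage_hdr(frames: List[np.ndarray],
                    basis_hdr: Callable[[np.ndarray, np.ndarray], np.ndarray]) -> np.ndarray:
    """Fuse N consecutive-exposure frames through N-1 stages of pairwise basis HDR blocks.

    Stage 1 blends consecutive input exposures (I1, I2), (I2, I3), ...; each later
    stage blends consecutive intermediate outputs of the previous stage, so the
    dynamic range improves progressively until a single HDR frame remains.
    """
    stage = frames
    while len(stage) > 1:
        stage = [basis_hdr(stage[k], stage[k + 1]) for k in range(len(stage) - 1)]
    return stage[0]
```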
  • FIGs. 7a and 7b illustrate a schematic diagram for generating the ghost map and the weight map respectively, according to an example.
  • the basis-HDR module may take a plurality of input exposures and comprises a plurality of image processing stages, such as a registration stage, an exposure alignment stage, a ghost modeling stage, a weight map generation stage, a blending stage, and a contrast enhancement stage.
  • the multi stage HDR may use multiple modules/blocks of basis HDR in each stage.
  • the basis HDR block may generate intermediate ghost-free and halo-free HDR output media/image frames.
  • Each basis HDR block may handle only two image frames at once, providing better control over ghost and halo artefacts.
  • the exposure alignment error correction module 306 may model errors incurred due to any exposure alignment methods.
  • the error estimation for an exposure boosting operation (exposure alignment from a lower to a higher exposure (Il → Ih)) may comprise an exposure boosting of Il to obtain I'l and an exposure suppression of I'l to obtain I''l; the exposure alignment error can then be estimated as the difference between Il and I''l.
  • the error estimation for an exposure suppression operation can be performed vice-versa.
  • the error estimation for the exposure suppression operation, i.e. exposure alignment from a higher to a lower exposure, may comprise an exposure suppression of Ih to obtain I'h and an exposure boosting of I'h to obtain I''h; the exposure alignment error can then be estimated as the difference between Ih and I''h.
  • Further, a histogram matching method may be used as the exposure alignment operator during exposure alignment of the media frames.
  • FIG. 8 illustrates a schematic diagram of an example scenario for generating high speed auto HDR image, according to an example.
  • a triple exposure HDR image capture is performed by the electronic device 200.
  • the multiple exposure image frames are captured at time instants for example, t, t+1, t+2, and so on.
  • the successive image frames may be blended together that may progressively propagate to the next stage as depicted in FIG. 8 .
  • tone mapping is performed to generate the final HDR image output.
  • when the image comprises over-exposed regions, details in the image may be enhanced using the basis HDR modules, as shown in FIG. 9a.
  • further, noise in the image is reduced, as shown in FIG. 9a.
  • FIG. 9b illustrates a schematic diagram of an example scenario for generating massive HDR image, according to an example.
  • 13 frames may be used for a multi-exposure massive HDR image.
  • the basis HDR modules may be scaled to produce a massive HDR image and may ensure minimal ghosting artefacts with a higher dynamic range.
  • FIG. 9c illustrates a schematic diagram of an example scenario for recording HDR video, according to an example.
  • a high quality HDR video recording using multiple basis HDR modules is generated by exposing alternate frames differently, to obtain a progressive high dynamic range video.
  • FIG. 10a is a schematic diagram of an example scenario of removed ghost artefacts in saturated regions of the captured image, according to an example.
  • the true ghost artefacts, such as hand motion of a human, sunlight, and vehicle motion, are eliminated in the generated HDR media, as shown in FIG. 10a.
  • FIG. 10b is a schematic diagram of an example scenario of removed ghost artefacts in dark regions of the captured image, according to an example.
  • the images may include darker regions, and the darker regions are enhanced with higher exposure as shown in FIG. 10b .
  • the details in the image are reproduced appropriately by basis HDR modules and the execution time for HDR image is comparatively reduced. For example, 4 frames may be executed in 650 milliseconds, 7 frames may be executed in 850 milliseconds, and 10 frames may be executed in 1200 milliseconds.
  • FIG. 11a is a flow chart depicting a method 1100a for generating a High Dynamic Range (HDR) media, based on the multi-stage compensation of motion in the captured scene, according to embodiments as disclosed herein.
  • HDR High Dynamic Range
  • the method 1100a includes aligning an exposure of a plurality of media frames with a high exposure (Ih) level to a low exposure (Il) level, by selecting from a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene.
  • the method 1100a includes generating a first ghost map, based on the generated photometric difference map of the aligned plurality of media frames to the low exposure (Il) level.
  • the method 1100a includes correcting an exposure alignment error of a registered plurality of media frames with the low exposure (Il) level, by performing a forward exposure alignment of the registered plurality of media frames with a low exposure (Il) level to a high exposure (Ih) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (Il) level.
  • the method 1100a includes correcting a media registration error of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames comprising the plurality of exposure levels.
  • the method 1100a includes removing a plurality of false ghost artefacts in the at least one of, the corrected exposure alignment error, the corrected media registration error, and the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map.
  • the method 1100a includes generating a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts.
  • the method 1100a includes generating the HDR media, based on the generated second ghost map, wherein the HDR media is generated based on generating a weight map corresponding to the second ghost map associated with the corrected plurality of media frames and blending at least two corrected media frames using the weight maps.
  • method 1100a may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11a may be omitted.
  • FIG. 11b is a flow chart depicting a method 1100b for enhancing contrast of the blended media frames by adjusting a media curve to a higher dynamic range, according to embodiments as disclosed herein.
  • the method 1100b includes estimating high frequency noise in the registered plurality of media frames after removing the false ghost artefacts.
  • the method 1100b includes enhancing the registered plurality of media frames after removing the high frequency noise, to generate the second ghost map, by applying morphological operation.
  • the method 1100b includes identifying the contribution of pixels in the enhanced plurality of media frames by estimating the weight map using a pre-defined lookup table and the generated second ghost map.
  • the method 1100b includes enhancing contrast of the blended media frames by adjusting a media curve to a higher dynamic range.
  • method 1100b may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11b may be omitted.
  • FIG. 11c is a flow chart depicting a method 1100c for providing iteratively the intermediate media frames from the first stage basis HDR modules to (N-1)th stage basis HDR modules, according to embodiments as disclosed herein.
  • the method 1100c includes providing a first set of consecutive exposure frames of the non-registered plurality of media frames, to generate a first set of intermediate media frames from a plurality of first stage basis HDR modules, for processing the HDR media.
  • the method 1100c includes providing the first set of intermediate media frames from the plurality of first stage basis HDR modules to a plurality of second stage basis HDR modules to generate a second set of intermediate media frames.
  • the method 1100c includes providing iteratively the intermediate media frames from the first stage basis HDR modules to (N-1)th stage basis HDR modules to generate HDR media.
  • method 1100c may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11c may be omitted.
  • FIG. 11d is a flow chart depicting a method 1100d for estimating exposure alignment error, according to embodiments as disclosed herein.
  • the method 1100d includes boosting exposure of the plurality of media frames with the low exposure (Il) level to obtain an aligned exposure (I'l) level.
  • the method 1100d includes suppressing, by the processor 212, exposure of the plurality of media frames with the aligned exposure (I'l) level to obtain a suppressed exposure (I''l) level.
  • the method 1100d includes estimating the exposure alignment error by determining the difference between the media frames with the low exposure (Il) level and the suppressed exposure (I''l) level.
  • method 1100d may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11d may be omitted.
  • the embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the elements.
  • the elements shown in FIGs. 2, 3, 4, and 5 can be at least one of a hardware device, or a combination of a hardware device and a software module.

Description

    [Technical Field]
  • The present disclosure relates to the field of media processing devices suitable for processing two or more media of different exposures and more particularly to apparatus and methods for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene.
  • [Background Art]
  • In general, even though a camera sensor provides 10-bit depth images, several operations within an Image Signal Processor (ISP) may restrict the 10-bit image to an 8-bit depth image to save power. Accordingly, computational imaging techniques may recover the lost information (i.e. the bits reduced from 10 to 8), by processing images into a multi-frame High Dynamic Range (HDR) image. Further, multiple image frames of different exposures are captured and blended together to form a single High Dynamic Range (HDR) image frame.
  • Currently, conventional methods may use short and long exposure image frames, may significantly sacrifice parameters such as dynamic range and noise, and may also suffer from artefacts such as halos. Further, moving objects within the scene may affect image quality during multi-exposure fusion. Accordingly, if multi-exposure fusion is not handled properly, it may cause significant artefacts known as ghosts. Furthermore, the ghosting may increase with the number of image frames used for blending. Increasing the number of image frames for blending may improve dynamic range and noise, but at the cost of extreme ghosting in the image.
  • Further, conventional methods disclose image sensors that can be used to capture images having rows of long exposure image pixel values interleaved with rows of short exposure image pixel values. A combined long exposure image and a combined short exposure image may be generated using the long exposure and the short exposure values from interleaved image frames and the interpolated values from a selected one of the interleaved image frames. Further, the High Dynamic Range (HDR) images may be generated using the combined long exposure and short exposure images.
  • In yet another conventional method, an apparatus for obtaining a motion adaptive High Dynamic Range (HDR) image may be disclosed. Further, a motion degree of a first image and a second image taken using different exposure times may be calculated, and the motion compensation intensity may be adjusted based on the calculated motion degree. The motion compensation intensity involves global motion compensation and/or local motion compensation. The images subjected to compensation may be synthesized and output, so that the image having High Dynamic Range (HDR) may be obtained.
  • Further, conventional methods disclose a process for generating the high dynamic range (HDR) image from a bracketed image sequence, even in the presence of scene or camera motion. Further, a reference image and warped image(s) may be combined to create a radiance map representing the HDR image.
  • Additionally, the amount of ghosting may increase with the number of input image frames. The halos in the HDR image may increase due to increased exposure variance between input image frames. Further, global reference frame selection can introduce a large amount of ghosting and halos. Accordingly, a ghost map comprises three entities: true ghosts caused by local object motion, false ghosts caused by improper image registration, and false ghosts caused by improper exposure alignment. The conventional methods may consider the three entities together, thereby degrading the final output image quality. The complexity of simultaneous correction of ghosts and halos may increase exponentially as the number of image frames increases. Further, de-ghosting methods may estimate a ghost map by first aligning the exposures of input images followed by a photometric difference. The conventional exposure alignment method may not be able to handle very bright and dark regions of the image, which may result in false ghosts (i.e. regions detected erroneously as ghosts), which can in turn lead to a reduced dynamic range in the output image.
  • FIGs. 1a and 1b illustrate a schematic diagram of an example scenario, where darker and saturated regions in an HDR image are reproduced with less detail using conventional methods. As depicted in FIG. 1b, the darker regions and the saturated regions are not reproduced with increased exposure and saturation. Further, the conventional methods may require more time to capture image frames of the scene. Further, as depicted in FIG. 1b, the ghost artefacts in the saturated region of the captured image are not rectified or removed in the HDR image.
  • Hence, the conventional methods may not disclose methods to handle a plurality of parameters such as noise, halos, dynamic range, and ghosting for generating High Dynamic Range (HDR) media.
  • U.S. Patent Application 2013/028509 A1 discloses an apparatus and method for generating a High Dynamic Range (HDR) image from which a ghost blur is removed based on a multi-exposure fusion. The apparatus may include an HDR weight map calculation unit to calculate an HDR weight map for multiple exposure frames that are received, a ghost probability calculation unit to calculate a ghost probability for each image by verifying a ghost blur for the multiple exposure frames, an HDR weight map updating unit to update the calculated HDR weight map based on the calculated ghost probability, and a multi-scale blending unit to generate an HDR image by reflecting the updated HDR weight map to the multiple exposure frames.
  • [Disclosure of Invention] [Technical Problem]
  • The principal object of the embodiments herein is to disclose apparatus and methods for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene.
  • Another object of the embodiments herein is to disclose apparatus and methods for handling a large number of input media with minimal Image Quality (IQ) tuning.
  • Another object of the embodiments herein is to disclose apparatus and methods for ghost modeling of media to model improper exposure alignment and image registration errors for avoiding false ghosts.
  • Another object of the embodiments herein is to disclose apparatus and methods for enhancing dynamic range of the media, particularly in saturated and dark regions of the scene.
  • [Solution to Problem]
  • Aspects of the present disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
  • The invention is defined by the appended claims.
  • [Advantageous Effects of Invention]
  • The object of the embodiments herein is to disclose apparatus and methods for handling a large number of input media with minimal Image Quality (IQ) tuning.
  • Another object of the embodiments herein is to disclose apparatus and methods for ghost modeling of media to model improper exposure alignment and image registration errors for avoiding false ghosts.
  • Another object of the embodiments herein is to disclose apparatus and methods for enhancing dynamic range of the media, particularly in saturated and dark regions of the scene.
  • [Brief Description of Drawings]
  • Embodiments herein are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
    • FIGs. 1a and 1b illustrate a schematic diagram of an example scenario, where darker and saturated regions in an HDR image are reproduced with less detail using conventional methods;
    • FIG. 2 illustrates a block diagram of apparatus for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene, according to embodiments as disclosed herein;
    • FIG. 3 illustrates a detailed view of a processing module as shown in FIG. 2, comprising various modules, according to embodiments as disclosed herein;
    • FIG. 4 illustrates a block diagram of basis HDR modules for generating multi-stage motion compensated High Dynamic Range (HDR) media, according to embodiments as disclosed herein;
    • FIG. 5 illustrates a detailed view of a ghost modeling as shown in FIG. 4, comprising various modules, according to embodiments as disclosed herein;
    • FIG. 6 illustrates a scalable architecture for generating multi-stage motion compensated High Dynamic Range (HDR) media, according to an example not explicitly disclosing all features of the independent claims;
    • FIGs. 7a and 7b illustrate a schematic diagram for generating a ghost map and weight map respectively, according to examples not explicitly disclosing all features of the independent claims;
    • FIG. 8 illustrates a schematic diagram of an example scenario for generating high speed auto HDR image, according to an example not explicitly disclosing all features of the independent claims;
    • FIG. 9a illustrates a schematic diagram of an example scenario for generating low light de-noised HDR image, according to an example not explicitly disclosing all features of the independent claims;
    • FIG. 9b illustrates a schematic diagram of an example scenario for generating massive HDR image, according to an example not explicitly disclosing all features of the independent claims;
    • FIG. 9c illustrates a schematic diagram of an example scenario for recording HDR video, according to an example not explicitly disclosing all features of the independent claims;
    • FIG. 10a is a schematic diagram of an example scenario of removed ghost artefacts in saturated regions of the captured image, according to an example not explicitly disclosing all features of the independent claims;
    • FIG. 10b is a schematic diagram of an example scenario of removed ghost artefacts in dark regions of the captured image, according to an example not explicitly disclosing all features of the independent claims;
    • FIG. 11a is a flow chart depicting a method for generating a High Dynamic Range (HDR) media, based on the multi-stage compensation of motion in the captured scene, according to embodiments as disclosed herein;
    • FIG. 11b is a flow chart depicting a method for enhancing contrast of the blended media frames by adjusting a media curve to a higher dynamic range, according to embodiments as disclosed herein;
    • FIG. 11c is a flow chart depicting a method for providing iteratively the intermediate media frames from the first stage basis HDR modules to (N-1)th stage basis HDR modules, according to embodiments as disclosed herein; and
    • FIG. 11d is a flow chart depicting a method for estimating exposure alignment error, according to embodiments as disclosed herein.
    [Mode for the Invention]
  • The example embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The description herein is intended merely to facilitate an understanding of ways in which the example embodiments herein can be practiced and to further enable those of skill in the art to practice the example embodiments herein. Accordingly, this disclosure should not be construed as limiting the scope of the example embodiments herein.
  • The embodiments herein achieve apparatus and methods for generating a High Dynamic Range (HDR) media, based on multi-stage compensation of motion in a captured scene. Referring now to the drawings, and more particularly to FIGs. 2 through 11d, where similar reference characters denote corresponding features consistently throughout the figures, there are shown example embodiments.
  • FIG. 2 illustrates a block diagram of apparatus 200 for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene, according to embodiments as disclosed herein.
  • The apparatus 200 includes a memory unit 202, a storage unit 206, a database 208, a display unit 210, and a processor 212. Further, the apparatus 200 includes a processing module 204, residing in the memory unit 202. The apparatus 200 can also be referred to hereinafter as an electronic device 200. When the machine readable instructions are executed by the processing module 204, the processing module 204 causes the electronic device 200 to acquire data associated with the electronic device 200 commissioned in the computing environment. Further, the processing module 204 causes the electronic device 200 to generate a High Dynamic Range (HDR) media, based on multi-stage compensation of motion in a captured scene.
  • Examples of the apparatus 200/electronic device 200 can be at least one of, but not limited to, a mobile phone, a smart phone, a tablet, a handheld device, a phablet, a laptop, a computer, a wearable computing device, a server, an Internet of Things (IoT) device, a vehicle infotainment system, a camera, a web camera, a digital single-lens reflex (DSLR) camera, a video camera, a digital camera, a mirror-less camera, a still camera, or any other device that comprises at least one camera. The apparatus 200 may comprise other components such as input/output interface(s), communication interface(s) and so on (not shown). The apparatus 200 may comprise a user application interface (not shown) and an application management framework (not shown) and an application framework (not shown) for generating a High Dynamic Range (HDR) media, based on a multi-stage compensation of motion in a captured scene. The application framework can be a software library that provides a fundamental structure to support the development of applications for a specific environment. The application framework may also be used in developing graphical user interface (GUI) and web-based applications. Further, an application management framework may be responsible for the management and maintenance of the application and definition of the data structures used in databases and data files.
  • The apparatus 200 can operate as a standalone device or as a connected (e.g., networked) device that connects to other computer systems/devices over a wired or wireless communication network. Further, the methods and apparatus described herein may be implemented on different computing devices that comprise at least one camera.
  • The apparatus 200 may detect/identify a scene and capture a plurality of frames. The optical media comprising the captured plurality of frames is converted into an electrical signal. The structure of the apparatus 200 may include an optical system (i.e. a lens or image sensor), a photoelectric conversion system (i.e. a charge-coupled device (CCD), camera tube sensors, and so on) and circuitry (such as a video processing circuit). The image sensor may output a lux value, which is a unit of illuminance of the reflected light intensity. Further, a color difference signal (U, V) may carry two components, hue and saturation, represented by Cr and Cb. Cr reflects the difference between the red part of the RGB input signal and the luminance signal, and Cb reflects the difference between the blue part of the RGB input signal and the luminance signal. In an example, the apparatus 200 may include a dual camera that may comprise two different image sensors such as at least one of, but not limited to, a Charge-Coupled Device (CCD) sensor, an active pixel sensor, a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an N-type Metal-Oxide-Semiconductor (NMOS, Live MOS) sensor, a Bayer filter sensor, a quadra sensor, a tetra sensor, a Foveon sensor, a 3CCD sensor, an RGB (Red Green Blue) sensor, and so on.
  • Further, the image sensor may capture still image snapshots and/or video sequences. Also, each image sensor may include a color filter array (CFA) arranged on a surface of the individual sensor or sensor elements. The image sensors may be arranged in a line, a triangle, a circle, or another pattern. The apparatus 200 may activate certain sensors and deactivate other sensors without moving any sensor. The camera residing in the apparatus 200 may include functions such as automatic focus (autofocus or AF), automatic white balance (AWB), and automatic exposure control (AEC) to produce pictures or video that are in focus, spectrally balanced, and exposed properly. AWB, AEC and AF are sometimes referred to herein as 3A convergence. An optimal exposure period may be estimated using a light meter (not shown) and/or by capturing one or more images with the image sensor.
  • The apparatus 200 is configured to align the exposure of a plurality of media frames with a high exposure (Ih) level to a low exposure (Il) level, by selecting from a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene. The exposures are aligned using a pixel based intensity correspondence method, such as at least one of, but not limited to, polynomial curve fitting, histogram matching and so on, between the high exposure image and the low exposure image. In an embodiment, the registered plurality of media frames aligned to the low exposure (Il) level is compared with a media frame of low exposure (Il) level of a non-registered plurality of media frames comprising the plurality of exposure levels, to generate a first photometric difference map. The comparison with the media frame of low exposure (Il) level is performed by taking a per-pixel intensity difference between the two images, which in turn generates the photometric difference map.
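As an illustration of this step only (not the claimed implementation), the following sketch aligns a high-exposure frame to a low-exposure reference by histogram matching and then forms a per-pixel photometric difference map. The function names (align_exposure_histogram, photometric_difference) and the single-channel, 8-bit input assumption are illustrative.

```python
import numpy as np

def align_exposure_histogram(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Map the intensity histogram of `src` (e.g. Ih) onto that of `ref` (e.g. Il)."""
    src_values, bin_idx, src_counts = np.unique(src.ravel(),
                                                return_inverse=True,
                                                return_counts=True)
    ref_values, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts).astype(np.float64) / src.size
    ref_cdf = np.cumsum(ref_counts).astype(np.float64) / ref.size
    # For each source intensity, pick the reference intensity with the closest CDF value.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    return mapped[bin_idx].reshape(src.shape).astype(ref.dtype)

def photometric_difference(aligned: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-pixel absolute intensity difference, normalised to [0, 1]."""
    diff = np.abs(aligned.astype(np.float32) - reference.astype(np.float32))
    return diff / max(float(diff.max()), 1e-6)
```

For colour media, the same mapping would typically be applied per channel or on a luminance representation before taking the difference.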
  • The apparatus 200 is configured to generate a first ghost map, based on the generated photometric difference map of the plurality of media frames aligned to the low exposure (Il) level. The generated first ghost map is similar to the photometric difference map between the two images. In an embodiment, the apparatus 200 is configured to correct an exposure alignment error of the registered plurality of media frames with the low exposure (Il) level, by performing a forward exposure alignment of the registered plurality of media frames with the low exposure (Il) level to a high exposure (Ih) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (Il) level. The first ghost map is the difference between the exposure aligned image and the original image. For example, if there is an error in the exposure alignment, the difference/error is depicted as a ghost. However, such a ghost may not correspond to actual motion in the scene; it can be an error (false ghost) introduced by the exposure alignment itself. Accordingly, the error may need to be corrected.
  • The registered plurality of media frames aligned back to the low exposure (Il) level is compared with the media frame of low exposure (Il) level of the non-registered plurality of media frames comprising the plurality of exposure levels, to generate a second photometric difference map. The generated second photometric difference map captures the error due to the photometric (exposure) alignment. In an embodiment, the apparatus 200 is configured to correct a media registration error of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames comprising the plurality of exposure levels. The media registration error is corrected by first generating edge maps of the registered inputs using a method such as a Canny edge detector, and then taking a photometric difference between the edge maps. This map represents the error due to media registration.
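A minimal sketch of this edge-based registration error estimate follows, assuming OpenCV is available and the inputs are 8-bit grayscale frames; the Canny thresholds and the smoothing step are example choices, not values specified by the source.

```python
import cv2
import numpy as np

def registration_error_map(frame_a: np.ndarray, frame_b: np.ndarray,
                           low: int = 50, high: int = 150) -> np.ndarray:
    """Return a [0, 1] map that is large where the edges of the two frames disagree."""
    edges_a = cv2.Canny(frame_a, low, high)   # uint8 edge maps (0 or 255)
    edges_b = cv2.Canny(frame_b, low, high)
    diff = cv2.absdiff(edges_a, edges_b).astype(np.float32) / 255.0
    # Smooth slightly so isolated single-pixel disagreements are de-emphasised.
    return cv2.GaussianBlur(diff, (5, 5), 0)
```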
  • The apparatus 200 is configured to remove a plurality of false ghost artefacts in at least one of the corrected exposure alignment error, the corrected media registration error, and the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map. In an embodiment, the apparatus 200 is configured to generate a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts. The plurality of false ghost artefacts is removed by setting the pixels detected as false ghosts to zero, which effectively removes the false ghosts.
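The following sketch, under the assumption that all maps are normalised to [0, 1], zeroes out pixels of the first ghost map whose response is explained by the exposure alignment or registration error maps. The margin value and the function name are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def suppress_false_ghosts(first_ghost_map: np.ndarray,
                          exposure_error_map: np.ndarray,
                          registration_error_map: np.ndarray,
                          margin: float = 0.05) -> np.ndarray:
    """Ghost map with false ghosts (explained by alignment errors) set to zero."""
    explained = np.maximum(exposure_error_map, registration_error_map)
    ghost = first_ghost_map.copy()
    ghost[first_ghost_map <= explained + margin] = 0.0   # false ghost -> 0
    return ghost
```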
  • The apparatus 200 is configured to generate the HDR media based on the generated second ghost map, wherein the HDR media is generated by generating a weight map corresponding to the second ghost map associated with the corrected plurality of media frames and blending at least two corrected media frames using the weight maps. In an embodiment, the weight map is a function of the input image pixel intensities as well as of the second ghost map. The weight map may be suppressed depending on the value of the ghost map, so that image blending does not take place in regions with motion and hence no ghost appears in the final image.
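As a sketch of such ghost-aware blending, assuming a simple well-exposedness weight in place of the pre-defined look-up table used by the apparatus, the non-reference frame's weight is attenuated by the second ghost map before the two frames are blended. Names and constants are illustrative.

```python
import numpy as np

def blend_two_frames(reference: np.ndarray, other: np.ndarray,
                     ghost_map: np.ndarray) -> np.ndarray:
    """Blend `other` into `reference`, suppressing `other` where ghosts are detected."""
    ref = reference.astype(np.float32) / 255.0
    oth = other.astype(np.float32) / 255.0
    g = np.clip(ghost_map, 0.0, 1.0)
    if g.ndim < oth.ndim:                     # broadcast a 2-D map over colour channels
        g = g[..., np.newaxis]
    # Well-exposedness weight: favour mid-tone pixels of the non-reference frame.
    w = np.exp(-((oth - 0.5) ** 2) / (2 * 0.2 ** 2))
    w = w * (1.0 - g)                         # suppress blending in motion regions
    blended = (1.0 - w) * ref + w * oth
    return (np.clip(blended, 0.0, 1.0) * 255).astype(np.uint8)
```

The Gaussian well-exposedness term above merely stands in for the intensity look-up table described below; the key point is the multiplication by (1 - ghost map).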
  • The exposure alignment of the media frame is performed by using a histogram matching method. In an embodiment, aligning the exposure of the plurality of media frames with the high exposure (Ih) level to the low exposure (Il) level is performed by a polynomial fitting method. In an embodiment, correcting the media registration error comprises analyzing edge information of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames.
  • In an embodiment, the apparatus 200 is configured to estimate the high frequency noise present in the registered plurality of media frames after removing the false ghost artefacts. In an embodiment, the apparatus 200 is configured to enhance the registered plurality of media frames after removing the high frequency noise, to generate the second ghost map, by applying a morphological operation. The morphological operation includes erosion, for removing unwanted pixels, and dilation, for filling holes in the ghost map. In an embodiment, the apparatus 200 is configured to identify the contribution of pixels in the enhanced plurality of media frames by estimating the weight map using a pre-defined lookup table and the generated second ghost map. The weight map can be a function of the input image intensity and the ghost map; the function is pre-determined as a look-up table of pixel intensity values. In an embodiment, the weight maps are used to blend at least two corrected media frames. In an embodiment, the apparatus 200 is configured to enhance the contrast of the blended media frames by adjusting a media curve to a higher dynamic range.
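A possible realisation of the morphological clean-up of the ghost map, assuming OpenCV; the binarisation threshold, kernel size and iteration counts are example values only.

```python
import cv2
import numpy as np

def refine_ghost_map(ghost_map: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Refine a [0, 1] ghost map with erosion followed by dilation."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    binary = (ghost_map > 0.1).astype(np.uint8)            # example threshold
    cleaned = cv2.erode(binary, kernel, iterations=1)      # drop speckle (high frequency) noise
    cleaned = cv2.dilate(cleaned, kernel, iterations=2)    # fill holes inside motion regions
    return cleaned.astype(np.float32)
```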
  • In an embodiment, the apparatus 200 is configured to provide a first set of consecutive exposure frames of the non-registered plurality of media frames, to generate a first set of intermediate media frames from a plurality of first stage basis HDR modules, for processing the HDR media. In an embodiment, the apparatus 200 is configured to provide the first set of intermediate media frames from the plurality of first stage basis HDR modules to a plurality of second stage basis HDR modules to generate a second set of intermediate media frames. In an embodiment, the apparatus 200 is configured to iteratively provide the intermediate media frames from the first stage basis HDR modules up to the (N-1)th stage basis HDR modules to generate the HDR media. In an embodiment, the basis HDR modules are connected to each other in series, to operate on media frames that are captured consecutively, to remove ghost and halo artefacts. In an embodiment, the multiple stages of basis HDR modules progressively enhance the dynamic range of the media frames. In an embodiment, the basis HDR modules comprise tone mapping of the final HDR media suitable for the display and image compression stages.
  • In an embodiment, the apparatus 200 is configured to boost the exposure of the plurality of media frames with low exposure (Il) level to obtain an aligned exposure (I'l) level. In an embodiment, the apparatus 200 is configured to suppress the exposure of the plurality of media frames with aligned exposure (I'l) level to obtain a suppressed exposure (I"l) level. In an embodiment, the apparatus 200 is configured to estimate the exposure alignment error by determining the difference between the media frames with low exposure (Il) level and suppressed exposure (I"l) level, as |Il - I"l|.
  • FIG. 2 illustrates functional components of the computer implemented system. In some cases, the component may be a hardware component, a software component, or a combination of hardware and software. Some of the components may be application level software, while other components may be operating system level components. In some cases, the connection of one component to another may be a close connection where two or more components are operating on a single hardware platform. In other cases, the connections may be made over network connections spanning long distances. Each embodiment may use different hardware, software, and interconnection architectures to achieve the functions described.
  • FIG. 3 illustrates a detailed view of a processing module as shown in FIG. 2, comprising various modules, according to embodiments as disclosed herein.
  • The apparatus 200 comprises a processing module 204 stored in the memory unit 202 (depicted in FIG. 2). The processing module 204 may comprise a plurality of sub-modules. The plurality of sub-modules can comprise an exposure alignment module 302, a ghost map generation module 304, an exposure alignment error correction module 306, a media registration error correction module 308, a false ghost removal module 310, and a HDR media generation module 312.
  • In an embodiment, the exposure alignment module 302 is configured to align exposure of a plurality of media frames with a high exposure (Ih) level to a low exposure (Il) level, by selecting from a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene. In an embodiment herein, the plurality of exposure levels corresponding to the captured scene can be consecutive exposure levels. In an embodiment herein, the plurality of exposure levels corresponding to the captured scene can be non-consecutive exposure levels. In an embodiment, the registered plurality of media frames aligned to the low exposure (Il) level is compared with a media frame of low exposure (Il) level of a non-registered plurality of media frames comprising the plurality of exposure levels, to generate a first photometric difference map. In an embodiment, the ghost map generation module 304 is configured to generate a first ghost map, based on the generated photometric difference map of the aligned plurality of media frames to the low exposure (Il) level. In an embodiment, the exposure alignment error correction module 306 is configured to correct an exposure alignment error of a registered plurality of media frames with the low exposure (Il) level, by performing a forward exposure alignment of the registered plurality of media frames with a low exposure (Il) level to a high exposure (Ih) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (Il) level. In an embodiment, the registered plurality of media frames aligned back to the low exposure (Il) level is compared with the media frame of low exposure (Il) level of the non-registered plurality of media frames comprising the plurality of exposure levels, to generate a second photometric difference map.
  • In an embodiment, the media registration error correction module 308 is configured to correct a media registration error of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames comprising the plurality of exposure levels. In an embodiment, the false ghost removal module 310 is configured to remove a plurality of false ghost artefacts in at least one of the corrected exposure alignment error, the corrected media registration error, and the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map. In an embodiment, the ghost map generation module 304 is configured to generate a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts. In an embodiment, the HDR media generation module 312 is configured to generate the HDR media, based on the generated second ghost map. In an embodiment, the HDR media is generated based on generating a weight map corresponding to the second ghost map associated with the corrected plurality of media frames and blending at least two corrected media frames using the weight maps.
  • The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • FIG. 4 illustrates a block diagram of basis HDR modules for generating multi-stage motion compensated High Dynamic Range (HDR) media, according to embodiments as disclosed herein.
  • In an embodiment, a first set of consecutive exposure frames of the non-registered plurality of media frames is used to generate a first set of intermediate media frames from a plurality of first stage basis HDR modules, for processing the HDR media. In an embodiment, the first set of intermediate media frames is provided by the plurality of first stage basis HDR modules to a plurality of second stage basis HDR modules to generate a second set of intermediate media frames. In an embodiment, the intermediate media frames are provided iteratively from the first stage basis HDR modules up to the (N-1)th stage basis HDR modules to generate the HDR media. In an embodiment, tone mapping is performed to convert the final HDR media output from the (N-1)th stage to a form that may be suitable for displaying on the display unit 210 of the electronic device 200 or for image compression in the electronic device 200.
  • FIG. 5 illustrates a detailed view of the ghost modeling shown in FIG. 4, comprising various modules, according to embodiments as disclosed herein.
  • The apparatus 200 is configured to perform high quality ghost modeling within each basis HDR module. In an embodiment, the apparatus is configured to perform a forward exposure alignment and a backward exposure alignment of consecutive exposure media frames for estimation of an exposure alignment error. In an embodiment, the apparatus is configured to compute a first ghost map from the exposure aligned media frame and a captured media frame. In an embodiment, the apparatus is configured to compute edge information for estimation of a media registration error. In an embodiment, the apparatus is configured to estimate the exposure alignment error and the media registration error using the computed edge information and the exposure aligned media frames. In an embodiment, the apparatus is configured to generate a second ghost map using the estimated first ghost map and the false ghost artefacts, based on estimation of high frequency noise and false ghosts in the estimated media frame.
  • FIG. 6 illustrates a scalable architecture for generating multi-stage motion compensated High Dynamic Range (HDR) media, according to an example.
  • Embodiments herein comprise basis HDR modules, each of which takes a plurality of input exposures and provides intermediate outputs. The multi-frame HDR media processing may be expediently divided into multiple stages, each of which may have multiple basis HDR modules/blocks. The multi-stage scalable architecture comprises basis HDR modules connected in such a way that each module operates on media frames that may be captured consecutively, to minimize the amount of ghosting and halo artefacts. Further, a progressive improvement in dynamic range over multiple stages is achieved. The common functionality of the basis HDR module may ensure scalability without the need for extensive IQ tuning.
  • Further, as depicted in FIG. 6, the images/media frames I1, I2, I3, up to IN, are consecutive exposure input image frames captured for HDR media generation. Bk represents the basis HDR module at the kth stage for the exposure input image I.
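A schematic sketch of this scalable cascade follows, under the assumption that each basis HDR module fuses two consecutive (intermediate) frames; basis_hdr is a hypothetical placeholder for the whole per-pair pipeline (registration, exposure alignment, ghost modeling, weight map generation, blending) and is not defined by the source.

```python
from typing import Callable, List
import numpy as np

def multi_stage_hdr(frames: List[np.ndarray],
                    basis_hdr: Callable[[np.ndarray, np.ndarray], np.ndarray]) -> np.ndarray:
    """Cascade basis HDR modules over N consecutive exposures in N-1 stages."""
    stage = list(frames)                      # stage 0: I1 .. IN
    while len(stage) > 1:
        # Each stage fuses consecutive pairs, producing one fewer intermediate frame.
        stage = [basis_hdr(stage[i], stage[i + 1]) for i in range(len(stage) - 1)]
    return stage[0]                           # output of the (N-1)th stage
```

With N input exposures the cascade runs N-1 stages, which matches the progressive improvement in dynamic range described above.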
  • FIGs. 7a and 7b illustrate schematic diagrams for generating the ghost map and the weight map, respectively, according to an example.
  • Accordingly, the basis HDR module may take a plurality of input exposures and comprises a plurality of image processing stages, such as a registration stage, an exposure alignment stage, a ghost modeling stage, a weight map generation stage, a blending stage, and a contrast enhancement stage. Further, the multi-stage HDR may use multiple modules/blocks of basis HDR in each stage. The basis HDR block may generate intermediate ghost-free and halo-free HDR output media/image frames. Each basis HDR block may handle only two image frames at once, providing better control over ghost and halo artefacts.
  • Further, the exposure alignment error correction module 306 (as shown in FIG. 3) may model errors incurred due to any exposure alignment method. The error estimation for an exposure boosting operation (i.e., exposure alignment from a lower to a higher exposure (Il → Ih)) may comprise an exposure boosting of Il to obtain I'l, an exposure suppression of I'l to obtain I"l, and an exposure alignment error estimated as |Il - I"l|. The error estimation for an exposure suppression operation can be performed vice-versa: for the exposure suppression operation (i.e., exposure alignment from a higher to a lower exposure (Ih → Il)), it may comprise an exposure suppression of Ih to obtain I'h, an exposure boosting of I'h to obtain I"h, and an exposure alignment error estimated as |Ih - I"h|. Further, a histogram matching method may be used as the exposure alignment operator during exposure alignment of the media frames.
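A hedged sketch of this forward/backward error model is shown below; align stands for any exposure alignment operator with the signature align(source, target), for example the histogram matching sketch given earlier, and the normalisation is an illustrative choice.

```python
from typing import Callable
import numpy as np

def exposure_alignment_error(I_l: np.ndarray, I_h: np.ndarray,
                             align: Callable[[np.ndarray, np.ndarray], np.ndarray]) -> np.ndarray:
    """Error of the boosting operator: round-trip Il -> I'l -> I''l and compare with Il."""
    I_boosted = align(I_l, I_h)           # forward: boost Il to the level of Ih, giving I'l
    I_roundtrip = align(I_boosted, I_l)   # backward: suppress I'l back down, giving I''l
    err = np.abs(I_l.astype(np.float32) - I_roundtrip.astype(np.float32))
    return err / max(float(err.max()), 1e-6)
```

The suppression-direction error |Ih - I"h| would be estimated symmetrically by swapping the roles of the two frames.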
  • FIG. 8 illustrates a schematic diagram of an example scenario for generating high speed auto HDR image, according to an example.
  • In an example, a triple exposure HDR image capture is performed by the electronic device 200. The multiple exposure image frames are captured at time instants, for example, t, t+1, t+2, and so on. The successive image frames may be blended together and progressively propagated to the next stage, as depicted in FIG. 8. Further, tone mapping is performed to generate the final HDR image output.
  • FIG. 9a illustrates a schematic diagram of an example scenario for generating low light de-noised HDR image, according to an example.
  • In an example, if the image comprises over exposed regions, then details in the image may be enhanced using the basis HDR modules, as shown in FIG. 9a. Further, if the image comprises under exposed regions, then noise in the image is reduced, as shown in FIG. 9a.
  • FIG. 9b illustrates a schematic diagram of an example scenario for generating massive HDR image, according to an example.
  • In an example, 13 frames may be used for a multi-exposure massive HDR image. The basis HDR modules may be scaled to produce the massive HDR image and may ensure minimal ghosting artefacts with a higher dynamic range.
  • FIG. 9c illustrates a schematic diagram of an example scenario for recording HDR video, according to an example.
  • In an example, a high quality HDR video recording is generated using multiple basis HDR modules, based on exposing alternate frames differently, to obtain a progressive high dynamic range video.
  • FIG. 10a is a schematic diagram of an example scenario of removed ghost artefacts in saturated regions of the captured image, according to an example.
  • In an example, true ghost artefacts caused by, for example, the hand motion of a human, sunlight, or vehicle motion are eliminated in the generated HDR image, as shown in FIG. 10a.
  • FIG. 10b is a schematic diagram of an example scenario of removed ghost artefacts in dark regions of the captured image, according to an example.
  • In an example, the images may include darker regions, and the darker regions are enhanced using a higher exposure, as shown in FIG. 10b. The details in the image are reproduced appropriately by the basis HDR modules, and the execution time for generating the HDR image is comparatively reduced. For example, 4 frames may be processed in 650 milliseconds, 7 frames in 850 milliseconds, and 10 frames in 1200 milliseconds.
  • FIG. 11a is a flow chart depicting a method 1100a for generating a High Dynamic Range (HDR) media, based on the multi-stage compensation of motion in the captured scene, according to embodiments as disclosed herein.
  • At step 1102, the method 1100a includes aligning an exposure of a plurality of media frames with a high exposure (Ih) level to a low exposure (Il) level, by selecting from a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene. At step 1104, the method 1100a includes generating a first ghost map, based on the generated photometric difference map of the aligned plurality of media frames to the low exposure (Il) level. At step 1106, the method 1100a includes correcting an exposure alignment error of a registered plurality of media frames with the low exposure (Il) level, by performing a forward exposure alignment of the registered plurality of media frames with a low exposure (Il) level to a high exposure (Ih) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (Il) level. At step 1108, the method 1100a includes correcting a media registration error of the aligned at least one of the high exposure level and the low exposure level of the registered plurality of the media frames comprising the plurality of exposure levels. At step 1110, the method 1100a includes removing a plurality of false ghost artefacts in at least one of the corrected exposure alignment error, the corrected media registration error, and the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map. At step 1112, the method 1100a includes generating a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts. At step 1114, the method 1100a includes generating the HDR media, based on the generated second ghost map, wherein the HDR media is generated based on generating a weight map corresponding to the second ghost map associated with the corrected plurality of media frames and blending at least two corrected media frames using the weight maps.
  • The various actions in method 1100a may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11a may be omitted.
  • FIG. 11b is a flow chart depicting a method 1100b for enhancing contrast of the blended media frames by adjusting a media curve to a higher dynamic range, according to embodiments as disclosed herein.
  • At step 1122, the method 1100b includes estimating high frequency noise in the registered plurality of media frames after removing the false ghost artefacts. At step 1124, the method 1100b includes enhancing the registered plurality of media frames after removing the high frequency noise, to generate the second ghost map, by applying a morphological operation. At step 1126, the method 1100b includes identifying the contribution of pixels in the enhanced plurality of media frames by estimating the weight map using a pre-defined lookup table and the generated second ghost map. At step 1128, the method 1100b includes enhancing the contrast of the blended media frames by adjusting a media curve to a higher dynamic range.
  • The various actions in method 1100b may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11b may be omitted.
  • FIG. 11c is a flow chart depicting a method 1100c for providing iteratively the intermediate media frames from the first stage basis HDR modules to (N-1)th stage basis HDR modules, according to embodiments as disclosed herein.
  • At step 1132, the method 1100c includes providing a first set of consecutive exposure frames of the non-registered plurality of media frames, to generate a first set of intermediate media frames from a plurality of first stage basis HDR modules, for processing the HDR media. At step 1134, the method 1100c includes providing the first set of intermediate media frames from the plurality of first stage basis HDR modules to a plurality of second stage basis HDR modules to generate a second set of intermediate media frames. At step 1136, the method 1100c includes providing iteratively the intermediate media frames from the first stage basis HDR modules to the (N-1)th stage basis HDR modules to generate the HDR media.
  • The various actions in method 1100c may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11c may be omitted.
  • FIG. 11d is a flow chart depicting a method 1100d for estimating exposure alignment error, according to embodiments as disclosed herein.
  • At step 1142, the method 1100d includes boosting the exposure of the plurality of media frames with low exposure (Il) level to obtain an aligned exposure (I'l) level. At step 1144, the method 1100d includes suppressing, by the processor (212), the exposure of the plurality of media frames with aligned exposure (I'l) level to obtain a suppressed exposure (I"l) level. At step 1146, the method 1100d includes estimating the exposure alignment error by determining the difference between the media frames with low exposure (Il) level and suppressed exposure (I"l) level, as |Il - I"l|.
  • The various actions in method 1100d may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 11d may be omitted.
  • The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the elements. The elements shown in FIG. 2, 3, 4, 5 can be at least one of a hardware device, or a combination of hardware device and software module.
  • The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the attached claims.

Claims (6)

  1. An apparatus (200) for generating a High Dynamic Range, HDR, media comprising:
    a processor (212); and
    a memory unit (202) coupled to the processor (212), wherein the memory unit (202) comprises a processing module (204) configured to:
    align exposure of a plurality of media frames with a high exposure (Ih) level to a low exposure (Il) level, by performing a pixel based intensity correspondence method on a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene, wherein the method is performed by using histogram matching;
    generate a first photometric difference map by comparing the registered plurality of media frames aligned with the low exposure (Il) level with a non-registered plurality of media frames comprising the plurality of exposure levels, wherein the comparison with a media frame of the low exposure (Il) level is performed by taking a per pixel intensity difference between two images,
    wherein the non-registered plurality is said plurality of media frames;
    generate a first ghost map, based on the generated photometric difference map of the aligned plurality of media frames to the low exposure (Il) level;
    determine an exposure alignment error of the registered plurality of media frames with the low exposure (Il) level, by performing a forward exposure alignment of the registered plurality of media frames with a low exposure (Il) level to a high exposure (Ih) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (Il) level;
    generate a second photometric difference map by comparing the registered plurality of media frames aligned back to the low exposure (Il) level with the media frame of low exposure (Il) level of the non-registered plurality of media frames, wherein the second photometric difference map is the exposure alignment error;
    correct a media registration error of the registered plurality of the media frames comprising the plurality of exposure levels, wherein the media registration error is the error defined by a photometric difference between edge maps of the registered plurality of the media frames;
    remove a plurality of false ghost artefacts in the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map;
    generate a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts; and
    generate the HDR media, based on the generated second ghost map, wherein the HDR media is generated based on generating a weight map corresponding to the second ghost map and blending at least two corrected media frames using the weight maps.
  2. The apparatus (200) of claim 1, wherein the processing module (204) is further configured to:
    estimate high frequency noise in the registered plurality of media frames after removing the false ghost artefacts;
    enhance the registered plurality of media frames after removing the high frequency noise, to generate the second ghost map, by applying morphological operation to the registered plurality of media frames.
  3. The apparatus (200) as claimed in claim 1, wherein the processing module (204) is further configured to:
    boost exposure of the plurality of media frames with low exposure (Il) level to obtain aligned exposure (I'l) level;
    suppress exposure of the plurality of media frames with aligned exposure (I'l) level to obtain suppressed exposure (I"l) level; and
    estimate exposure alignment error by determining difference between media frames with low exposure (Il) level and suppressed exposure (I"l) level, as |Il-I"l|.
  4. A computer- implemented method (1100a) for generating a High Dynamic Range (HDR) media, comprising:
    aligning, by a processor (212), an exposure of a plurality of media frames with a high exposure (Ih) level to a low exposure (Il) level, by performing a pixel based intensity correspondence method on a registered plurality of media frames comprising a plurality of exposure levels corresponding to the captured scene, wherein the method is performed by using histogram matching;
    generating a first photometric difference map by comparing the registered plurality of media frames aligned with the low exposure (Il) level with a non-registered plurality of media frames comprising the plurality of exposure levels, wherein the comparison with a media frame of the low exposure (Il) level is performed by taking a per pixel intensity difference between two images,
    wherein the non-registered plurality is said plurality of media frames;
    generating, by the processor (212), a first ghost map, based on the generated photometric difference map of the aligned plurality of media frames to the low exposure (Il) level;
    determining, by the processor (212), an exposure alignment error of the registered plurality of media frames with the low exposure (Il) level, by performing a forward exposure alignment of the registered plurality of media frames with a low exposure (Il) level to a high exposure (Ih) level, and performing a backward exposure alignment of the forward exposure aligned media frames, back to the low exposure (Il) level;
    generating a second photometric difference map by comparing the registered plurality of media frames aligned back to the low exposure (Il) level with the media frame of low exposure (Il) level of the non-registered plurality of media frames, wherein the second photometric difference map is the exposure alignment error;
    correcting, by the processor (212), a media registration error of the registered plurality of the media frames comprising the plurality of exposure levels, wherein the media registration error is the error defined by a photometric difference between edge maps of the registered plurality of the media frames;
    removing, by the processor (212), a plurality of false ghost artefacts in the first ghost map, corresponding to the registered plurality of media frames, by using the first ghost map and the second photometric difference map;
    generating, by the processor (212), a second ghost map using the generated first ghost map, based on removing the plurality of false ghost artefacts; and
    generating, by the processor (212), the HDR media, based on the generated second ghost map, wherein the HDR media is generated based on generating a weight map corresponding to the second ghost map and blending at least two corrected media frames using the weight maps.
  5. The computer- implemented method (1100a) of claim 4, the method (1100b) further comprises:
    estimating, by the processor (212), high frequency noise in the registered plurality of media frames after removing the false ghost artefacts;
    enhancing, by the processor (212), the registered plurality of media frames after removing the high frequency noise, to generate the second ghost map, by applying morphological operation to the registered plurality of media frames.
  6. The computer- implemented method (1100a) of claim 4, the method (1100d) further comprises:
    boosting, by the processor (212), exposure of the plurality of media frames with low exposure (Il) level to obtain aligned exposure (I'l) level;
    suppressing, by the processor (212), exposure of the plurality of media frames with aligned exposure (I'l) level to obtain suppressed exposure (I"l) level; and
    estimating, by the processor (212), exposure alignment error by determining difference between media frames with low exposure (Il) level and suppressed exposure (I"l) level, as |Il - I"l|.
EP19860846.5A 2018-09-11 2019-09-11 Apparatus and methods for generating high dynamic range media, based on multi-stage compensation of motion Active EP3834170B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201841034147 2018-09-11
PCT/KR2019/011884 WO2020055196A1 (en) 2018-09-11 2019-09-11 Apparatus and methods for generating high dynamic range media, based on multi-stage compensation of motion

Publications (3)

Publication Number Publication Date
EP3834170A1 EP3834170A1 (en) 2021-06-16
EP3834170A4 EP3834170A4 (en) 2021-11-03
EP3834170B1 true EP3834170B1 (en) 2024-06-19

Family

ID=69778643

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19860846.5A Active EP3834170B1 (en) 2018-09-11 2019-09-11 Apparatus and methods for generating high dynamic range media, based on multi-stage compensation of motion

Country Status (3)

Country Link
US (1) US11563898B2 (en)
EP (1) EP3834170B1 (en)
WO (1) WO2020055196A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020117379A1 (en) * 2018-12-06 2020-06-11 Gopro, Inc. High dynamic range anti-ghosting and fusion
CN112785537B (en) * 2021-01-21 2024-07-30 北京小米松果电子有限公司 Image processing method, device and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7142723B2 (en) 2003-07-18 2006-11-28 Microsoft Corporation System and process for generating high dynamic range images from multiple exposures of a moving scene
JP4645736B2 (en) 2008-12-22 2011-03-09 ソニー株式会社 Image processing apparatus, image processing method, and program
US8525900B2 (en) * 2009-04-23 2013-09-03 Csr Technology Inc. Multiple exposure high dynamic range image capture
KR101614914B1 (en) 2009-07-23 2016-04-25 삼성전자주식회사 Motion adaptive high dynamic range image pickup apparatus and method
JP2012257193A (en) * 2011-05-13 2012-12-27 Sony Corp Image processing apparatus, image pickup apparatus, image processing method, and program
KR101699919B1 (en) * 2011-07-28 2017-01-26 삼성전자주식회사 High dynamic range image creation apparatus of removaling ghost blur by using multi exposure fusion and method of the same
US8913153B2 (en) 2011-10-06 2014-12-16 Aptina Imaging Corporation Imaging systems and methods for generating motion-compensated high-dynamic-range images
WO2014099320A1 (en) * 2012-12-17 2014-06-26 The Trustees Of Columbia University In The City Of New York Methods, systems, and media for high dynamic range imaging
US9258490B2 (en) * 2014-02-28 2016-02-09 Konica Minolta Laboratory U.S.A., Inc. Smoothing of ghost maps in a ghost artifact detection method for HDR image creation
CN105931213B (en) 2016-05-31 2019-01-18 南京大学 The method that high dynamic range video based on edge detection and frame difference method removes ghost
US11151731B2 (en) * 2019-08-06 2021-10-19 Samsung Electronics Co., Ltd. Apparatus and method for efficient regularized image alignment for multi-frame fusion

Also Published As

Publication number Publication date
US11563898B2 (en) 2023-01-24
EP3834170A1 (en) 2021-06-16
US20220053117A1 (en) 2022-02-17
EP3834170A4 (en) 2021-11-03
WO2020055196A1 (en) 2020-03-19


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210309

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20211001

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/30 20170101ALI20210927BHEP

Ipc: G06T 5/50 20060101ALI20210927BHEP

Ipc: H04N 5/235 20060101ALI20210927BHEP

Ipc: G06T 5/00 20060101AFI20210927BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230405

REG Reference to a national code

Ref document number: 602019053997

Country of ref document: DE

Ref country code: DE

Ref legal event code: R079

Free format text: PREVIOUS MAIN CLASS: G06T0005000000

Ipc: G06T0005500000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/30 20170101ALI20240118BHEP

Ipc: H04N 23/741 20230101ALI20240118BHEP

Ipc: G06T 5/90 20240101ALI20240118BHEP

Ipc: G06T 5/50 20060101AFI20240118BHEP

INTG Intention to grant announced

Effective date: 20240212

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602019053997

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240619

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240619

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240619

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240822

Year of fee payment: 6

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20240920

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240822

Year of fee payment: 6