CN109636765B - High dynamic display method based on image multiple exposure fusion - Google Patents

High dynamic display method based on image multiple exposure fusion

Info

Publication number
CN109636765B
Authority
CN
China
Prior art keywords
image
exposure
detail
layer
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811332568.8A
Other languages
Chinese (zh)
Other versions
CN109636765A (en)
Inventor
史超超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
TCL China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL China Star Optoelectronics Technology Co Ltd filed Critical TCL China Star Optoelectronics Technology Co Ltd
Priority to CN201811332568.8A priority Critical patent/CN109636765B/en
Priority to PCT/CN2019/072435 priority patent/WO2020093600A1/en
Publication of CN109636765A publication Critical patent/CN109636765A/en
Application granted granted Critical
Publication of CN109636765B publication Critical patent/CN109636765B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a high dynamic display method based on image multiple exposure fusion, which comprises an original image input step, a multiple exposure image generation step, a human eye region-of-interest information extraction step, a human eye region-of-interest weight calculation step, a base layer and detail layer extraction step, and an image fusion step. The method generates a plurality of exposure images with different exposure values from an original image, extracts an image base layer and an image detail layer from each exposure image, fuses the image base layers with specific weight values, fuses the image detail layers with specific detail enhancement coefficients, and finally fuses the fused base layer and the fused detail layer a second time to obtain a fused image with a better display effect.

Description

High dynamic display method based on image multiple exposure fusion
Technical Field
The invention relates to a high dynamic display method based on image multiple exposure fusion, which enhances an image region by region: the human eye regions of interest of differently exposed images are enhanced separately, and more detail is retained in those regions, thereby improving the overall visual effect of the image.
Background
Digital cameras have been replacing traditional film cameras for many years because they can take pictures quickly without developing negatives. However, an image captured by a digital camera is easily overexposed in bright areas and underexposed in dark areas, so multi-exposure fusion methods have been developed to digitally post-process the camera image and obtain a high dynamic display image.
The multi-exposure fusion methods of the prior art generate multiple exposure images by applying a suitable exposure function to a single image. The weight of each exposure image is then calculated using the mean value of the image as the center value.
However, each differently exposed image emphasizes different content: a low-exposure image mainly preserves the brightest regions (e.g., the sky), whereas what the brightest exposure image needs to enhance is the detail of the darker areas. Because the prior art weights every region of an image uniformly, it cannot obtain a good display effect, and the resulting image is often washed out or blurry.
Therefore, it is necessary to provide a high dynamic display method based on image multiple exposure fusion to solve the problems of the prior art.
Disclosure of Invention
The invention provides a high dynamic display method based on image multiple exposure fusion, which aims to solve the problem that an image processed by the prior art is often washed out or blurry.
A primary object of the invention is to provide a high dynamic display method of multiple exposure fusion, which comprises the following steps:
a multi-exposure image generation step including generating a plurality of exposure images from an original image using an appropriate sigmoid function;
a human eye region-of-interest information extraction step, including extracting a human eye region of interest in each exposure image through an image saliency-based model;
a human eye region-of-interest weight calculation step, including calculating the weight value of each human eye region-of-interest in each exposure image;
a base layer and detail layer extraction step, including extracting an image base layer and an image detail layer from each of the exposure images; and
an image fusion step, including fusing all the image base layers to generate a fused base layer, fusing all the image detail layers to generate a fused detail layer, and finally fusing the fused base layer and the fused detail layer to generate a fused image.
In an embodiment of the present invention, the method further includes: a computer providing step including providing a computer; an original image input step including inputting the original image to the computer; the multi-exposure image generation step, the human eye region-of-interest information extraction step, the human eye region-of-interest weight calculation step, the base layer and detail layer extraction step, and the image fusion step are executed through the computer operation.
In an embodiment of the invention, the multi-exposure fused high dynamic display method further comprises an output step, wherein the output step comprises outputting the fused image to an external electronic device through the computer.
In an embodiment of the present invention, the exposure values of the plurality of exposure images are different.
In an embodiment of the invention, the plurality of exposure images are gray-scale images.
In an embodiment of the invention, the base layer and detail layer extracting step extracts the image base layer and the image detail layer from each of the exposure images using a principal component analysis method.
In an embodiment of the present invention, the step of fusing the images includes: and an image base layer fusion step, including performing weighted fusion on the plurality of image base layers according to the weight value of each human eye region of interest in each exposure image through the computer operation, so as to generate the fusion base layer.
In an embodiment of the present invention, the step of fusing the images further includes: and an image detail layer fusion step, which comprises the step of fusing a plurality of image detail layers through computer operation so as to generate the fused detail layer, wherein the image detail layer fusion step comprises the steps of firstly generating a plurality of detail enhancement coefficients and then fusing the plurality of image detail layers through a detail layer fusion calculation formula.
In an embodiment of the present invention, the step of fusing the images further includes: a fused image generating step of fusing the fused base layer and the fused detail layer by the computer operation to generate the fused image.
In an embodiment of the present invention, the detail enhancement coefficient is 1 + sqrt(Std(L)/255 - Std(B)/255); wherein L represents the gray value of the exposure image of the original image, B represents the gray value of the image base layer, sqrt represents the square root, and Std represents the standard deviation.
In an embodiment of the present invention, the detail layer fusion calculation formula is as follows: D = (a x D1 + b x D2 + c x D3) / (a + b + c); wherein D1, D2, D3 respectively represent gray-scale values of the plurality of image detail layers, and a, b, c are the detail enhancement coefficients.
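For example, on an 8-bit scale, if Std(L) = 60 and Std(B) = 40, the detail enhancement coefficient is 1 + sqrt(60/255 - 40/255) = 1 + sqrt(0.078) ≈ 1.28, so an exposure image whose base layer discards more of its variation receives stronger detail enhancement.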
In an embodiment of the invention, the fusion base layer and the fusion detail layer are fused into the fusion image through a linear superposition method.
Compared with the prior art, the high dynamic display method of image multiple exposure fusion generates a plurality of exposure images with different exposure values from an original image, extracts an image base layer and an image detail layer from each exposure image, fuses the image base layers with specific weight values, fuses the image detail layers with specific detail enhancement coefficients, and finally performs a second fusion on the fused base layer and the fused detail layer to obtain a fused image. Because the fused image obtained by the method of the invention has its human eye regions of interest detail-enhanced, the invention achieves a better display effect than prior art image enhancement methods that optimize according to the average value of the image.
In order to make the aforementioned and other objects of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below:
drawings
FIG. 1 is a block diagram of a computer suitable for the high dynamic display method of image multi-exposure fusion according to the present invention.
FIG. 2 is a block diagram of an image processing flow of the high dynamic display method of image multi-exposure fusion according to the present invention.
FIG. 3 is a flowchart illustrating the steps of the method for displaying high dynamic image by multi-exposure fusion according to the present invention.
FIG. 4 is a flowchart of another step of the method for displaying high dynamic image by multi-exposure fusion according to the present invention.
FIG. 5 is a schematic diagram of an original image of the high dynamic display method for multi-exposure fusion of images according to the present invention.
FIG. 6 is a schematic diagram of a low-exposure image of an original image in the high-dynamic display method of image multi-exposure fusion according to the present invention.
FIG. 7 is a schematic diagram of an intermediate exposure image of an original image in the high dynamic display method of image multi-exposure fusion according to the present invention.
FIG. 8 is a schematic diagram of a high-exposure image of an original image in the method for displaying high dynamic image fusion by multiple exposures according to the present invention.
FIG. 9 is a schematic diagram of an exposure image of another embodiment of an original image of the high dynamic display method for image multi-exposure fusion according to the present invention.
FIG. 10 is a diagram illustrating an image base layer of an original image according to another embodiment of the present invention.
FIG. 11 is a schematic diagram of image detail layers of an original image in another embodiment of the method for displaying high dynamic image with multi-exposure blending according to the present invention.
Detailed Description
Referring to fig. 1 and fig. 3, the high dynamic display method of image multi-exposure fusion according to the present invention can be executed by a computer 10, and the method comprises: the computer providing step S01, the original image input step S02, the multi-exposure image generation step S03, the human eye region-of-interest information extraction step S04, the human eye region-of-interest weight calculation step S05, the base layer and detail layer extraction step S06, the image fusion step S07, and the output step S08.
Referring to fig. 1 and 3, the computer providing step S01 includes providing the computer 10. The computer 10 may be a personal computer, a smart phone, a tablet computer, a smart watch, etc. In addition, the computer 10 at least includes a CPU 11, a memory 12, a storage 13, an input interface 14, and an output interface 15 electrically connected to each other. The memory 12 may be a Dynamic Random Access Memory (DRAM). The storage 13 may be a Hard Disk Drive (HDD) or a Solid State Drive (SSD). The input interface 14 may be an electrical connector, such as a Universal Serial Bus (USB) connector. The output interface 15 may be an electrical connector, such as a High Definition Multimedia Interface (HDMI) connector, for outputting images to an external electronic device, such as a liquid crystal display panel.
Referring to fig. 5, the original image input step S02 includes inputting an original image OG to the computer 10. In the embodiment of the present invention, the original image OG can be any picture; in the embodiment shown in fig. 5, the original image OG is a landscape image having features such as sky, distant view, and close view.
Referring to fig. 6 to 8, the multi-exposure image generating step S03 includes generating a plurality of exposure images L1, L2, and L3 by the computer 10 using an appropriate sigmoid (S-shaped) function, as shown in fig. 2. The plurality of exposure images L1, L2, and L3 are grayscale images. The exposure values of the exposure images L1, L2, and L3 are different; they may be a low-exposure image L1, a medium-exposure image L2, and a high-exposure image L3, respectively.
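The patent does not spell out the exact sigmoid used in step S03; the following minimal Python/NumPy sketch shows one plausible form, in which the gain values and the steepness parameter are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def generate_exposures(gray, gains=(0.5, 1.0, 2.0), steepness=8.0):
    """Generate low/medium/high exposure grayscale images from one image.

    gray: H x W luminance array normalized to [0, 1].
    gains and steepness are illustrative; the patent only states that an
    appropriate sigmoid (S-shaped) function is used.
    """
    exposures = []
    for g in gains:
        # Scale luminance to simulate an exposure change, then apply a
        # sigmoid tone curve centered at mid-gray.
        x = np.clip(gray * g, 0.0, 1.0)
        exposures.append(1.0 / (1.0 + np.exp(-steepness * (x - 0.5))))
    return exposures  # [L1 (low), L2 (medium), L3 (high)]
```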
The human eye region-of-interest information extracting step S04 includes extracting, by the computer 10, the human eye regions of interest R1, R2, R3 in the exposure images L1, L2, L3 through a Graph-Based Visual Saliency model (GBVS).
The region of interest R1 of the lowest-exposure image is the bright sky area; the region of interest R3 of the highest-exposure image is the darkest close-range area; and the region of interest R2 of the medium-exposure image is the distant-view area.
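GBVS itself is an involved graph-based algorithm, so the sketch below uses spectral-residual saliency (Hou & Zhang, 2007) as a lightweight stand-in, not the GBVS model the patent names; thresholding the saliency map yields a binary region of interest:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def saliency_roi(gray, thresh=0.5):
    """Spectral-residual saliency map and thresholded ROI mask.

    gray: H x W float array in [0, 1]. A stand-in for GBVS, not GBVS itself.
    """
    F = np.fft.fft2(gray)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    # Spectral residual: log amplitude minus its local average.
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal, sal > thresh
```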
The human eye region-of-interest weight calculating step S05 includes calculating the region mean value Lmed,k (k = 1, 2, 3) of each exposure image, establishing a Gaussian weight function for each exposure image, and finally calculating the weight value Wk(i, j) of the human eye region of interest in each exposure image L1, L2, L3, as follows:
Wk(i, j) = exp(-(Lk(i, j) - Lmed,k)^2 / (2 x sigma^2)), k = 1, 2, 3;
wherein Lk(i, j) is the gray value of the k-th exposure image at pixel (i, j), and sigma is the standard deviation of the Gaussian weight function.
The weight values W1, W2, W3 of the exposure images L1, L2, L3 are obtained from the above calculation formula.
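Under the Gaussian form given above, a weight map can be computed per exposure image as follows; centering the Gaussian on the mean of the saliency-derived region of interest, and the value of sigma, are assumptions of this sketch:

```python
import numpy as np

def gaussian_weight(L, roi_mask, sigma=0.2):
    """Weight map Wk(i, j) centered on the region-of-interest mean Lmed,k.

    L: exposure image in [0, 1]; roi_mask: boolean mask from the saliency step.
    sigma is an illustrative spread parameter.
    """
    region = L[roi_mask] if roi_mask.any() else L  # fall back to whole image
    L_med = region.mean()                          # region mean Lmed,k
    return np.exp(-((L - L_med) ** 2) / (2.0 * sigma ** 2))
```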
The base layer and detail layer extracting step S06 includes extracting, by the computer 10, image base layers B1, B2, B3 and image detail layers D1, D2, D3 from the respective exposure images L1, L2, L3 by a Principal Component Analysis (PCA) method, as shown in fig. 2; wherein the image detail layers D1, D2, D3 are obtained by subtracting the image base layers B1, B2, B3 from the exposure images L1, L2, L3. The PCA method is well known to those skilled in the art and is not described further herein.
The base layer and detail layer extracting step S06 includes a luminance extracting step, an image base layer generating step, and an image detail layer generating step.
The luminance extracting step includes extracting luminance values from the low-exposure image L1, the medium-exposure image L2, and the high-exposure image L3 generated from the original image OG, and performing PCA dimensionality reduction by the operation of the computer 10, retaining the first k principal components whose cumulative contribution ratio exceeds 95%.
The image base layer generating step includes reconstructing the images from the retained components by inverse PCA transformation through the operation of the computer 10, thereby obtaining the image base layers B1, B2, and B3.
The image detail layer generating step includes subtracting the image base layers B1, B2, and B3 from the exposure images L1, L2, and L3 by the operation of the computer 10 to obtain the image detail layers D1, D2, and D3.
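A minimal sketch of this PCA split, treating the image rows as the observation vectors (the patent does not fix the data layout, so that arrangement is an assumption):

```python
import numpy as np

def pca_base_detail(L, contribution=0.95):
    """Split an exposure image L into base layer B and detail layer D = L - B.

    Keeps the first k principal components whose cumulative contribution
    exceeds the given ratio, then reconstructs by inverse PCA transformation.
    """
    mean = L.mean(axis=0, keepdims=True)
    X = L - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(energy, contribution)) + 1  # first k components
    B = (U[:, :k] * S[:k]) @ Vt[:k] + mean              # base layer
    D = L - B                                           # detail layer
    return B, D
```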
Fig. 9 to 11 are schematic diagrams of related images according to another embodiment of the invention, wherein fig. 9 is an exposure image L of another original image OG, fig. 10 is a base layer B of the exposure image L, and fig. 11 is a detail layer D of the exposure image L.
Referring to fig. 4, the image fusion step S07 includes fusing all image base layers B1, B2, and B3 to generate a fused base layer CB, fusing all image detail layers D1, D2, and D3 to generate a fused detail layer CD, and finally fusing the fused base layer CB and the fused detail layer CD to generate a fused image CG. In the preferred embodiment of the present invention, the image fusion step S07 includes an image base layer fusion step S07a, an image detail layer fusion step S07b, and a fused image generation step S07c.
The image base layer fusing step S07a includes performing, by the computer 10, weighted fusion on the plurality of image base layers B1, B2, and B3 according to the weight value Wk (i, j) of each of the regions of interest of human eyes in each of the exposure images L1, L2, and L3 to generate a fused base layer CB. In the preferred embodiment of the present invention, the image base layer fusion step S07a is performed by a base layer fusion calculation formula, which is as follows:
CB = B1 x W1 + B2 x W2 + B3 x W3.
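A direct transcription of this weighted fusion; the normalization by the weight sum is an added safeguard that keeps the result in range when the weight maps are not pre-normalized (the formula as printed omits it):

```python
def fuse_base_layers(bases, weights, eps=1e-8):
    """CB(i, j) = sum over k of Bk(i, j) * Wk(i, j), normalized by total weight.

    bases, weights: equal-length lists of H x W NumPy arrays.
    """
    numerator = sum(B * W for B, W in zip(bases, weights))
    return numerator / (sum(weights) + eps)
```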
the image detail layer fusion step S07b includes fusing, by the computer 10, a plurality of image detail layers D1, D2, D3 to generate a fused detail layer CD. In the preferred embodiment of the present invention, the image detail layer fusion step S07b includes generating a plurality of detail enhancement coefficients a, b, c, and then performing fusion of the image detail layers D1, D2, D3 by a detail layer fusion calculation formula.
The detail enhancement coefficients a, b, and c are determined by the differences between the exposure images L1, L2, and L3 of the original image OG and the image base layers B1, B2, and B3: the greater the difference, the greater the detail enhancement. The formula applied is as follows:
The detail enhancement coefficient a, b, or c = 1 + sqrt(Std(L)/255 - Std(B)/255); wherein L represents the gray values of the exposure images L1, L2, L3 of the original image OG, B represents the gray value of the corresponding image base layer, sqrt represents the square root, and Std represents the standard deviation. Further, the detail enhancement coefficients a, b, and c may also be referred to as detail adaptive coefficients.
The detail layer fusion calculation formula is as follows:
D = (a x D1 + b x D2 + c x D3) / (a + b + c); wherein D1, D2, and D3 respectively represent the gray values of the image detail layers D1, D2, and D3, and a, b, and c are the detail enhancement coefficients.
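Both formulas in Python; the clamp inside the square root is an added safeguard, since the printed formula would take the square root of a negative number whenever Std(B) exceeds Std(L):

```python
import numpy as np

def detail_coefficient(L, B):
    """a = 1 + sqrt(Std(L)/255 - Std(B)/255), with L and B as 8-bit gray values."""
    diff = (np.std(L) - np.std(B)) / 255.0
    return 1.0 + np.sqrt(max(diff, 0.0))  # clamp: an assumption of this sketch

def fuse_detail_layers(details, coeffs):
    """D = (a x D1 + b x D2 + c x D3) / (a + b + c)."""
    return sum(a * D for a, D in zip(coeffs, details)) / sum(coeffs)
```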
The fused image generating step S07c includes fusing, by the computer 10, the fused base layer CB and the fused detail layer CD to generate the fused image CG. In a preferred embodiment of the present invention, the fused base layer CB and the fused detail layer CD are fused by a linear superposition method.
The outputting step S08 includes outputting, by the computer 10, the fused image CG to an external electronic device, for example, to a display.
Compared with the prior art, the high dynamic display method of image multiple exposure fusion according to the present invention generates a plurality of exposure images L1, L2, and L3 with different exposure values from an original image OG, extracts image base layers B1, B2, B3 and image detail layers D1, D2, D3 from the exposure images L1, L2, L3 respectively, fuses the image base layers B1, B2, B3 with specific weight values, fuses the image detail layers D1, D2, D3 with specific detail enhancement coefficients, and finally performs a second fusion on the fused base layer CB and the fused detail layer CD to obtain a fused image CG. Because the fused image obtained by the method of the invention has already had its human eye regions of interest R1, R2, R3 detail-enhanced, the invention achieves a better display effect than prior art image enhancement methods that optimize according to the average value of the image. In addition, the method of the invention has the following advantages:
1. The invention performs region-wise enhancement based on the human visual system (HVS), separately enhancing the human eye regions of interest of the differently exposed images and retaining more image detail.
2. The PCA-based method of the invention generates the base layers and detail layers faster than guided or bilateral filtering, with essentially the same effect.
3. Compared with the traditional method, which uses an empirical amplification coefficient, the adaptive detail enhancement coefficients of the invention balance the differences between the original image OG and the image detail layers D1, D2, and D3, effectively avoiding noise amplification and insufficient detail enhancement in the image detail layers D1, D2, and D3.
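Putting the steps together, a minimal end-to-end sketch, assuming the helper functions sketched under the individual steps above are in scope and with all parameter values illustrative:

```python
import numpy as np

def hdr_multi_exposure_fusion(og_gray):
    """og_gray: H x W original-image luminance as 8-bit gray values (0-255)."""
    # S03: generate low / medium / high exposure images in [0, 1].
    exposures = generate_exposures(og_gray / 255.0)
    # S04 + S05: saliency ROI and Gaussian weight map per exposure image.
    weights = []
    for L in exposures:
        _, roi = saliency_roi(L)
        weights.append(gaussian_weight(L, roi))
    # S06: PCA base/detail split and detail coefficients on 8-bit gray values.
    bases, details, coeffs = [], [], []
    for L in exposures:
        L8 = L * 255.0
        B, D = pca_base_detail(L8)
        bases.append(B)
        details.append(D)
        coeffs.append(detail_coefficient(L8, B))
    # S07: fuse base layers, fuse detail layers, then linear superposition.
    CB = fuse_base_layers(bases, weights)
    CD = fuse_detail_layers(details, coeffs)
    return np.clip(CB + CD, 0, 255)  # fused image CG
```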

Claims (5)

1. A multi-exposure fusion high dynamic display method is characterized by comprising the following steps:
a multi-exposure image generation step including generating a plurality of exposure images from an original image using an appropriate sigmoid function;
a human eye region-of-interest information extraction step, including extracting a human eye region of interest in each exposure image through an image saliency-based model;
a human eye region-of-interest weight calculation step, including calculating the weight value of each human eye region-of-interest in each exposure image;
a base layer and detail layer extraction step, including extracting an image base layer and an image detail layer from each of the exposure images; and
an image fusion step, including fusing all image base layers to generate a fused base layer, fusing all image detail layers to generate a fused detail layer, and finally fusing the fused base layer and the fused detail layer to generate a fused image;
the step of calculating the weight of the human eye region of interest includes calculating a region mean value Lmed,k of each exposure image Lk(i, j), establishing a Gaussian weight function for each exposure image, and finally calculating a weight value Wk(i, j) of each human eye region of interest in each exposure image Lk(i, j), as follows:
Wk(i, j) = exp(-(Lk(i, j) - Lmed,k)^2 / (2 x sigma^2)); wherein sigma is the standard deviation of the Gaussian weight function;
the method further comprises the following steps:
a computer providing step including providing a computer;
an original image input step including inputting the original image to the computer; and
the multi-exposure image generation step, the human eye region-of-interest information extraction step, the human eye region-of-interest weight calculation step, the base layer and detail layer extraction step and the image fusion step are executed through the computer operation;
the step of fusing the images includes: an image base layer fusion step, which includes performing weighted fusion on the plurality of image base layers according to the weight value of each human eye region of interest in each exposure image through the computer operation to generate a fusion base layer;
the step of fusing images further comprises: an image detail layer fusion step, which includes fusing a plurality of image detail layers through the computer operation to generate the fused detail layer, wherein the image detail layer fusion step includes generating a plurality of detail enhancement coefficients, and then fusing the plurality of image detail layers through a detail layer fusion calculation formula;
the detail enhancement coefficient is 1 + sqrt(Std(L)/255 - Std(B)/255); wherein L represents the gray value of the exposure image of the original image, B represents the gray value of the base layer, sqrt represents the square root, and Std represents the standard deviation;
the detail layer fusion calculation formula is as follows: D = (a x D1 + b x D2 + c x D3) / (a + b + c); wherein D1, D2, D3 respectively represent gray-scale values of the plurality of image detail layers, and a, b, c are the detail enhancement coefficients.
2. The multi-exposure fused high dynamic display method according to claim 1, wherein: the multi-exposure fusion high-dynamic display method further comprises an output step, wherein the output step comprises outputting the fusion image to an external electronic device through the computer.
3. The multi-exposure fused high dynamic display method according to claim 1, wherein: the exposure values of the plurality of exposure images are different.
4. The multi-exposure fused high dynamic display method according to claim 1, wherein: the multiple exposure images are gray-scale images.
5. The multi-exposure fused high dynamic display method according to claim 1, wherein: the base layer and detail layer extracting step extracts the image base layer and the image detail layer from each of the exposure images using a principal component analysis method.
CN201811332568.8A 2018-11-09 2018-11-09 High dynamic display method based on image multiple exposure fusion Active CN109636765B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811332568.8A CN109636765B (en) 2018-11-09 2018-11-09 High dynamic display method based on image multiple exposure fusion
PCT/CN2019/072435 WO2020093600A1 (en) 2018-11-09 2019-01-18 Highly dynamic display method based on multi-exposure fusion of images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811332568.8A CN109636765B (en) 2018-11-09 2018-11-09 High dynamic display method based on image multiple exposure fusion

Publications (2)

Publication Number Publication Date
CN109636765A CN109636765A (en) 2019-04-16
CN109636765B (en) 2021-04-02

Family

ID=66067647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811332568.8A Active CN109636765B (en) 2018-11-09 2018-11-09 High dynamic display method based on image multiple exposure fusion

Country Status (2)

Country Link
CN (1) CN109636765B (en)
WO (1) WO2020093600A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110087003B (en) * 2019-04-30 2021-03-23 Tcl华星光电技术有限公司 Multi-exposure image fusion method
CN110602384B (en) * 2019-08-27 2022-03-29 维沃移动通信有限公司 Exposure control method and electronic device
CN111898532A (en) * 2020-07-30 2020-11-06 杭州海康威视数字技术股份有限公司 An image processing method, device, electronic equipment and monitoring system
CN112288664B (en) * 2020-09-25 2025-03-07 原力图新(重庆)科技有限公司 High dynamic range image fusion method, device and electronic device
CN113610861B (en) * 2021-06-21 2023-11-14 重庆海尔制冷电器有限公司 Image processing method of food ingredients in refrigeration equipment, refrigeration equipment and readable storage medium
CN113628141B (en) * 2021-08-18 2023-11-28 上海磐启微电子有限公司 HDR detail enhancement method based on high-low exposure image fusion
CN117061841B (en) * 2023-06-12 2024-06-25 深圳市博盛医疗科技有限公司 Dual-wafer endoscope imaging method and imaging device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101633893B1 (en) * 2010-01-15 2016-06-28 삼성전자주식회사 Apparatus and Method for Image Fusion
KR20130031574A (en) * 2011-09-21 2013-03-29 삼성전자주식회사 Image processing method and image processing apparatus
CN103247036B * 2012-02-10 2016-05-18 株式会社理光 Multi-exposure image fusion method and device
CN104077759A (en) * 2014-02-28 2014-10-01 西安电子科技大学 Multi-exposure image fusion method based on color perception and local quality factors
CN105279746B (en) * 2014-05-30 2018-01-26 西安电子科技大学 A Multi-exposure Image Fusion Method Based on Bilateral Filtering
CN105809641B * 2016-03-09 2018-02-16 北京理工大学 Exposure compensation and edge enhancement method for a dehazed image
CN106815827A * 2017-01-18 2017-06-09 聚龙智瞳科技有限公司 Image fusion method and image fusion device based on Bayer format
CN107220956A * 2017-04-18 2017-09-29 天津大学 HDR image fusion method based on several LDR images with different exposures

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
High dynamic range exposure algorithm for digital video cameras and its implementation; 杨镔 (Yang Bin) et al.; 《传感技术学报》 (Chinese Journal of Sensors and Actuators); 2011-01-31; Vol. 24, No. 1; pp. 68-72 *

Also Published As

Publication number Publication date
CN109636765A (en) 2019-04-16
WO2020093600A1 (en) 2020-05-14

Similar Documents

Publication Publication Date Title
CN109636765B (en) High dynamic display method based on image multiple exposure fusion
Lv et al. Attention guided low-light image enhancement with a large scale low-light simulation dataset
Lee et al. Deep chain hdri: Reconstructing a high dynamic range image from a single low dynamic range image
Zhang et al. Dual illumination estimation for robust exposure correction
Wang et al. Pseudo-multiple-exposure-based tone fusion with local region adjustment
CN111669514B (en) High dynamic range imaging method and apparatus
CN112602088B (en) Methods, systems and computer-readable media for improving the quality of low-light images
US11127117B2 (en) Information processing method, information processing apparatus, and recording medium
WO2021063341A1 (en) Image enhancement method and apparatus
CN113688907B (en) A model training and video processing method, which comprises the following steps, apparatus, device, and storage medium
CN116051428B (en) Deep learning-based combined denoising and superdivision low-illumination image enhancement method
CN111047543B (en) Image enhancement method, device and storage medium
CN110766153A (en) Neural network model training method and device and terminal equipment
Feng et al. Low-light image enhancement algorithm based on an atmospheric physical model
CN115578286A (en) High dynamic range hybrid exposure imaging method and apparatus
Singh et al. Weighted least squares based detail enhanced exposure fusion
Masood et al. Automatic Correction of Saturated Regions in Photographs using Cross‐Channel Correlation
CN110351489B (en) Method and device for generating HDR image and mobile terminal
CN112651911A (en) High dynamic range imaging generation method based on polarization image
CN116468636A (en) Low illumination enhancement method, device, electronic device and readable storage medium
CN116071279A (en) Image processing method, device, computer equipment and storage medium
CN105450943B (en) Method for generating image bokeh effect and image acquisition device
CN115482159A (en) Image enhancement method and apparatus
CN103595933B (en) A kind of noise-reduction method of image
CN111126568B (en) Image processing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 9-2 Tangming Avenue, Guangming New District, Shenzhen City, Guangdong Province

Applicant after: TCL China Star Optoelectronics Technology Co.,Ltd.

Address before: 9-2 Tangming Avenue, Guangming New District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen China Star Optoelectronics Technology Co.,Ltd.

GR01 Patent grant