CN110930341A - Low-illumination image enhancement method based on image fusion - Google Patents
Low-illumination image enhancement method based on image fusion
- Publication number
- CN110930341A CN110930341A CN201910988092.1A CN201910988092A CN110930341A CN 110930341 A CN110930341 A CN 110930341A CN 201910988092 A CN201910988092 A CN 201910988092A CN 110930341 A CN110930341 A CN 110930341A
- Authority
- CN
- China
- Prior art keywords
- image
- map
- illumination
- fusion
- low
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/80
- G06T2207/10024—Color image (image acquisition modality)
- G06T2207/20172—Image enhancement details
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a low-illumination image enhancement method based on image fusion. First, the image is converted from RGB space to HSV space, and an initial illumination map I is estimated from the V channel. The initial illumination map I is filtered to obtain a more accurate illumination map II. Gamma correction is applied to illumination map II to obtain several intermediate maps. Weights for the intermediate maps are determined and the maps are fused to obtain the final illumination map III. Finally, the reflection map is computed and the image is converted from HSV space back to RGB space to obtain the final enhanced image. The proposed fusion-based algorithm effectively enhances images captured under low illumination, yielding images with strong contrast, natural color, and good visual quality. The algorithm can run on a mobile phone with a single one-key operation, effectively improving low-illumination image quality and the user's photographing experience.
Description
Technical Field
The invention relates to the field of image enhancement, and in particular to a low-illumination image enhancement method based on image fusion that enhances low-light images and determines an optimal reflection map.
Background
With the popularization of smart mobile devices, especially phone cameras, people can take pictures almost anytime and anywhere. Driven by the rapid growth of social media platforms, a large number of photos are produced and shared every day, forming important big data. However, the quality of these visual data is not guaranteed, because their sources are quite open. On the one hand, most people taking pictures are amateurs with little knowledge of photographic technique, and they often choose sub-optimal shooting parameters. On the other hand, many challenging shooting conditions result in poor-quality photographs, such as bad weather, moving objects, and low-light conditions. Low-light images degrade the visual quality of the user experience and hinder content understanding in industrial applications.
Low light is one of the main factors causing poor visual image quality: images captured in the dark or under uneven illumination are distorted. Although software exists that lets a user interactively adjust a photograph, it is tedious and difficult for non-professionals, because it requires simultaneously manipulating color and contrast controls while finely adjusting various objects and details in the photograph.
To make everyday photographing easy, a one-click algorithm is needed. Image enhancement techniques provide a possible solution: image enhancement not only meets the need for a better visual experience, but also improves the reliability and robustness of vision systems, making images easier for image processing systems to analyze and process.
Low-light image enhancement mainly performs adaptive processing on an unsatisfactory captured image so that dark areas are brightened while bright areas are not overexposed. The resulting enhanced image has good contrast, natural illumination and color, and good visual quality.
Disclosure of Invention
The invention provides a low-illumination image enhancement method based on image fusion, which enhances a captured low-illumination image to obtain an image with clear contrast, natural color, good visual quality, and brightened dark areas.
First, the image is converted from RGB space to HSV space, and an initial illumination map I is estimated from the V channel; the initial illumination map I is filtered to obtain a more accurate illumination map II; gamma correction is applied to illumination map II to obtain a plurality of intermediate maps; weights for the intermediate maps are determined and the maps are fused to obtain the final illumination map III; finally, the reflection map is computed and the image is converted from HSV space back to RGB space to obtain the final enhanced image. The method specifically comprises the following steps:
Step 1: convert the image from RGB space to HSV space to obtain the initial illumination map I;
a Retinex-based theoretical model is adopted, and with a simple illumination estimation method the V channel is directly taken as the initial illumination map;
Step 2: filter the initial illumination map I to obtain a more accurate illumination map II;
the initial illumination map I obtained in step 1 is processed with an off-the-shelf filter to obtain illumination map II;
Step 3: apply gamma correction to illumination map II to obtain a plurality of intermediate maps;
a plurality of intermediate maps are generated from illumination map II as fusion sources; gamma correction with different gamma values yields several different intermediate maps;
Step 4: determine the weights of the intermediate maps and fuse them;
the intermediate maps are fused by weighted superposition to obtain the fused image, i.e. the final illumination map III; the weights are determined by a PCA-based method;
Step 5: compute the reflection map and convert from HSV space back to RGB space;
a reflection map corresponding to illumination map III is obtained according to Retinex theory and taken as the final V-channel value; finally, the HSV space is converted back to RGB space; the resulting image is the final enhanced image.
The gamma correction formula is as follows:
T' = T^(1/λ).
The N intermediate maps T1, T2, …, TN obtained in step 3 are each converted into a one-dimensional column vector; the covariance matrix C of the matrix formed by the N column vectors is computed, and its eigenvalues and corresponding eigenvectors are obtained; the eigenvector corresponding to the largest eigenvalue is selected as the direction defining the weights;
the specific formula is as follows:
where ω is the weight of the corresponding intermediate image, and ξ is the eigenvector corresponding to the large eigenvalue;
the finally obtained fusion image is recorded as T:
T=ω1T1+ω2T2+…+ωNTN。
the invention has the following beneficial effects:
the invention provides a fusion-based low-illumination image enhancement algorithm, which can effectively enhance an image to obtain an image with strong contrast, natural color and good visual effect under the condition of low illumination. The algorithm can be applied to the mobile phone, only one-key operation is needed, the quality of the low-illumination image is effectively improved, and the photographing experience of a user is improved.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and examples.
First, the image is converted from RGB space to HSV space, and an initial illumination map is estimated from the V channel; the initial illumination map is filtered to obtain a more accurate illumination map; gamma correction is applied to the illumination map to obtain several intermediate maps; weights for the intermediate maps are determined and the maps are fused to obtain the final illumination map; finally, the reflection map is computed and the image is converted from HSV space back to RGB space to obtain the final enhanced image.
The implementation flow is shown in FIG. 1. The method comprises the following steps:
Step 1: convert the image from RGB space to HSV space to obtain the initial illumination map;
the human eye is more sensitive to brightness than to color. Therefore, the correction of the illumination component is crucial for an algorithm that corrects for images acquired under uneven illumination. For color images, if the red (R), green (G), and blue (B) channels are directly corrected, it is difficult to ensure that all the channels are scaled up or down in a proper ratio, which often results in color distortion of the corrected image. So that the H (hue) S (saturation) V (brightness) space which is friendly to the human visual sense can be considered.
The conversion formulas are as follows:
V = max(R, G, B)
S = 1 − min(R, G, B)/V
the invention adopts a Retinex-based theoretical model, uses a simple illumination prediction method, directly defaults a V channel as an initial illumination map, and is marked as T1.
Step 2: filter the initial illumination map to obtain a more accurate illumination map;
the present invention adopts the existing defined filter to process the initial illumination map obtained in step 1, and the obtained result is taken as a further accurate illumination map and is marked as T2.
Step 3: apply gamma correction to the illumination map to obtain several intermediate maps
At this point only one illumination map has been obtained, and the error of the above estimate is relatively large, so the illumination map is further refined. When only one image is available, a good approach is to generate several intermediate maps from it as fusion sources. A simple way is gamma correction: applying different gamma values yields several different intermediate maps in preparation for the subsequent image fusion.
The correction formula is as follows:
T' = T^(1/λ)
In this embodiment, two different gamma values are used, yielding two different intermediate illumination maps, denoted T3 and T4.
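Gamma correction T' = T^(1/λ) can be sketched as follows; the two λ values are examples for illustration, not settings prescribed by the patent:

```python
import numpy as np

def gamma_correct(T, lam):
    """Apply T' = T**(1/lam) to an illumination map in [0, 1].
    lam > 1 brightens dark regions, lam < 1 darkens them."""
    return np.power(np.asarray(T, dtype=np.float64), 1.0 / lam)

T2 = np.array([[0.25, 0.81]])
T3 = gamma_correct(T2, 2.0)  # square root: [[0.5, 0.9]]
T4 = gamma_correct(T2, 0.5)  # square:      [[0.0625, 0.6561]]
```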
Step 4: determine the weights of the intermediate maps for image fusion
Several different intermediate maps were obtained in step 3; this embodiment obtains the fused map by weighted superposition. A key issue is determining the image weights. The simplest method is to make all weights equal; this embodiment instead uses a PCA-based weight determination method, which gives a good fusion result.
The intermediate maps T3 and T4 obtained in step 3 are converted into one-dimensional vectors by simply unrolling them column by column.
The covariance matrix C of the matrix formed by these two column vectors is then computed; its size is 2 × 2. The eigenvalues and corresponding eigenvectors of this covariance matrix are obtained. Two eigenvalues result, and the eigenvector corresponding to the larger eigenvalue is selected as the direction defining the weights.
The specific formula is as follows:
ω1 = ξ(1) / (ξ(1) + ξ(2)), ω2 = ξ(2) / (ξ(1) + ξ(2))
where ωi is the weight of the corresponding intermediate map and ξ(i) is the i-th component of the eigenvector corresponding to the larger eigenvalue.
The finally obtained fused image is denoted T:
T = ω1·T3 + ω2·T4.
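The PCA-based weighting described above can be sketched as follows. Normalizing the principal eigenvector into weights that sum to one is an assumption (a common PCA-fusion convention), since the patent text does not reproduce the exact formula:

```python
import numpy as np

def pca_fusion_weights(maps):
    """Derive fusion weights from the principal eigenvector of the
    covariance matrix of the flattened intermediate maps."""
    X = np.stack([m.ravel() for m in maps])  # one row per intermediate map
    C = np.cov(X)                            # N x N covariance matrix
    vals, vecs = np.linalg.eigh(C)           # eigh sorts eigenvalues ascending
    xi = np.abs(vecs[:, -1])                 # eigenvector of the largest eigenvalue
    return xi / xi.sum()                     # assumed normalization: weights sum to 1

T3 = np.array([[0.5, 0.9], [0.1, 0.7]])
T4 = np.array([[0.0625, 0.6561], [0.0001, 0.2401]])
w = pca_fusion_weights([T3, T4])
T_fused = w[0] * T3 + w[1] * T4  # final illumination map
```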
and 5: finding a reflection map, converting from HSV space back to RGB space
The final illumination map was obtained in step 4. According to Retinex theory, the corresponding reflection map can then be computed and used as the final V-channel value. Finally, the HSV space is converted back to RGB space; the resulting image is the final enhanced image.
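Under the Retinex model V = T · R, the reflection map of step 5 is the element-wise quotient of the V channel by the illumination map; the epsilon guard and clipping below are defensive assumptions, not details from the patent. The final HSV-to-RGB back-conversion can then use standard formulas (e.g. Python's `colorsys.hsv_to_rgb` applied per pixel):

```python
import numpy as np

def reflection_map(V, T, eps=1e-6):
    """Recover the reflection map R = V / T from the Retinex model
    V = T * R; eps guards against division by zero and clipping keeps
    the result a valid V channel in [0, 1]."""
    R = V / np.maximum(T, eps)
    return np.clip(R, 0.0, 1.0)

V = np.array([[0.2, 0.45]])
T = np.array([[0.4, 0.9]])
R = reflection_map(V, T)  # [[0.5, 0.5]]
```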
Claims (4)
1. A low-illumination image enhancement method based on image fusion, characterized in that the method first converts an image from RGB space to HSV space and estimates an initial illumination map I from the V channel; filters the initial illumination map I to obtain a more accurate illumination map II; applies gamma correction to illumination map II to obtain a plurality of intermediate maps; determines weights for the intermediate maps and fuses them to obtain a final illumination map III; and computes a reflection map and converts from HSV space back to RGB space to obtain a final enhanced image.
2. The low-illumination image enhancement method based on image fusion according to claim 1, characterized by comprising the following steps:
Step 1: convert the image from RGB space to HSV space to obtain the initial illumination map I;
a Retinex-based theoretical model is adopted, and with a simple illumination estimation method the V channel is directly taken as the initial illumination map;
Step 2: filter the initial illumination map I to obtain a more accurate illumination map II;
the initial illumination map I obtained in step 1 is processed with an off-the-shelf filter to obtain illumination map II;
Step 3: apply gamma correction to illumination map II to obtain a plurality of intermediate maps;
a plurality of intermediate maps are generated from illumination map II as fusion sources; gamma correction with different gamma values yields several different intermediate maps;
Step 4: determine the weights of the intermediate maps and fuse them;
the intermediate maps are fused by weighted superposition to obtain the fused image, i.e. the final illumination map III; the weights are determined by a PCA-based method;
Step 5: compute the reflection map and convert from HSV space back to RGB space;
a reflection map corresponding to illumination map III is obtained according to Retinex theory and taken as the final V-channel value; finally, the HSV space is converted back to RGB space; the resulting image is the final enhanced image.
3. The low-illumination image enhancement method based on image fusion according to claim 2, characterized in that the gamma correction formula is as follows:
T' = T^(1/λ).
4. The method according to claim 3, characterized in that the N intermediate maps T1, T2, …, TN obtained in step 3 are each converted into a one-dimensional column vector; the covariance matrix C of the matrix formed by the N column vectors is computed, and its eigenvalues and corresponding eigenvectors are obtained; the eigenvector corresponding to the largest eigenvalue is selected as the direction defining the weights;
the specific formula is as follows:
where ω is the weight of the corresponding intermediate image, and ξ is the eigenvector corresponding to the large eigenvalue;
the finally obtained fused image is denoted T:
T = ω1·T1 + ω2·T2 + … + ωN·TN.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910988092.1A CN110930341A (en) | 2019-10-17 | 2019-10-17 | Low-illumination image enhancement method based on image fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910988092.1A CN110930341A (en) | 2019-10-17 | 2019-10-17 | Low-illumination image enhancement method based on image fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110930341A true CN110930341A (en) | 2020-03-27 |
Family
ID=69849249
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910988092.1A Pending CN110930341A (en) | 2019-10-17 | 2019-10-17 | Low-illumination image enhancement method based on image fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110930341A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107507138A (en) * | 2017-07-27 | 2017-12-22 | 北京大学深圳研究生院 | A kind of underwater picture Enhancement Method based on Retinex model |
CN108053374A (en) * | 2017-12-05 | 2018-05-18 | 天津大学 | A kind of underwater picture Enhancement Method of combination bilateral filtering and Retinex |
CN109191390A (en) * | 2018-08-03 | 2019-01-11 | 湘潭大学 | A kind of algorithm for image enhancement based on the more algorithm fusions in different colours space |
Non-Patent Citations (1)
Title |
---|
张雷等: "差异特征指数测度的红外偏振与光强图像多算法融合", 《火力与指挥控制》 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112541859A (en) * | 2019-09-23 | 2021-03-23 | 武汉科技大学 | Illumination self-adaptive face image enhancement method |
CN112541859B (en) * | 2019-09-23 | 2022-11-25 | 武汉科技大学 | Illumination self-adaptive face image enhancement method |
CN111968188A (en) * | 2020-07-08 | 2020-11-20 | 华南理工大学 | Low-illumination image enhancement processing method, system, device and storage medium |
CN111968188B (en) * | 2020-07-08 | 2023-08-22 | 华南理工大学 | Low-light image enhancement processing method, system, device and storage medium |
CN111932471A (en) * | 2020-07-24 | 2020-11-13 | 山西大学 | Double-path exposure degree fusion network model and method for low-illumination image enhancement |
CN111932471B (en) * | 2020-07-24 | 2022-07-19 | 山西大学 | Double-path exposure degree fusion network model and method for low-illumination image enhancement |
CN113129236A (en) * | 2021-04-25 | 2021-07-16 | 中国石油大学(华东) | Single low-light image enhancement method and system based on Retinex and convolutional neural network |
WO2023272506A1 (en) * | 2021-06-29 | 2023-01-05 | 深圳市大疆创新科技有限公司 | Image processing method and apparatus, movable platform and storage medium |
CN113744151A (en) * | 2021-08-31 | 2021-12-03 | 平安科技(深圳)有限公司 | Method, device and equipment for processing images to be diagnosed and storage medium |
CN114372941A (en) * | 2021-12-16 | 2022-04-19 | 佳源科技股份有限公司 | Low-illumination image enhancement method, device, equipment and medium |
CN114372941B (en) * | 2021-12-16 | 2024-04-26 | 佳源科技股份有限公司 | Low-light image enhancement method, device, equipment and medium |
Legal Events

Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200327 |