CN115937024A - Multi-frame fusion low-illumination image enhancement method based on Retinex theory - Google Patents
- Publication number: CN115937024A
- Application number: CN202211519235.2A
- Authority: CN (China)
- Prior art keywords: image, component, frame, illumination, input
- Prior art date: 2022-11-30
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a multi-frame fusion low-illumination image enhancement method based on the Retinex theory, comprising the following steps: converting an input low-illumination image from RGB space to HSV space to obtain a hue component H, a brightness component V and a saturation component S; taking V as the initial illumination component and generating an illumination component L_i for each frame with an image brightness estimation method, then dividing each input frame by L_i to obtain a reflection component R_i; optimizing L_i with an S-curve that preserves image detail and obtaining the final illumination component L by a weighted average; denoising R_i with a non-local mean filter, optimizing the denoised reflection components by unsharp masking, and obtaining the final reflection component R by a weighted average; multiplying L and R to obtain the enhanced brightness component, converting from HSV color space back to RGB color space to obtain the enhanced image, and performing hue and color correction and white balance processing to obtain the final image. The invention can effectively solve the problems of detail feature loss and color distortion in multi-frame image fusion methods, giving the image a better visual effect.
Description
Technical Field
The invention relates to the field of image fusion and enhancement, in particular to a multi-frame fusion low-illumination image enhancement method based on Retinex theory.
Background
A low-illumination image has low overall brightness, a small range of gray-value variation between adjacent pixels, and a low dynamic range; visually, the image appears dark overall, detail information is hard to identify, noise is heavy, and color is poorly rendered. Image enhancement applies specific processing to a given image to purposefully emphasize its overall or local characteristics, turn an originally unclear image into a clear one or highlight features of interest, enlarge the differences between different object features, and suppress features of no interest, thereby improving image quality and enriching information content to meet particular application requirements. At present, scholars have studied the characteristics of images captured in low-brightness environments in depth and proposed a large number of targeted low-illumination image enhancement algorithms. According to their underlying principles, existing image enhancement algorithms can be roughly divided into five categories: methods based on histograms, gray-level transformation, the defogging theory, the Retinex theory, and image fusion; among them, fusion-based enhancement is currently an important development direction. Multi-frame fusion combines a sequence of frames into a single image according to certain rules to improve image quality. The technique is widely used in public security surveillance ("Skynet"), military reconnaissance, social media, and other fields. The simplest approach is to directly weight multiple frames, but the quality of the fused image is usually unsatisfactory, with loss of image detail, color distortion, and local over- or under-exposure.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, provides a method for enhancing a multi-frame fusion low-illumination image based on a Retinex theory, and can effectively solve the problems of loss of detail characteristics and color distortion in the multi-frame image fusion method, so that the obtained image has a better visual effect.
The purpose of the invention is realized by the following technical scheme.
The invention relates to a multi-frame fusion low-illumination image enhancement method based on the Retinex theory, which comprises the following processes:
the first step is as follows: converting an input low-illumination image from an RGB color space to an HSV color space to obtain a hue component H, a brightness component V and a saturation component S of the input image;
the second step: using the brightness component V obtained in the first step as the initial illumination component, generating the illumination component L_i of each frame with an image brightness estimation method, then dividing each input frame S_i by its corresponding illumination component L_i to obtain the reflection component R_i of each frame;
the third step: optimizing the illumination component L_i of each frame obtained in the second step with an S-curve that preserves image details, and obtaining the final illumination component L of the input image by taking a weighted average of the optimized illumination components of each frame;
the fourth step: denoising the reflection component R_i of each frame obtained in the second step with a non-local mean filter, optimizing each denoised reflection component by unsharp masking, and obtaining the final reflection component R of the input image by taking a weighted average of the optimized reflection components of each frame;
the fifth step: multiplying the illumination component L obtained in the third step by the reflection component R obtained in the fourth step to obtain the enhanced brightness component V'; combined with the hue component H and the saturation component S from the first step, converting the image from HSV color space back to RGB color space to obtain the enhanced image; then performing hue and color correction and white balance processing on the enhanced image to improve its visual effect and obtain the final image.
The illumination component L_i of each frame of image in the second step is obtained by solving the following equation:
$$L_i = \arg\min_{L}\; \|L - V\|_2^2 + \lambda\,\|M \circ \nabla L\|_1$$

wherein the index i represents the i-th frame image, ‖·‖₁ and ‖·‖₂ are respectively the 1-norm and the 2-norm, λ is a factor weighting the constraint term, ∇ is the gradient operator, ∘ denotes element-wise multiplication, and M is a weight matrix for edge-preserving smoothing, defined as:

$$M_d(n) = \frac{1}{\left|\sum_{m \in \omega(n)} \nabla_d V(m)\right| + \varepsilon}, \qquad d \in \{x, y\}$$

wherein d represents the gradient direction, x and y represent the horizontal and vertical directions respectively, ω(n) represents the sliding window at pixel n, m indexes the pixel points in the sliding window ω(n), and ε is a very small number that prevents the denominator from being 0.
The definition of the S-curve in the third step that can preserve image details is as follows:
$$I_{\mathrm{output}(i)} = f(I_{\mathrm{input}(i)}) + 2\,f(I_{\mathrm{input}(i)})\left(1 - f(I_{\mathrm{input}(i)})\right)\Delta I_{(i)}$$

wherein I_input(i) is the input image, i.e. the illumination component L_i of each frame; I_output(i) is the output image; ΔI_(i) = I_input(i) − I_F(i), where I_F(i) is obtained with a guided filter; and f(α) is an S-curve defined as follows:

$$f(\alpha) = \alpha + \varphi_s\, f_\Delta(\alpha) - \varphi_h\, f_\Delta(1 - \alpha)$$

wherein φ_s and φ_h represent the adjustment factors of the underexposed and overexposed areas respectively, and f_Δ(α) is an increment function defined as f_Δ(α) = β₁ α e^{−β₂ α^{β₃}}, usually taken with β₁ = 5, β₂ = 14, β₃ = 1.6.
In the fourth step, the process of denoising the reflection component R_i of each frame with a non-local mean filter is as follows:
each pixel point a in each frame's reflection component is represented, after non-local mean filtering, by the following formula:

$$\tilde{R}_i(a) = \sum_{b \in Q_i} w_i(a, b)\, R_i(b)$$

wherein Q_i represents the collection of pixels within an image sub-block, R_i(a) is pixel point a of the reflection component R_i, and w_i(a, b) is a weight obtained by calculating the Euclidean distance between the sub-image blocks centered on pixel points a and b in each frame, satisfying 0 ≤ w_i(a, b) ≤ 1 and Σ_b w_i(a, b) = 1; the weight is represented by the following formula:

$$w_i(a, b) = \frac{1}{Z_i(a)} \exp\left(-\frac{\|R_i(N_a) - R_i(N_b)\|_2^2}{h^2}\right)$$

wherein ‖R_i(N_a) − R_i(N_b)‖₂² is the squared Euclidean distance between the two image sub-blocks, N_a and N_b are neighborhoods centered on a and b, R_i(N_a) and R_i(N_b) are respectively the sets of pixels of the reflection component R_i in the regions centered on a and b, Z_i(a) is a normalization coefficient, and h represents a filter coefficient.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
the invention adopts a new fusion method, the low illumination image is converted from RGB color space to HSV color space, the multi-frame fusion (the second step to the fourth step) is carried out on the combined brightness component V by using an improved Retinex algorithm to achieve the purpose of enhancement, after the brightness component V is enhanced, the image is converted back to the RGB color space to obtain an enhanced image, and the color tone and color correction and white balance processing are carried out on the enhanced image to obtain a final image. The invention improves the brightness of the image, reserves more image details, avoids the distortion of image colors and ensures that the enhanced image is more in line with the visual effect.
Drawings
Fig. 1 is a flowchart of a multiframe fusion low-illumination image enhancement method based on Retinex theory.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Aiming at the problems of image detail loss and color distortion caused by the traditional multi-frame image fusion technology, the invention provides a multi-frame fusion low-illumination image enhancement method based on Retinex theory. The method can well improve the image contrast, retain more detailed characteristics and solve the problem of color distortion generated in the enhancement process.
The multi-frame fusion low-illumination image enhancement method based on the Retinex theory of the invention is shown in Figure 1; the specific implementation process is as follows:
the first step is as follows: converting the input low-illumination image from an RGB color space to an HSV color space to obtain a hue component H, a brightness component V and a saturation component S of the input image.
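The first step's conversion can be sketched in NumPy as follows. This is a minimal re-implementation of the standard RGB-to-HSV formulas purely for illustration; the patent does not prescribe a particular implementation, and in practice a library routine such as OpenCV's cvtColor would typically be used.

```python
import numpy as np

def rgb_to_hsv(img):
    """Convert an RGB image with values in [0, 1] to H, S, V (H in [0, 1))."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)                      # brightness V = max(R, G, B)
    c = v - img.min(axis=-1)                  # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)   # saturation
    # hue: piecewise by which channel holds the maximum
    safe_c = np.maximum(c, 1e-12)
    h = np.zeros_like(v)
    h = np.where(v == r, ((g - b) / safe_c) % 6, h)
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c == 0, 0.0, h) / 6.0        # gray pixels: hue defined as 0
    return h, s, v
```

V = max(R, G, B) is the channel that the second through fourth steps operate on; H and S are carried along unchanged until the fifth step.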
The second step: using the brightness component V obtained in the first step as the initial illumination component, the illumination component L_i of each frame of image is generated with the existing image brightness estimation method (LIME); each input frame S_i is then divided by its corresponding illumination component L_i to obtain the reflection component R_i of each frame.
The illumination component L_i of each frame image is obtained by solving the following equation:

$$L_i = \arg\min_{L}\; \|L - V\|_2^2 + \lambda\,\|M \circ \nabla L\|_1 \tag{1}$$

wherein the index i represents the i-th frame image, ‖·‖₁ and ‖·‖₂ are respectively the 1-norm and the 2-norm, λ is a factor weighting the constraint term, ∇ is the gradient operator, ∘ denotes element-wise multiplication, and M is a weight matrix for edge-preserving smoothing, which ensures local consistency within regions of similar structure in the illumination component, so that non-edge regions of the illumination image are smoothed while edges are kept unsmoothed; it is defined as:

$$M_d(n) = \frac{1}{\left|\sum_{m \in \omega(n)} \nabla_d V(m)\right| + \varepsilon}, \qquad d \in \{x, y\} \tag{2}$$

wherein d represents the gradient direction, x and y represent the horizontal and vertical directions respectively, ω(n) represents the sliding window at pixel n, m indexes the pixel points in the sliding window ω(n), and ε is a very small number that prevents the denominator in equation (2) from being 0.
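The weight matrix of equation (2) can be sketched as follows; the forward-difference gradients, the edge-mode padding, and the default window size are implementation assumptions not fixed by the text.

```python
import numpy as np

def box_sum(a, win):
    """Sliding-window sum over a win x win window, with edge padding."""
    pad = win // 2
    ap = np.pad(a, pad, mode="edge")
    c = np.cumsum(np.cumsum(ap, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # prepend a zero row and column
    H, W = a.shape
    return (c[win:win + H, win:win + W] - c[:H, win:win + W]
            - c[win:win + H, :W] + c[:H, :W])

def edge_weights(v, win=5, eps=1e-3):
    """Weight matrix M of Eq. (2): M_d(n) = 1 / (|sum of grad_d V over window| + eps).
    Small near strong edges (so they stay unsmoothed), large in flat regions."""
    gx = np.diff(v, axis=1, append=v[:, -1:])   # forward difference, d = x
    gy = np.diff(v, axis=0, append=v[-1:, :])   # forward difference, d = y
    mx = 1.0 / (np.abs(box_sum(gx, win)) + eps)
    my = 1.0 / (np.abs(box_sum(gy, win)) + eps)
    return mx, my
```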
The third step: illumination component L_i optimization
The illumination component L_i of each frame obtained by the above processing may still contain many over-exposed or under-exposed areas, so an exposure-correcting S-curve is used to optimize the illumination component L_i of each frame obtained in the second step; the S-curve that preserves image details is defined as equation (3):

$$I_{\mathrm{output}(i)} = f(I_{\mathrm{input}(i)}) + 2\,f(I_{\mathrm{input}(i)})\left(1 - f(I_{\mathrm{input}(i)})\right)\Delta I_{(i)} \tag{3}$$
wherein I_input(i) is the input image, i.e. the illumination component L_i of each frame; I_output(i) is the output image; ΔI_(i) = I_input(i) − I_F(i), where I_F(i) is obtained with a guided filter in order to suppress the halo effect; and f(α) is an S-curve defined as equation (4):

$$f(\alpha) = \alpha + \varphi_s\, f_\Delta(\alpha) - \varphi_h\, f_\Delta(1 - \alpha) \tag{4}$$

wherein φ_s and φ_h represent the adjustment factors of the underexposed and overexposed areas respectively, and f_Δ(α) is an increment function defined as f_Δ(α) = β₁ α e^{−β₂ α^{β₃}}, usually taken with β₁ = 5, β₂ = 14, β₃ = 1.6.
Finally, the final illumination component L of the input image is obtained from the optimized illumination components I_output(i) of each frame by a weighted average.
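Equations (3) and (4) can be sketched as follows. The closed form of f(α) is not fully reproduced in the text above, so this block follows the exposure-correction S-curve of Yuan and Sun, which the stated constants β₁ = 5, β₂ = 14, β₃ = 1.6 match; the shadow and highlight factors phi_s and phi_h, and their default values, are assumptions.

```python
import numpy as np

B1, B2, B3 = 5.0, 14.0, 1.6      # beta_1, beta_2, beta_3 given in the text

def f_delta(a):
    """Increment function f_delta(a) = beta1 * a * exp(-beta2 * a**beta3)."""
    return B1 * a * np.exp(-B2 * a ** B3)

def s_curve(a, phi_s=0.3, phi_h=0.3):
    """S-curve f(a) = a + phi_s*f_delta(a) - phi_h*f_delta(1 - a):
    phi_s lifts underexposed (dark) values, phi_h pulls back overexposed ones."""
    return a + phi_s * f_delta(a) - phi_h * f_delta(1.0 - a)

def detail_preserving(l, l_filtered, phi_s=0.3, phi_h=0.3):
    """Equation (3): I_out = f(I_in) + 2*f(I_in)*(1 - f(I_in))*(I_in - I_F),
    where I_F is a smoothed (guided-filtered) copy of the illumination."""
    fa = s_curve(l, phi_s, phi_h)
    return fa + 2.0 * fa * (1.0 - fa) * (l - l_filtered)
```

Note the detail term vanishes when the input equals its filtered copy, so flat regions get plain exposure correction while the residual I_in − I_F re-injects local detail.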
The fourth step: reflection component R optimization
The reflection component R_i of each frame is then optimized to obtain the final reflection component R of the input image. Because the reflection component R_i contains a large amount of noise, the invention mainly focuses on denoising R_i. Conventional filters tend to lose edge detail during denoising, so a non-local mean filter, which has better edge-retention capability, is adopted to denoise the reflection component R_i of each frame obtained in the second step. After non-local mean filtering, each pixel point a of each frame's reflection component can be represented by equation (5):
$$\tilde{R}_i(a) = \sum_{b \in Q_i} w_i(a, b)\, R_i(b) \tag{5}$$

wherein Q_i represents the collection of pixels within an image sub-block, R_i(a) is pixel point a of the reflection component R_i, and w_i(a, b) is a weight obtained by calculating the Euclidean distance between the sub-image blocks centered on pixel points a and b in each frame, satisfying 0 ≤ w_i(a, b) ≤ 1 and Σ_b w_i(a, b) = 1; the weight may be represented by equation (6):

$$w_i(a, b) = \frac{1}{Z_i(a)} \exp\left(-\frac{\|R_i(N_a) - R_i(N_b)\|_2^2}{h^2}\right) \tag{6}$$

wherein ‖R_i(N_a) − R_i(N_b)‖₂² is the squared Euclidean distance between the two image sub-blocks, N_a and N_b are neighborhoods centered on a and b, R_i(N_a) and R_i(N_b) are respectively the sets of pixels of the reflection component R_i in the regions centered on a and b, Z_i(a) is a normalization coefficient, and h represents a filter coefficient, generally taking the value 10.
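Equations (5) and (6) amount to the standard non-local means filter, which can be sketched as follows. The patch size, search-window size, and filter coefficient h are illustrative defaults (the text suggests h around 10 for 8-bit intensities; here intensities are assumed to lie in [0, 1]).

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Non-local means sketch of equations (5)-(6): each pixel becomes a
    weighted average of pixels in a search window, with weights computed
    from the distance between the patches around the two pixels."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    sr = search // 2
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            pa = padded[i:i + patch, j:j + patch]        # patch N_a around a
            num = 0.0
            den = 0.0                                    # Z_i(a) in Eq. (6)
            for bi in range(max(0, i - sr), min(H, i + sr + 1)):
                for bj in range(max(0, j - sr), min(W, j + sr + 1)):
                    pb = padded[bi:bi + patch, bj:bj + patch]   # patch N_b
                    d2 = np.mean((pa - pb) ** 2)         # squared patch distance
                    w = np.exp(-d2 / h ** 2)             # unnormalized w_i(a, b)
                    num += w * img[bi, bj]
                    den += w
            out[i, j] = num / den                        # normalized average
    return out
```

The double loop keeps the sketch readable; a practical implementation would vectorize or use a library routine such as scikit-image's denoise_nl_means.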
Then, each denoised reflection component is optimized by unsharp masking, which boosts the high-frequency components of the denoised image and enhances image edges; the final reflection component R of the input image is obtained from the optimized reflection components of each frame by a weighted average.
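The unsharp-masking refinement can be sketched as follows; the Gaussian blur radius and the boost amount are illustrative assumptions, since the text does not fix them.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur with reflect padding (stand-in for a library call)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    # blur rows, then columns
    p = np.pad(img, ((0, 0), (r, r)), mode="reflect")
    tmp = np.array([np.convolve(row, k, mode="valid") for row in p])
    p = np.pad(tmp, ((r, r), (0, 0)), mode="reflect")
    return np.array([np.convolve(col, k, mode="valid") for col in p.T]).T

def unsharp_mask(img, sigma=1.0, amount=0.5):
    """Unsharp masking: R' = R + amount * (R - blur(R)) boosts high frequencies,
    i.e. the edge detail that survived the non-local means denoising."""
    return img + amount * (img - gaussian_blur(img, sigma))
```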
The fifth step: multiplying the illumination component L obtained in the third step by the reflection component R obtained in the fourth step to obtain the enhanced brightness component V'; combined with the hue component H and the saturation component S from the first step, the image is converted from HSV color space back to RGB color space to obtain the enhanced image; hue and color correction and white balance processing are then applied to the enhanced image to improve its visual effect and obtain the final image.
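The patent does not name the specific white-balance algorithm used in the fifth step; a gray-world correction is one common choice and is sketched below purely as an assumption.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balance: scale each channel so that the three channel
    means become equal to their common average, removing a global color cast."""
    means = rgb.reshape(-1, 3).mean(axis=0)          # per-channel means
    gain = means.mean() / np.maximum(means, 1e-12)   # per-channel gains
    return np.clip(rgb * gain, 0.0, 1.0)
```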
While the present invention has been described in terms of its functions and operations with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise functions and operations described above, and that the above-described embodiments are illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined by the appended claims.
Claims (4)
1. A multi-frame fusion low-illumination image enhancement method based on the Retinex theory, characterized by comprising the following processes:
the first step is as follows: converting an input low-illumination image from an RGB color space to an HSV color space to obtain a hue component H, a brightness component V and a saturation component S of the input image;
the second step: using the brightness component V obtained in the first step as the initial illumination component, generating the illumination component L_i of each frame with an image brightness estimation method, then dividing each input frame S_i by its corresponding illumination component L_i to obtain the reflection component R_i of each frame;
the third step: optimizing the illumination component L_i of each frame obtained in the second step with an S-curve that preserves image details, and obtaining the final illumination component L of the input image by taking a weighted average of the optimized illumination components of each frame;
the fourth step: denoising the reflection component R_i of each frame obtained in the second step with a non-local mean filter, optimizing each denoised reflection component by unsharp masking, and obtaining the final reflection component R of the input image by taking a weighted average of the optimized reflection components of each frame;
the fifth step: multiplying the illumination component L obtained in the third step by the reflection component R obtained in the fourth step to obtain the enhanced brightness component V'; combined with the hue component H and the saturation component S from the first step, converting the image from HSV color space back to RGB color space to obtain the enhanced image; then performing hue and color correction and white balance processing on the enhanced image to improve its visual effect and obtain the final image.
2. The method of claim 1, wherein the illumination component L_i of each frame of image in the second step is obtained by solving the following equation:

$$L_i = \arg\min_{L}\; \|L - V\|_2^2 + \lambda\,\|M \circ \nabla L\|_1$$

wherein the index i represents the i-th frame image, ‖·‖₁ and ‖·‖₂ are respectively the 1-norm and the 2-norm, λ is a factor weighting the constraint term, ∇ is the gradient operator, ∘ denotes element-wise multiplication, and M is a weight matrix for edge-preserving smoothing, defined as follows:

$$M_d(n) = \frac{1}{\left|\sum_{m \in \omega(n)} \nabla_d V(m)\right| + \varepsilon}, \qquad d \in \{x, y\}$$

wherein d represents the gradient direction, x and y represent the horizontal and vertical directions respectively, ω(n) represents the sliding window at pixel n, m is a pixel point in the sliding window ω(n), and ε is a very small number that prevents the denominator from being 0.
3. The multi-frame fusion low-illumination image enhancement method based on Retinex theory according to claim 1, characterized in that the S-curve for preserving image details in the third step is defined as follows:

$$I_{\mathrm{output}(i)} = f(I_{\mathrm{input}(i)}) + 2\,f(I_{\mathrm{input}(i)})\left(1 - f(I_{\mathrm{input}(i)})\right)\Delta I_{(i)}$$

wherein I_input(i) is the input image, i.e. the illumination component L_i of each frame; I_output(i) is the output image; ΔI_(i) = I_input(i) − I_F(i), where I_F(i) is obtained with a guided filter; and f(α) is the S-curve defined as follows:

$$f(\alpha) = \alpha + \varphi_s\, f_\Delta(\alpha) - \varphi_h\, f_\Delta(1 - \alpha), \qquad f_\Delta(\alpha) = \beta_1\, \alpha\, e^{-\beta_2 \alpha^{\beta_3}}$$
4. The multi-frame fusion low-illumination image enhancement method based on Retinex theory according to claim 1, characterized in that the fourth step adopts a non-local mean filter to denoise the reflection component R_i of each frame of image, the denoising process being as follows:
each pixel point a in each frame's reflection component is represented, after non-local mean filtering, by the following formula:

$$\tilde{R}_i(a) = \sum_{b \in Q_i} w_i(a, b)\, R_i(b)$$

wherein Q_i represents the set of pixels within an image sub-block, R_i(a) is pixel point a of the reflection component R_i, and w_i(a, b) is a weight obtained by calculating the Euclidean distance between the sub-image blocks centered on pixel points a and b in each frame, satisfying 0 ≤ w_i(a, b) ≤ 1 and Σ_b w_i(a, b) = 1; the weight is represented by the following formula:

$$w_i(a, b) = \frac{1}{Z_i(a)} \exp\left(-\frac{\|R_i(N_a) - R_i(N_b)\|_2^2}{h^2}\right)$$

wherein ‖R_i(N_a) − R_i(N_b)‖₂² is the squared Euclidean distance between the two image sub-blocks, N_a and N_b are neighborhoods centered on a and b, R_i(N_a) and R_i(N_b) are respectively the sets of pixels of the reflection component R_i in the regions centered on a and b, Z_i(a) is a normalization coefficient, and h represents a filter coefficient.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202211519235.2A | 2022-11-30 | 2022-11-30 | Multi-frame fusion low-illumination image enhancement method based on Retinex theory |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN115937024A (en) | 2023-04-07 |
Family
ID=86698562
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202211519235.2A | Multi-frame fusion low-illumination image enhancement method based on Retinex theory (Pending) | 2022-11-30 | 2022-11-30 |
Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN115937024A (en) |
Cited By (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN116703794A | 2023-06-06 | 2023-09-05 | 深圳市歌华智能科技有限公司 | Multi-image fusion method in HSV color space |
| CN116703794B | 2023-06-06 | 2024-04-30 | 深圳市歌华智能科技有限公司 | Multi-image fusion method in HSV color space |
2022-11-30: CN application CN202211519235.2A filed; published as CN115937024A (status: Pending)
Legal Events

| Date | Code | Title |
| --- | --- | --- |
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |