CN113160066A - Low-illumination image efficient enhancement method - Google Patents
- Publication number
- CN113160066A CN113160066A CN202110029801.0A CN202110029801A CN113160066A CN 113160066 A CN113160066 A CN 113160066A CN 202110029801 A CN202110029801 A CN 202110029801A CN 113160066 A CN113160066 A CN 113160066A
- Authority
- CN
- China
- Prior art keywords
- image
- low
- illumination
- images
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an efficient enhancement method for low-illumination images. First, each pixel value of the image is adjusted with two designed transformation functions, yielding two enhanced images; the two enhanced images are then fused into a single enhanced image; the enhanced image is denoised with a guided filter; finally, the denoised image is sharpened to bring out its details. The method improves image brightness and contrast, and the algorithm is simple with a small computational cost, which favors real-time processing. Compared with traditional contrast-enhancement algorithms, the method enhances the brightness and contrast of a low-illumination image without over-enhancement, and makes image details more visible, giving the image a good visual effect.
Description
Technical Field
The invention relates to a low-illumination image efficient enhancement method, and belongs to the technical field of image processing.
Background
Ideally, images or videos are acquired under sufficient light, but in practice they often must be captured in low-illumination environments, for example indoors or at night. Under such conditions, artificial light sources provide insufficient illumination, the light reflected from target surfaces is weak, and too little light reaches the imaging sensor. Photographs taken in low-illumination environments are affected by the ambient light and by the capture device: visibility is poor, the image content is hard to recognize, details are difficult to distinguish, the visual effect suffers, and subsequent processing is impaired. There is therefore a need to enhance the brightness and contrast of low-illumination images so that the processed images have better visibility.
Conventional methods for enhancing image brightness and contrast include linear stretching, logarithmic transformation, and gamma transformation. Linear stretching maps the gray levels of the image linearly onto 0-255; while it enhances contrast, it also stretches the bright regions of the image, which can cause local overexposure. Logarithmic and gamma transformations expand the pixel values of dark regions but compress those of bright regions, so detail in the bright regions is lost.
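For reference, the three conventional transforms above can be sketched as follows. This is an illustrative implementation of the standard techniques, not the patent's method; function names are my own.

```python
import numpy as np

def linear_stretch(img):
    """Linearly stretch gray levels onto the full [0, 255] range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return np.round((img - lo) / (hi - lo) * 255.0).astype(np.uint8)

def log_transform(img, c=255.0 / np.log(256.0)):
    """Logarithmic transform: expands dark values, compresses bright ones."""
    return np.round(c * np.log1p(img.astype(np.float64))).astype(np.uint8)

def gamma_transform(img, gamma=0.4545):
    """Gamma transform on a [0, 255] image; gamma < 1 brightens dark regions."""
    norm = img.astype(np.float64) / 255.0
    return np.round(255.0 * norm ** gamma).astype(np.uint8)
```

Note how both the logarithmic and gamma curves keep 0 and 255 fixed but flatten near the top of the range, which is exactly the bright-region compression the text describes.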
Guided filtering, an edge-preserving filter based on a local linear model, was first proposed by He et al. at the 2010 European Conference on Computer Vision. Its basic principle is that the output image is obtained from the input image and a guide image through local linear transformations: it keeps a structure similar to the input image as a whole while retaining the edge information and texture characteristics of the guide image. Guided filtering is mainly applied to image smoothing and denoising, image dehazing, and similar tasks. Prior work has applied guided filtering to low-illumination image enhancement algorithms with good results.
Disclosure of Invention
In order to solve the problems, the invention provides a low-illumination image efficient enhancement method.
The invention adopts the following technical scheme for solving the technical problems:
a low-illumination image efficient enhancement method comprises the following specific steps:
S1, carrying out normalization processing on the low-illumination image to obtain an image I;
S2, transforming the image I with the following two transformation functions, respectively, to obtain two enhanced images I1 and I2, where the two transformation functions are:
where I(x, y), I1(x, y) and I2(x, y) are the pixel values at coordinate (x, y) in images I, I1 and I2 respectively, and γ is the exponent of the gamma transformation;
S3, fusing the two enhanced images I1 and I2 to obtain an image out;
S4, denoising the image out with a guided filter to obtain an image q1;
S5, sharpening the image q1; the sharpening result is the final output image Iout.
Further, the normalization in S1 specifically maps the pixel values of the low-illumination image from [0, 255] into the [0, 1] interval.
Further, the fusion formula in S3 is out(x, y) = (1 − a)·I1(x, y) + a·I2(x, y), where out(x, y) is the pixel value at coordinate (x, y) in the image out, a is a weighting coefficient, a = mean{I}, and mean{I} denotes the pixel mean of the image I.
Further, S3 includes inverse normalizing the fused image.
Further, the inverse normalization specifically includes: and multiplying the pixel value of the pixel point in the fused image by 255 and rounding down.
Further, the input image is image out in S4, and the guide image is image I.
Further, the sharpening formula in S5 is: Iout = λ(q1 − q2) + q2, where the parameter λ > 1 and q2 is the result of denoising the image q1 with a guided filter.
Further, the guide image and the input image are both the image q1.
Compared with the prior art, the invention adopting the above technical scheme has the following technical effects. The invention uses two different transformation functions to perform fusion-based enhancement of the low-illumination image; the enhanced image is then denoised with a guided filter; finally, the denoised image is sharpened. The method improves image brightness and contrast, and the algorithm is simple with a small computational cost, which favors real-time processing. Compared with traditional contrast-enhancement algorithms, the method enhances the brightness and contrast of a low-illumination image without over-enhancement, and makes image details more visible, giving the image a good visual effect.
Drawings
FIG. 1 is a flow chart of a method for enhancing low-illumination image efficiency according to the present invention;
fig. 2(a) is a low illumination image;
FIG. 2(b) is the image of FIG. 2(a) after undergoing the PIE algorithm;
FIG. 2(c) is the image of FIG. 2(a) after the NPE _ TIP algorithm;
FIG. 2(d) is the image of FIG. 2(a) obtained by the algorithm of the present invention;
FIG. 3(a) is an old photograph;
FIG. 3(b) is the image of FIG. 3(a) after the PIE algorithm;
FIG. 3(c) is the image of FIG. 3(a) after the NPE _ TIP algorithm;
FIG. 3(d) is the image of FIG. 3(a) obtained by the algorithm of the present invention.
Detailed Description
The invention and its advantageous effects are explained in detail below with reference to the drawings and the respective embodiments.
As shown in fig. 1, the low-illuminance image enhancement method of the present invention includes:
firstly, acquiring a low-illumination image, and performing normalization operation on all pixel values to obtain an image I.
The low-illumination image can be captured with a digital camera in an environment with insufficient illumination, yielding an original low-illumination image; the image produced by a digital camera is generally a digital image. The camera may be part of a digital still camera, a smartphone, a tablet computer, any of various surveillance cameras, and the like. Besides capturing images directly with a camera, original digital images can also be obtained from others by copying or network transfer.
The normalization here can be implemented with known techniques; its purpose is to map all pixel values of the low-illumination image from [0, 255] into the [0, 1] interval. For example, the im2double() function normalizes all pixel values of the acquired low-illumination image from [0, 255] into [0, 1]. Other well-known methods for normalizing pixel values from [0, 255] to the [0, 1] interval may be used instead and are not enumerated here.
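A one-line Python sketch of this step, mirroring what MATLAB's im2double() does for uint8 input (the function name is my own):

```python
import numpy as np

def normalize(img_uint8):
    """Map uint8 pixel values from [0, 255] to floats in [0, 1]."""
    return img_uint8.astype(np.float64) / 255.0
```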
And secondly, processing each normalized pixel value by using two different designed transformation functions respectively, thereby obtaining two enhanced images.
Two different transformation functions are shown in equation 1 below:
where (x, y) is the pixel position, I(x, y) is the pixel value before transformation, and I1(x, y) and I2(x, y) are the pixel values after processing by transform function 1 and transform function 2, respectively. γ is the exponent of the gamma transformation; as in the conventional gamma transform, we set γ = 0.4545.
Through the above transformation, dark pixel values are increased and bright pixel values are decreased while the values 0 and 1 remain fixed; pixel values in (0, 1) thus undergo a nonlinear mapping that improves the brightness and contrast of the image. Since the normalized pixel values lie in the [0, 1] interval, each value is compared with 0.5: values below 0.5 are treated as dark pixel values and values above 0.5 as bright pixel values.
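The two transformation functions themselves appear only as equation images in the original and did not survive extraction; only the exponent γ = 0.4545 is stated in the text. The sketch below is therefore illustrative only: transform_1 is a plain gamma curve, and transform_2 is a hypothetical companion curve that also keeps 0 and 1 fixed. Neither should be read as the patent's actual formulas.

```python
import numpy as np

GAMMA = 0.4545  # exponent stated in the description

def transform_1(I):
    """Gamma-style brightening on a [0, 1] image: fixes 0 and 1,
    raises intermediate values."""
    return I ** GAMMA

def transform_2(I):
    """Hypothetical second curve (the patent's formula is not recoverable):
    a milder companion that also maps 0 to 0 and 1 to 1."""
    return 1.0 - (1.0 - I) ** (1.0 / GAMMA)
```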
And thirdly, carrying out fusion processing and inverse normalization on the two enhanced images to obtain an enhanced image out.
The fusion mode is as follows:
out(x,y)=(1-a)I1(x,y)+aI2(x,y) (2)
where out(x, y) is the pixel value after the fusion processing, and a = mean{I} denotes the pixel mean of the image I.
Based on formula 2, all pixel values on the two enhanced images are traversed to perform calculation, so that all pixel values on the fused image obtained after fusion processing are obtained.
All pixel values of the fused image are then inverse-normalized as follows: each pixel value of the fused image is multiplied by 255 and rounded down, yielding the new enhanced image.
The rounding-down operation may be performed by using a well-known rounding-down function, which refers to taking an integer smaller than the calculation result when the calculation result is not an integer, and directly taking an integer when the calculation result is an integer.
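The fusion of equation 2 followed by the inverse normalization above can be sketched as (the function name is my own):

```python
import numpy as np

def fuse_and_denormalize(I1, I2, I):
    """Weighted fusion out = (1 - a)*I1 + a*I2 with a = mean(I) (equation 2),
    then map back to [0, 255] with floor rounding."""
    a = I.mean()
    out = (1.0 - a) * I1 + a * I2
    return np.floor(out * 255.0).astype(np.uint8)
```

Because a is the mean brightness of I, a darker input image weights the fusion toward I1, and a brighter one toward I2.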
And fourthly, denoising the image out by using the guide filtering.
The input image is the image out and the guide image is the image I, with a filter radius of 8 and ε = 0.0004. Suppose that within a window Ωr of radius r centered on a pixel point k, the output image q1 and the guide image I satisfy the linear relationship:

q1i = ak·Ii + bk, for all i ∈ Ωr(k) (3)

where i denotes the i-th pixel point, Ii is the pixel value of the i-th pixel point of the image I, the linear coefficients (ak, bk) are constants within the window, q1 is the denoised image, and q1i is the pixel value of the i-th pixel point of q1.

Taking the gradient of equation 3 gives ∇q1 = ak·∇I; that is, the edge information of the output image and that of the guide image are linearly related. The linear coefficients are obtained by minimizing a cost function, defined as:

E(ak, bk) = Σ over i ∈ Ωr(k) of [ (ak·Ii + bk − outi)² + ε·ak² ] (4)

Solving equation 4 for the linear coefficients (ak, bk) gives:

ak = ( μI·out,r(k) − μI,r(k)·μout,r(k) ) / ( σ²I,r(k) + ε ) (5)

bk = μout,r(k) − ak·μI,r(k) (6)

where μ denotes a mean and · denotes element-wise multiplication: μI·out,r(k) is the mean of I·out within Ωr, μout,r(k) is the mean of out within Ωr, μI,r(k) is the mean of I within Ωr, and σ²I,r(k) is the variance of I within Ωr.

Because the linear coefficients (ak, bk) solved in each window containing a pixel point i take different values, the (ak, bk) values of all windows containing i are first averaged, and the result is used as the (ak, bk) of pixel point i; equation 3 is therefore redefined as:

q1i = āi·Ii + b̄i, where āi = (1/|Ωr(i)|) Σ over k ∈ Ωr(i) of ak and b̄i = (1/|Ωr(i)|) Σ over k ∈ Ωr(i) of bk (7)

where |Ωr(i)| is the number of pixels in the window Ωr(i).
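The closed-form solution of equations 3-7 reduces to a handful of box-filter means, which is what makes the guided filter fast. A minimal NumPy/SciPy sketch, with the radius and ε values given in the text (function and variable names are my own):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=4e-4):
    """Guided filter (He et al., 2010) per equations 3-7.
    guide, src: 2-D float arrays in [0, 1]."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size, mode='reflect')
    mu_I = mean(guide)                          # mu_{I,r}
    mu_p = mean(src)                            # mu_{out,r}
    corr_Ip = mean(guide * src)                 # mu_{I.out,r}
    var_I = mean(guide * guide) - mu_I ** 2     # sigma^2_{I,r}
    a = (corr_Ip - mu_I * mu_p) / (var_I + eps)  # equation 5
    b = mu_p - a * mu_I                          # equation 6
    # average (a_k, b_k) over all windows containing each pixel (equation 7)
    return mean(a) * guide + mean(b)
```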
In the fifth step, the image q1 is sharpened in order to enhance the details of the image and increase contrast.
The base layer is computed with a guided filter; the detail layer is the image q1 minus the base layer.
Specifically, when computing the base layer, the guide image and the input image are both the image q1; the formulas used for guided filtering are the same as those given above, and the result is q2.
Specifically, the formula for sharpening is:
Iout=λ(q1-q2)+q2,λ>1 (8)
specifically, the best results are obtained when λ ∈ [2, 3 ].
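The sharpening step of equation 8 can be sketched as follows (the clip to [0, 1] is my own safeguard, not stated in the text):

```python
import numpy as np

def sharpen(q1, q2, lam=2.5):
    """Detail boost Iout = lam*(q1 - q2) + q2 (equation 8), lam in [2, 3].
    q2 is q1 smoothed by a guided filter using q1 as its own guide."""
    return np.clip(lam * (q1 - q2) + q2, 0.0, 1.0)
```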
Image enhancement time versus quality assessment:
To verify that the method of the invention processes low-illumination images faster than the PIE algorithm and the NPE_TIP algorithm, 500 images were collected from the Internet and processed by each of the three algorithms; the running time of each algorithm was measured, and for ease of comparison the average time over the 500 images is reported for each algorithm, as shown in Table 1.
TABLE 1 Average run time of the three algorithms

| | PIE | NPE_TIP | The invention |
|---|---|---|---|
| Mean (seconds) | 1.064405 | 11.09669 | 0.87303 |
In order to verify that the enhancement effect of the method of the invention on the low-illumination image is better than that of the PIE algorithm and the NPE _ TIP algorithm, the three enhancement algorithms are used for respectively enhancing the two low-illumination images, and the enhanced images are compared.
Figure 2 compares the results of the several enhancement algorithms: the brightness and color of the low-illumination image in fig. 2(a) are greatly improved by all of them, with a better visual effect. As shown in fig. 2(b), PIE improves the brightness, but not enough; as shown in fig. 2(c), the image enhanced by the NPE_TIP algorithm is brighter, and the pillar region of the picture is brighter than in the PIE result, but this method also introduces noise; as shown in fig. 2(d), the image enhanced by the algorithm of the invention surpasses PIE and NPE_TIP in brightness, color, and detail.
For the old photograph in fig. 3(a), PIE improves the quality as shown in fig. 3(b), but parts of the image remain dark; the image enhanced by the NPE_TIP algorithm, shown in fig. 3(c), is brighter than the PIE result; and the result of the algorithm of the invention, shown in fig. 3(d), has better overall brightness and more detail than both.
In conclusion, the image quality of the enhanced algorithm provided by the invention is better than that of PIE and NPE _ TIP.
Finally, it should be pointed out that: the above examples are only for illustrating the technical solutions of the present invention, and are not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. A low-illumination image efficient enhancement method is characterized by comprising the following specific steps:
S1, carrying out normalization processing on the low-illumination image to obtain an image I;
S2, transforming the image I with the following two transformation functions, respectively, to obtain two enhanced images I1 and I2, where the two transformation functions are:
where I(x, y), I1(x, y) and I2(x, y) are the pixel values at coordinate (x, y) in images I, I1 and I2 respectively, and γ is the exponent of the gamma transformation;
S3, fusing the two enhanced images I1 and I2 to obtain an image out;
S4, denoising the image out with a guided filter to obtain an image q1;
S5, sharpening the image q1; the sharpening result is the final output image Iout.
2. The method as claimed in claim 1, wherein the normalization in S1 specifically maps the pixel values of the low-illumination image from [0, 255] into the [0, 1] interval.
3. The method of claim 1, wherein the fusion formula in S3 is out(x, y) = (1 − a)·I1(x, y) + a·I2(x, y), where out(x, y) is the pixel value at coordinate (x, y) in the image out, a is a weighting coefficient, a = mean{I}, and mean{I} denotes the pixel mean of the image I.
4. The method of claim 1, wherein the step of S3 further comprises inverse normalizing the fused image.
5. The method of claim 4, wherein the inverse normalization specifically comprises: and multiplying the pixel value of the pixel point in the fused image by 255 and rounding down.
6. The method of claim 1, wherein the input image in S4 is image out, and the guide image is image I.
7. The method for efficiently enhancing the low-illumination image according to claim 1, wherein the sharpening formula in S5 is: Iout = λ(q1 − q2) + q2, where the parameter λ > 1 and q2 is the result of denoising the image q1 with a guided filter.
8. The method of claim 7, wherein the guide image and the input image are both the image q1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110029801.0A CN113160066A (en) | 2021-01-11 | 2021-01-11 | Low-illumination image efficient enhancement method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110029801.0A CN113160066A (en) | 2021-01-11 | 2021-01-11 | Low-illumination image efficient enhancement method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113160066A true CN113160066A (en) | 2021-07-23 |
Family
ID=76878304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110029801.0A Withdrawn CN113160066A (en) | 2021-01-11 | 2021-01-11 | Low-illumination image efficient enhancement method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113160066A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114298916A (en) * | 2021-11-11 | 2022-04-08 | 电子科技大学 | X-Ray image enhancement method based on gray stretching and local enhancement |
CN118658194A (en) * | 2024-08-20 | 2024-09-17 | 成都贝尔通讯实业有限公司 | Smart community management method, system and program product |
CN118658194B (en) * | 2024-08-20 | 2024-10-29 | 成都贝尔通讯实业有限公司 | Smart community management method, system and program product |
-
2021
- 2021-01-11 CN CN202110029801.0A patent/CN113160066A/en not_active Withdrawn
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11127122B2 (en) | Image enhancement method and system | |
Wang et al. | Simple low-light image enhancement based on Weber–Fechner law in logarithmic space | |
CN111986120A (en) | Low-illumination image enhancement optimization method based on frame accumulation and multi-scale Retinex | |
CN106897981A (en) | A kind of enhancement method of low-illumination image based on guiding filtering | |
CN110232670B (en) | Method for enhancing visual effect of image based on high-low frequency separation | |
CN116823686B (en) | Night infrared and visible light image fusion method based on image enhancement | |
CN110163807B (en) | Low-illumination image enhancement method based on expected bright channel | |
CN108564544A (en) | Image Blind deblurring based on edge perception combines sparse optimization method | |
Mu et al. | Low and non-uniform illumination color image enhancement using weighted guided image filtering | |
CN115578297A (en) | Generalized attenuation image enhancement method for self-adaptive color compensation and detail optimization | |
CN117252773A (en) | Image enhancement method and system based on self-adaptive color correction and guided filtering | |
CN116993616A (en) | Single low-illumination scene image enhancement method and enhancement system | |
CN110298796B (en) | Low-illumination image enhancement method based on improved Retinex and logarithmic image processing | |
CN114693548A (en) | Dark channel defogging method based on bright area detection | |
CN112614063B (en) | Image enhancement and noise self-adaptive removal method for low-illumination environment in building | |
Mi et al. | A generalized enhancement framework for hazy images with complex illumination | |
CN107610072B (en) | Adaptive noise reduction method for low-light-level video image based on gradient guided filtering | |
CN113160066A (en) | Low-illumination image efficient enhancement method | |
CN117830134A (en) | Infrared image enhancement method and system based on mixed filtering decomposition and image fusion | |
CN117876233A (en) | Mapping image enhancement method based on unmanned aerial vehicle remote sensing technology | |
CN117611467A (en) | Low-light image enhancement method capable of balancing details and brightness of different areas simultaneously | |
CN117391987A (en) | Dim light image processing method based on multi-stage joint enhancement mechanism | |
CN112308793A (en) | Novel method for enhancing contrast and detail of non-uniform illumination image | |
CN116029916A (en) | Low-illumination image enhancement method based on dual-branch network combined with dense wavelet | |
CN114429426B (en) | Low-illumination image quality improvement method based on Retinex model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20210723 |
|
WW01 | Invention patent application withdrawn after publication |