CN107527332A - Low-illumination image color-preserving enhancement method based on improved Retinex - Google Patents
- Publication number: CN107527332A
- Application number: CN201710944257.6A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/00 — Image enhancement or restoration
- G06T5/73 — Deblurring; Sharpening
- G06T7/90 — Determination of colour characteristics
- G06T2207/10024 — Color image
- G06T2207/20024 — Filtering details
- G06T2207/20172 — Image enhancement details
- G06T2207/20192 — Edge enhancement; Edge preservation
Abstract
The invention discloses a low-illumination image color-preserving enhancement method based on improved Retinex, carried out in the following steps: convert the low-illumination filter input image I from the RGB color space to the YUV color space, using a grayscale image as the filter input I; obtain the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image using the improved Retinex method; compute the UV components of the enhanced image; convert the enhanced image from the YUV color space back to the RGB color space and apply feedback-based color-component enhancement to out-of-range pixels, yielding the enhanced low-illumination color image. The method fuses fast guided filtering, single-scale Retinex, and multi-channel color-preserving enhancement; the enhanced image has rich color and clearly improved brightness and detail, solving the prior-art problems of high time cost and the tendency to produce halos, blurred details, and color distortion.
Description
Technical Field
The invention belongs to the technical field of image processing, and relates to a low-illumination image color retention enhancement method based on improved Retinex.
Background
In fields such as video surveillance, intelligent transportation, and all-weather combat, low-illumination images of poor imaging quality are inevitably acquired. Because of uneven illumination distribution or a lack of light sources, these images are severely degraded in brightness, contrast, and detail. Such degraded images not only make it difficult to extract enough useful information and harm the visual effect, but also indirectly reduce the efficiency and accuracy of image-processing methods that operate on low-illumination inputs. Enhancement of nighttime low-illumination images has therefore long been an important topic in the field of image processing.
At present, for the multi-aspect degradation of the night low-illumination image, many enhancement methods have appeared, such as an image enhancement method based on wavelet transform, an image enhancement method based on artificial neural network, an adaptive histogram equalization algorithm, and an enhancement method based on Retinex.
The wavelet-transform-based image enhancement method must wavelet-decompose the image, estimate illumination in the low-frequency band, enhance detail and remove noise in the high-frequency bands, and finally reconstruct the image. Although the results are good, the wavelet decomposition is computationally expensive.
The artificial-neural-network-based image enhancement method first requires designing a training model, then supplying large numbers of night images paired with corresponding enhanced images (or daytime images taken under sufficient illumination) for training. Night images similar to the training set are generally enhanced well, but if a night image differs too much from the training set the result is unsatisfactory, and the pre-training of the model is time-consuming.
Unlike traditional histogram equalization, the adaptive histogram equalization algorithm redistributes brightness using the statistics of local image histograms. The processed image stands out in contrast and detail, but the method weakens the tonal layering of the image and easily causes over-enhancement.
The Retinex theory is a scientific hypothesis proposed by Edwin Land on the basis of a large number of experiments; it imitates the way objects are imaged by the human eye. Over many years of development, a series of improved methods have appeared in succession, such as single-scale Retinex (SSR), multi-scale Retinex (MSR), multi-scale Retinex with color restoration (MSRCR), and Retinex with an improved bilateral filtering kernel.
Small-scale SSR recovers good image detail but poor overall clarity, while large-scale SSR gives a good overall effect but insufficient detail. MSR combines the advantages of large, medium, and small scales, but colors distort easily and halos at abrupt illumination changes are obvious. MSRCR, although better than SSR and MSR in color preservation and halo handling, still performs less than ideally and is prone to over-enhancement. Retinex improved with a bilateral filtering kernel obtains better edge detail, but its time complexity is too high and noise amplification is obvious.
The above analysis shows that images enhanced by existing algorithms readily exhibit halos, blurred details, and color distortion, and the algorithms that do perform well are too complex. In view of these problems, a low-illumination image color-preservation enhancement method is needed that has low time cost, effectively enhances brightness and detail, and introduces no color distortion.
Disclosure of Invention
In order to achieve this purpose, the invention provides a low-illumination image color-preservation enhancement method based on improved Retinex that fuses a fast guided filter, single-scale Retinex, and multi-channel color-preserving enhancement; the enhanced image has rich color and clearly improved brightness and detail, solving the prior-art problems of high time cost and susceptibility to halos, blurred details, and color distortion.
The technical scheme adopted by the invention is that a low-illumination image color retention enhancement method based on improved Retinex specifically comprises the following steps:
step 1, converting a filtering input image I with low illumination from an RGB color space to a YUV color space, wherein the filtering input image I adopts a gray level image;
step 2, obtaining the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image by using the improved Retinex method;
Step 3, calculating the UV component of the enhanced image;
and step 4, converting the enhanced image from the YUV color space to the RGB color space and applying feedback-based color-component enhancement to out-of-range pixels, obtaining the enhanced low-illumination color image.
The invention is further characterized in that, in step 2, obtaining the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image with the improved Retinex method specifically comprises the following steps:
step A, using the filter input image I as the guide image p (i.e. taking the filter input I_i and the guide p_i to be equal), and down-sampling the filter input image I_i and the filter radius r:
I′_d = f_downsample(I_i, s)  (9)
r′ = f_downsample(r, s)  (10)
where I′_d is the input image after down-sampling, d indexes a down-sampled pixel, r′ is the down-sampled filter radius, and s is the sampling multiple (taken as 4 or 8). The down-sampling-optimized linear coefficients a′_k and b′_k are calculated as:
a′_k = (m(I′_d·I′_d, r′) − m(I′_d, r′)·m(I′_d, r′)) / (m(I′_d·I′_d, r′) − m²(I′_d, r′) + ε)  (11)
b′_k = m(I′_d, r′) − a′_k·m(I′_d, r′)  (12)
where m(I′_d, r′) represents the mean of all down-sampled pixels d within the window of radius r′ centered on pixel k, ε is the regularization parameter, ω_k(r′) is the window of radius r′ centered on pixel k, and |ω| is the number of pixels in the window ω_k(r′);
Because down-sampling reduces the number of image pixels, to ensure that every pixel of the filter input image I_i has corresponding mean parameters m(a′_k, r′) and m(b′_k, r′), one bilinear-interpolation up-sampling recovery is applied to the mean parameters m(a′_k, r′) and m(b′_k, r′):
m_up(a_i) = f_upsample(m(a′_k, r′), s)  (13)
m_up(b_i) = f_upsample(m(b′_k, r′), s)  (14)
where s is the sampling multiple, i indexes an up-sampled image pixel, and m_up(a_i) and m_up(b_i) are the up-sampling-recovered linear coefficients, the two key parameters linking the original image and the filtered image;
the filtered output image q_i, i.e. the enhanced image, is obtained according to equation (15):
q_i = m_up(a_i)·I_i + m_up(b_i)  (15)
step B, substituting the Y component I_Y of the filter input image I into equation (15) yields the Y component L_{Y,FGF,i} of the background illumination L of the enhanced image:
L_{Y,FGF,i} = m_up(a_i)·I_{Y,i} + m_up(b_i)  (16)
where m_up(a_i) and m_up(b_i) are the up-sampling-recovered linear coefficients;
step C, solving for the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image with the classical Retinex relation:
R_{Y,FGF,i} = log(I_{Y,i}) − log(L_{Y,FGF,i})  (17).
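As a concrete illustration of steps A-C, the following minimal Python sketch estimates the background illumination with a down-sampled guided filter whose guide equals the input, then applies equation (17). SciPy is assumed for the box mean and resampling, and all parameter defaults are illustrative rather than taken from the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def fast_guided_filter(I, r=16, eps=0.01, s=4):
    """Step A: guided filter with guide = input (I_i = p_i), run on an
    s-times down-sampled image (eqs. (9)-(12)) and recovered by bilinear
    up-sampling (eqs. (13)-(15))."""
    Id = zoom(I, 1.0 / s, order=0)          # nearest-neighbour down-sampling, eq. (9)
    rd = max(int(round(r / s)), 1)          # down-sampled radius r', eq. (10)
    size = 2 * rd + 1                       # box-filter window
    m_I = uniform_filter(Id, size)          # m(I'_d, r')
    var = uniform_filter(Id * Id, size) - m_I * m_I
    a = var / (var + eps)                   # a'_k, eq. (11) with guide = input
    b = m_I - a * m_I                       # b'_k, eq. (12)
    m_a = uniform_filter(a, size)           # m(a'_k, r')
    m_b = uniform_filter(b, size)           # m(b'_k, r')
    fy = I.shape[0] / m_a.shape[0]          # bilinear up-sampling recovery,
    fx = I.shape[1] / m_a.shape[1]          # eqs. (13)-(14)
    m_a_up = zoom(m_a, (fy, fx), order=1)
    m_b_up = zoom(m_b, (fy, fx), order=1)
    return m_a_up * I + m_b_up              # filtered output q_i, eq. (15)

def reflection_y(I_y, r=16, eps=0.01, s=4):
    """Steps B-C: background illumination of the Y channel, then eq. (17).
    The small clamp before the logarithm is an implementation detail."""
    I_y = np.clip(np.asarray(I_y, dtype=np.float64), 1e-3, None)
    L_y = np.clip(fast_guided_filter(I_y, r, eps, s), 1e-3, None)
    return np.log(I_y) - np.log(L_y)        # R_{Y,FGF,i}, eq. (17)
```

The down-sampled box means keep the cost roughly 1/s² of the full-resolution guided filter, which is the point of the sampling optimization.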
Further, in step 3, calculating the UV component of the enhanced image specifically includes:
in the YUV color space, the enhancement ratio of the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image is calculated:
prop_i = R_{Y,FGF,i} / I_{Y,i}  (18)
where prop_i is the enhancement ratio of the Y component at pixel i, and I_{Y,i} is the Y component of the filter input image I at pixel i;
multiplying the UV components of the filter input image I by a new enhancement ratio prop′ yields the UV components of the enhanced image, where prop′ is computed from prop_i by equation (19), in which the mean of the Y component of the filter input image I appears.
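A minimal sketch of equation (18) follows. Since the text does not reproduce the formula (19) for prop′, only the per-pixel ratio prop_i is computed here; the small clamp added to avoid division by zero in dark pixels is an assumption of this sketch, not part of the patent:

```python
import numpy as np

def enhancement_ratio(R_y, I_y):
    # prop_i = R_{Y,FGF,i} / I_{Y,i}, equation (18). The clamp avoids
    # division by zero in completely dark pixels (a detail not in the text).
    I_y = np.clip(np.asarray(I_y, dtype=np.float64), 1e-3, None)
    return np.asarray(R_y, dtype=np.float64) / I_y
```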
Further, in step 4, the enhanced image is converted from the YUV color space to the RGB color space, and feedback-based color-component enhancement is applied to the out-of-range pixels, specifically:
the enhanced image is color space converted from YUV to RGB using color space conversion formula (20):
where i is a pixel; I_{Y,i}, I_{U,i}, I_{V,i} are the YUV components of the filter input image I; and R′_i, G′_i, B′_i are the RGB components after brightness and color enhancement;
Pixels for which any of the three RGB components has a value outside the normal range [0,255] are called out-of-range pixels; the new enhancement ratio prop″ of the UV components corresponding to out-of-range pixels is:
prop″=0.3-0.03prop′ (21)
Substituting prop″ for prop′ in equation (20) yields the feedback-enhanced low-illumination color map.
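The feedback loop of equations (20)-(21) can be sketched as follows. The patent's exact conversion matrix (equation (20)) is not reproduced in the text, so the standard analog-YUV coefficients are assumed, and values are normalized to [0, 1] rather than [0, 255]; both are assumptions made only for illustration:

```python
import numpy as np

def yuv_to_rgb(Y, U, V):
    # Standard analog-YUV -> RGB transform; the patent's exact matrix in
    # equation (20) is not reproduced in the text, so this is an assumption.
    R = Y + 1.140 * V
    G = Y - 0.395 * U - 0.581 * V
    B = Y + 2.032 * U
    return np.stack([R, G, B], axis=-1)

def feedback_color_enhance(Y_enh, U, V, prop_new):
    """Feedback step of equation (21): wherever any RGB channel leaves the
    normal range (here [0, 1]), the chroma gain is replaced by
    prop'' = 0.3 - 0.03 * prop' and the pixel is reconverted."""
    rgb = yuv_to_rgb(Y_enh, U * prop_new, V * prop_new)
    crossed = np.any((rgb < 0.0) | (rgb > 1.0), axis=-1)   # out-of-range pixels
    prop_fb = 0.3 - 0.03 * prop_new                        # eq. (21)
    rgb_fb = yuv_to_rgb(Y_enh, U * prop_fb, V * prop_fb)
    rgb[crossed] = rgb_fb[crossed]
    return np.clip(rgb, 0.0, 1.0)   # final clamp (an implementation choice)
```

Only the pixels that actually cross the range boundary get the reduced chroma gain, so well-behaved pixels keep the full color enhancement.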
The beneficial effect of the method is that a low-illumination image color-preservation enhancement method based on improved Retinex is proposed on the basis of traditional Retinex. Because the human eye is sensitive to edge detail, fast guided filtering is adopted when estimating the background illumination, which overcomes the halo that traditional methods readily produce at abrupt illumination changes. In addition, to address the color distortion easily caused by color-image enhancement, the invention computes the enhanced UV components in the YUV color space after the brightness has been enhanced by the improved Retinex method, then performs the YUV-to-RGB color space conversion and the feedback-based enhancement of the UV color components, enhancing color while preventing over-enhancement. Multiple groups of comparison tests show that the method is robust: compared with other low-illumination enhancement methods, the enhanced low-illumination color image has rich colors, clearly enhanced brightness and detail, no halo, whitening, or distortion, and low algorithmic complexity, making it suitable for real-time nighttime video enhancement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2a shows the original luminance component.
Fig. 2b is the result of grayscale-map-guided filtering.
Fig. 2c is the result of color-map-guided filtering.
Fig. 2d is the result of Gaussian filtering.
Fig. 3a is the indoor low-illumination original image.
Fig. 3b is the indoor low-illumination original image enhanced without color enhancement.
Fig. 3c is the indoor low-illumination original image after color enhancement by the color space conversion formula.
Fig. 3d is the indoor low-illumination original image after feedback-based color enhancement.
Fig. 3e is the indoor low-illumination original image processed by the MSRCR algorithm.
Fig. 3f is the indoor low-illumination original image processed by the Retinex algorithm with a color bilateral filter.
Fig. 3g is the indoor low-illumination original image processed by the color enhancement method of the present invention.
Fig. 4a is the original night image of a residential area.
Fig. 4b is the residential-area night image processed by the MSRCR algorithm.
Fig. 4c is the residential-area night image processed by the Retinex algorithm with a color bilateral filter.
Fig. 4d is the residential-area night image processed by the color enhancement method of the present invention.
Fig. 5a is the original night image of a campus.
Fig. 5b is the campus night image processed by the MSRCR algorithm.
Fig. 5c is the campus night image processed by the Retinex algorithm with a color bilateral filter.
Fig. 5d is the campus night image processed by the color enhancement method of the present invention.
Detailed Description
The invention is explained in further detail below with reference to the figures and the embodiments. It should be understood that the examples are only for illustrating the present invention and are not intended to limit the scope of the present invention. Furthermore, it should be understood that various changes or modifications can be made by those skilled in the art after reading the description of the present invention, and such equivalents also fall within the scope of the protection defined by the present application.
The design idea of the invention is as follows. Low-illumination color images generally suffer from low brightness, low contrast, blurred details, and heavy salt-and-pepper noise, which greatly hinder later image recognition and information extraction. An improvement of traditional Retinex is therefore proposed for the degradation of nighttime images. The method is: after converting the original RGB low-illumination image to the YUV color space (Y representing luminance, UV representing chrominance), estimate the background illumination of the Y component with sampling-accelerated guided filtering, compute the reflection component through the classical Retinex formula to obtain the before/after luminance enhancement ratio, and then perform the YUV-to-RGB color space conversion and the feedback-based enhancement of the UV color components. Compared with other low-illumination enhancement methods, the brightness of the enhanced image is clearly improved, color distortion and halos are eliminated, details are clear, and the computational complexity is low.
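The color-space conversions that bracket the pipeline can be sketched as follows; the exact matrix used by the patent (equation (20)) is not reproduced in the text, so the standard analog-YUV coefficients are assumed here for illustration:

```python
import numpy as np

def rgb_to_yuv(rgb):
    # Standard analog-YUV forward transform (an assumption; the patent's
    # own matrix is not reproduced in the text).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance
    u = -0.147 * r - 0.289 * g + 0.436 * b         # chrominance U = 0.492 (B - Y)
    v = 0.615 * r - 0.515 * g - 0.100 * b          # chrominance V = 0.877 (R - Y)
    return np.stack([y, u, v], axis=-1)

def yuv_to_rgb(yuv):
    # Inverse of the transform above.
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return np.stack([r, g, b], axis=-1)
```

A round trip through the two functions reproduces the input up to the rounding of the published coefficients.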
The invention relates to a low-illumination image color retention enhancement method based on improved Retinex, which specifically comprises the following steps of:
step 1, converting a filtering input image I with low illumination from an RGB color space to a YUV color space, wherein the filtering input image I adopts a gray level image;
step 2, obtaining the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image by using the improved Retinex method;
Step 3, calculating the UV component of the enhanced image;
and step 4, converting the enhanced image from the YUV color space to the RGB color space and applying feedback-based color-component enhancement to out-of-range pixels, obtaining the enhanced low-illumination color image.
1 classical Retinex theory
The basic idea of the Retinex theory is that the image acquired by the human eye is the result of interaction between the background illumination and the reflection information of objects; the classical Retinex relation is shown in equation (1):
R(x,y) = log(I(x,y)) − log(L(x,y))  (1)
where R(x,y) is the reflection component, independent of illumination and carrying the image detail information; I(x,y) is the image observed by the human eye; and L(x,y) is the ambient background illumination.
It can be seen that the key of the Retinex method is how to estimate the background illumination accurately. In the classical Retinex theory, the background illumination is estimated by passing the original image through a Gaussian filter; in practice, the filtering is carried out with a Gaussian kernel of fixed window radius. The closer a pixel is to the center of the Gaussian kernel window, the larger the weight assigned to its gray value.
The mathematical expressions for estimating the background illumination with this kernel are given in equations (2)-(4):
L(x,y) = Σ_{(i,j)∈r(x,y)} k(i,j)·I(i,j)  (2)
k(i,j) = h(i,j) / Σ_{(i,j)∈r(x,y)} h(i,j)  (3)
h(i,j) = exp(−((i−r)² + (j−r)²) / (2σ²))  (4)
where h(i,j) is the unnormalized Gaussian weight within the window, k(i,j) is the normalized weight of the corresponding pixel, r is the window radius, r(x,y) denotes all pixels in the window of radius r centered on pixel (x,y), and σ is the standard deviation of the Gaussian: the larger σ is, the smoother the image.
For a gaussian template with a certain radius, when the standard deviation value is large (for example, when r =5, σ > 10), the weight values allocated to each pixel in the window are basically consistent, and the final filtering effect is equivalent to mean blurring with the same radius. It should be noted that the larger the template, the larger the standard deviation is required to get the final blurring effect close to the mean filter.
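This flattening of the Gaussian weights toward the mean filter can be checked numerically; the sketch below builds the normalized kernel of equations (3)-(4) on a (2r+1)×(2r+1) window (the index convention is an assumption of this sketch):

```python
import numpy as np

def gaussian_kernel(r, sigma):
    # Unnormalized weights h(i, j) of equation (4) on a (2r+1) x (2r+1)
    # window, then normalized as in equation (3).
    idx = np.arange(2 * r + 1)
    h = np.exp(-(((idx[:, None] - r) ** 2 + (idx[None, :] - r) ** 2))
               / (2 * sigma ** 2))
    return h / h.sum()

# For r = 5 and a large sigma, the weights flatten toward the mean filter:
k_large = gaussian_kernel(5, 50.0)
mean_weight = 1.0 / k_large.size          # uniform weight of an 11 x 11 mean filter
spread = k_large.max() - k_large.min()    # shrinks toward 0 as sigma grows
```

With σ = 50 the largest and smallest weights differ by less than 10⁻³, so the filter is effectively a mean blur of the same radius.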
The discussion above concerns SSR. MSR is equivalent to a weighted sum of several SSRs, with the mathematical expression shown in equation (5):
R_MSR(x,y) = Σ_{j=1}^{N} ω_j·[log(I(x,y)) − log(L_j(x,y))]  (5)
where N is the number of scales (filter radii) and ω_j is the weighting coefficient of scale j. In general, N = 3 suffices for a good enhancement effect, corresponding to three scales: large, medium, and small. The large scale emphasizes overall clarity and brightness enhancement, and the small scale emphasizes local details and contours. In practice the same weighting coefficient ω_j = 1/3 is often used, so that the final enhanced image combines the advantages of the three scales.
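A minimal sketch of equation (5), using SciPy's Gaussian filter for the per-scale background illumination L_j; the scale values below are common illustrative choices, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(I, sigmas=(15, 80, 250), weights=None):
    """Multi-scale Retinex per equation (5): a weighted sum of single-scale
    Retinex outputs at N = len(sigmas) scales."""
    I = np.clip(np.asarray(I, dtype=np.float64), 1e-3, None)
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)   # equal weights w_j = 1/N
    out = np.zeros_like(I)
    for w, sigma in zip(weights, sigmas):
        # Background illumination L_j at this scale, clamped before the log.
        L = np.clip(gaussian_filter(I, sigma), 1e-3, None)
        out += w * (np.log(I) - np.log(L))            # SSR term at scale sigma
    return out
```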
2 improved single-scale Retinex method
SSR, MSR, and MSRCR all tend to produce halos at abrupt illumination changes, and the enhanced image exhibits a degree of color distortion.
2.1 fast optimization of guided filters
Traditional Retinex uses a Gaussian kernel, which has no edge-preserving capability, so the enhanced image readily exhibits halos. The guided filter is an excellent edge-preserving operator: compared with the bilateral filter, which has similar edge-preserving characteristics, it preserves edges better and is considerably faster.
The guided filter (GF) computes the filter output from the information in a guide image. The local linear model between the guide p_i and the filtered output q_i is:
q_i = m(a_i, r)·p_i + m(b_i, r)  (6)
where i denotes a pixel, r is the window radius of the guided filtering (i.e. the filter radius), m(a_i, r) is the mean of all linear coefficients a_k in the window ω_i(r), m(b_i, r) is the mean of all linear coefficients b_k in ω_i(r), ω_i(r) is the window of radius r centered on pixel i, and |ω| is the number of pixels in ω_i(r);
The linear coefficient a_k is constructed as in equation (7) and b_k as in equation (8), where j ∈ ω_k(r):
a_k = (m(I_j·p_j, r) − m(I_j, r)·m(p_j, r)) / (m(p_j², r) − m²(p_j, r) + ε)  (7)
b_k = m(I_j, r) − a_k·m(p_j, r)  (8)
where m(I_j, r) and m(p_j, r) denote the means of all pixels j within the window of radius r centered on pixel k in the filter input image I and the guide image p, respectively; ε is the regularization parameter; and m(p_j², r) − m²(p_j, r) denotes the variance of the guide image p within the window;
As equations (6), (7), and (8) show, the main computational load of the guided filter is the window-mean function m, and this averaging is implemented with a box filter. The filtered output q_i in equation (6) depends mainly on the guide image p, but the main computation is concentrated in m(a_i, r) and m(b_i, r); these two quantities do not require a full-resolution image, so sampling can be used to optimize efficiency.
When I = p, in high-variance regions, i.e. where the variance of the guide image p within the window ω_i(r) is much larger than the regularization parameter ε, a_k ≈ 1 and b_k ≈ 0, meaning the pixels in the region are output as they are, without processing; in smooth regions, where the variance is much smaller than ε, a_k ≈ 0 and b_k ≈ m(I_j, r), meaning the region's pixels are mean-smoothed.
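The two limiting cases can be verified with a toy computation (the window contents below are chosen arbitrarily for illustration):

```python
import numpy as np

def a_coefficient(window, eps):
    # a_k from equation (7) with I = p reduces to var / (var + eps).
    var = window.var()
    return var / (var + eps)

eps = 0.01
edge_window = np.array([0.0, 0.0, 1.0, 1.0] * 4)   # step edge: variance 0.25 >> eps
flat_window = np.full(16, 0.5)                     # smooth region: variance 0 << eps
a_edge = a_coefficient(edge_window, eps)           # close to 1: edge passes through
a_flat = a_coefficient(flat_window, eps)           # 0: region is mean-smoothed
```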
From the above analysis, for the guided filter to act as an edge-preserving filter, the filter input image I can itself be used as the guide image p. To improve computational efficiency, nearest-neighbor down-sampling is applied to the filter input image I_i and the filter radius r; see equations (9) and (10):
I′_d = f_downsample(I_i, s)  (9)
r′ = f_downsample(r, s)  (10)
where I′_d is the down-sampled input image, d indexes a down-sampled pixel, r′ is the down-sampled radius, and s is the sampling multiple. The down-sampling-optimized linear coefficients a′_k and b′_k are given by equations (11) and (12):
a′_k = (m(I′_d·I′_d, r′) − m(I′_d, r′)·m(I′_d, r′)) / (m(I′_d·I′_d, r′) − m²(I′_d, r′) + ε)  (11)
b′_k = m(I′_d, r′) − a′_k·m(I′_d, r′)  (12)
where d ∈ ω_k(r′). Because down-sampling reduces the number of image pixels, to ensure that every pixel of the original image has corresponding mean parameters m(a′_k, r′) and m(b′_k, r′), one bilinear-interpolation up-sampling recovery is applied to the mean parameters m(a′_k, r′) and m(b′_k, r′):
m_up(a_i) = f_upsample(m(a′_k, r′), s)  (13)
m_up(b_i) = f_upsample(m(b′_k, r′), s)  (14)
where s is the sampling multiple, i indexes an up-sampled image pixel, and m_up(a_i) and m_up(b_i) are the up-sampling-recovered linear coefficients; the correspondence between up-sampled pixel i and down-sampled pixel d is determined by the sampling multiple s. Substituting the filter input image I_i into equation (15) gives the filtered output image q_i, i.e. the enhanced image:
q_i = m_up(a_i)·I_i + m_up(b_i)  (15)
In practical applications the sampling multiple s is usually 4 or 8, so that the filtered image shows no visible degradation while at least a 2-fold speed-up is guaranteed.
2.2 Retinex in combination with fast steering Filter
Before Retinex enhancement, color space conversion is performed first: the low-illumination filter input image I is converted from the RGB color space to the YUV color space, and the background illumination of the Y component I_Y of the filter input image I is estimated with the fast guided filter (the subscript FGF in the formulas denotes the result of this method). Substituting I_Y into equation (15) yields the Y component L_{Y,FGF,i} of the background illumination L of the enhanced image:
L_{Y,FGF,i} = m_up(a_i)·I_{Y,i} + m_up(b_i)  (16)
After the background illumination map with rich edge detail has been estimated, the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image is obtained from the classical Retinex relation of equation (1), giving equation (17); this component is independent of illumination and contains rich detail information:
R_{Y,FGF,i} = log(I_{Y,i}) − log(L_{Y,FGF,i})  (17)
Note that when estimating the background illumination with the fast guided filter there are two choices of guide image: color-map guidance and grayscale-map guidance. Fig. 2a is the original image, and Figs. 2b-2d compare the effect of estimating the Retinex background illumination with grayscale-map guidance, color-map guidance, and a Gaussian kernel, respectively, with parameters r = 4, ε = 0.005, s = 4, and σ = 20 (Gaussian kernel standard deviation). Under the same window radius r, regularization parameter ε, and sampling multiple s, the color-map-guided filtering of Fig. 2c is best: fine image details are blurred while large details are maintained. The grayscale-map-guided filtering of Fig. 2b is slightly weaker than Fig. 2c, the large details of the street lamps not being blurred; the Gaussian filtering of Fig. 2d blurs the image indiscriminately, performs worst, and is not conducive to accurately estimating the background illumination. Color-map guidance preserves edges better than grayscale-map guidance at the roof, while the edge-preserving behavior elsewhere is essentially the same: in the original color image the sky is blue and the house is gray, and color-map guidance uses the means and variances of all three RGB channels, whereas grayscale-map guidance uses only those of the luminance component, i.e. clearly less information.
It can be seen that the background illumination map obtained with color-map guidance preserves edges best, grayscale-map guidance comes second, and the classical Gaussian kernel blurs the image indiscriminately, which is the root cause of the strong halos that appear at abrupt illumination changes after classical Retinex enhancement.
The running times for estimating the background illumination in the different ways are shown in table 1. Both grayscale-map guidance and the Gaussian kernel are about 6.6 times faster than color-map guidance, and at the same processing time grayscale-map guidance estimates the background illumination better than the original Gaussian kernel. Therefore, weighing running time against halo-free enhancement quality, we choose grayscale-map guidance in formula (17), which is well suited to the improved halo-free Retinex enhancement.
TABLE 1 Run time comparison (unit: s)

 | Grayscale-map guided filter | Color-map guided filter | Original Gaussian filter |
---|---|---|---|
Figs. 2b-2d | 0.075 | 0.497 | 0.077 |
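To make the comparison concrete, the self-guided (grayscale-map) filtering discussed above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's Matlab implementation: `box_mean` (a cumulative-sum box filter with reflect padding) is our choice of mean filter, and the defaults r = 4, ε = 0.005 follow the settings quoted for fig. 2.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via 2-D cumulative sums (reflect-padded)."""
    pad = np.pad(img, r, mode="reflect")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # leading zero row/col for clean indexing
    n = 2 * r + 1
    h, w = img.shape
    s = (c[n:n + h, n:n + w] - c[:h, n:n + w]
         - c[n:n + h, :w] + c[:h, :w])
    return s / (n * n)

def guided_filter_gray(I, r=4, eps=0.005):
    """Self-guided (grayscale-map) edge-preserving smoothing: q = a*I + b."""
    m_I = box_mean(I, r)
    var_I = box_mean(I * I, r) - m_I * m_I   # local variance of the guidance
    a = var_I / (var_I + eps)                # near 1 at edges, near 0 in flat areas
    b = m_I - a * m_I
    return box_mean(a, r) * I + box_mean(b, r)
```

Flat regions give a ≈ 0, so the output falls back to the local mean; high-variance (edge) regions give a ≈ 1, so the edge passes through, which is exactly the selectivity the Gaussian kernel lacks.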
2.3 Conversion of the enhanced image from YUV to RGB color space, and color enhancement
Traditional Retinex methods (such as SSR, MSR and MSRCR) apply Retinex enhancement to the three RGB channels independently, ignoring the correlation between them, which ultimately causes serious color distortion.
Not only must the brightness of the image be enhanced; its color must also be kept undistorted. Retinex enhancement can be applied directly to the Y (luminance) component in YUV color space. However, if the UV components of the original image are reused unchanged, the RGB image obtained after conversion looks washed out, because only the brightness has been enhanced.
The indoor low-illumination original is shown in fig. 3a. Enhancing only the brightness with the fast guided filter of the present invention, without enhancing color and saturation, yields fig. 3b. As fig. 3b shows, although there is no color distortion, the overall color impression is weak, so the color must be enhanced as well.
The color enhancement is specifically performed according to the following steps:
1) Calculating a brightness enhancement ratio
After enhancement with fast guided filtering Retinex, the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image is computed in YUV color space and used as the reference for color enhancement. The enhancement ratio of the Y component in YUV color space is given by equation (18):

prop_i = R_{Y,FGF,i} / I_{Y,i}   (18)

where prop_i is the enhancement ratio at pixel i, R_{Y,FGF,i} is the Y component of the reflection component R of the enhanced image, and I_{Y,i} is the Y component of the filtering input image I at pixel i.
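Formula (18) is a per-pixel division. A minimal sketch follows; the zero guard `eps` is our addition, since fully dark pixels would otherwise divide by zero:

```python
import numpy as np

def luminance_gain(R_Y, I_Y, eps=1e-6):
    """Eq (18): per-pixel enhancement ratio of the Y component.
    eps guards against division by zero in fully dark pixels (our addition)."""
    return R_Y / np.maximum(I_Y, eps)
```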
2) Enhancing the UV components of the filtering input image I
The UV (color and saturation) components are multiplied by a ratio. Experiments show that, to enhance the color without excessively amplifying noise, the following UV enhancement ratio prop′ gives a good overall effect:
where Ī_Y is the mean of the Y component of the filtering input image I, i.e. the average luminance of the input image. The enhanced image is then converted from YUV to RGB using color space conversion formula (20):
where i is a pixel, I_{Y,i}, I_{U,i}, I_{V,i} are the YUV components of the filtering input image I, and R′_i, G′_i, B′_i form the RGB image after brightness and color enhancement; the enhanced result is shown in fig. 3c.
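A sketch of the conversion step, assuming the common analog-YUV constants (the patent's formula (20) is rendered as an image and its exact coefficients are not reproduced in the text); `prop_uv` stands for the UV enhancement ratio prop′ of formula (19):

```python
import numpy as np

def yuv_enhanced_to_rgb(Y_enh, U, V, prop_uv):
    """Convert an enhanced Y plus ratio-scaled UV back to RGB.
    The conversion constants below are the common analog-YUV ones,
    an assumption: the patent's formula (20) may use different values."""
    U2, V2 = prop_uv * U, prop_uv * V        # scale color/saturation by prop'
    R = Y_enh + 1.140 * V2
    G = Y_enh - 0.395 * U2 - 0.581 * V2
    B = Y_enh + 2.032 * U2
    return np.stack([R, G, B], axis=-1)
```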
it has been found that artwork highlights or color vibrancy areas are overly enhanced, resulting in blooming. It is necessary to limit it.
3) Limiting enhanced out-of-range color components
In YUV color space the UV components can also take negative values, so their magnitude is hard to bound directly. A feedback scheme is therefore adopted: for every pixel, the following steps are performed:
a: first compute the enhanced YUV component values;
b: then convert to RGB color space with formula (20);
c: for pixels in which any of the three RGB components falls outside the normal range (the interval [0, 255]), experiments show that, while still guaranteeing brightness and color enhancement, the new enhancement ratio prop″ of the UV components for these border-crossing pixels should be:
prop″ = 0.3 - 0.03 prop′   (21)
Substituting prop″ for prop′ in formula (20) for these pixels yields the feedback-corrected enhanced low-illumination color map, see fig. 3d.
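The feedback steps a-c together with formula (21) can be sketched as follows; the YUV-to-RGB coefficients inside `to_rgb` are the same assumed analog-YUV constants, not necessarily the patent's exact formula (20):

```python
import numpy as np

def feedback_correct(Y_enh, U, V, prop_uv):
    """Steps a-c with eq (21): re-scale UV only at border-crossing pixels.
    Assumes pixel values in [0, 255] and assumed analog-YUV constants."""
    def to_rgb(u, v):
        return np.stack([Y_enh + 1.140 * v,
                         Y_enh - 0.395 * u - 0.581 * v,
                         Y_enh + 2.032 * u], axis=-1)
    rgb = to_rgb(prop_uv * U, prop_uv * V)                    # steps a-b
    out_of_range = np.any((rgb < 0) | (rgb > 255), axis=-1)   # step c: detect overflow
    prop2 = 0.3 - 0.03 * prop_uv                              # eq (21)
    scale = np.where(out_of_range, prop2, prop_uv)
    return to_rgb(scale * U, scale * V)
```

Note that formula (21) reduces the UV gain rather than hard-clipping it, which is why vivid details such as the wires survive in fig. 3d.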
As figs. 3c and 3d show, feedback-based enhancement of the UV components controls color overflow better than unlimited UV enhancement, and originally vivid details such as the wires are clearer in the feedback result. The invention therefore adopts formula (21) for color enhancement.
3 Experimental results and analysis
To verify the effect of the invention, experiments were carried out on the Matlab platform. A group of high-dynamic-range night color images was selected and, after enhancement by the different methods, evaluated both subjectively and objectively. The methods compared with the proposed one are: the classical MSRCR with its color-restoration function, and the Retinex color enhancement method based on a color bilateral filter proposed by Xiaoquan. The parameter settings of the various methods are as follows:
1) The invention (sampling-optimized fast guided filter parameters): r = 4, s = 4, ε = 0.005;
2) MSRCR algorithm: the three scale radii r are 5, 40 and 120 respectively, and the contrast control factor is 2;
3) Color bilateral filter Retinex algorithm: r = 22, σ1 = 15, σ2 = 0.3, σ3 = 0.04.
To verify the robustness of the color enhancement method, the experiments were run on 3 different groups of images, each with resolution 640 × 480.
The community night original is shown in fig. 4a; it contains high-brightness street lamps while the rest of the scene is dark, making it suitable for observing the brightness enhancement and any tendency to produce halos. Fig. 4b, obtained with the MSRCR enhancement algorithm, is the brightest, but the brightness gain is uncontrolled: strong halos appear at abrupt illumination points such as the street lamps and lit windows, and the sky color is distorted. Fig. 4c, obtained with the Retinex color enhancement algorithm based on the color bilateral filter, shows no color distortion because the three RGB channels are enhanced with a common ratio, but its brightness enhancement is the weakest, halos remain around the street lamps, house-edge details are lost, and noise is severe. Fig. 4d, obtained with the present invention, shows no halos at the street lamps, a low noise level, no color distortion in the enhanced image, and clear details in the branches and house edges.
The school night original, fig. 5a, is darker overall than fig. 4a and contains distant buildings. In fig. 5b, obtained with the MSRCR enhancement algorithm, the over-whitening is again obvious and severe halos appear at the abrupt illumination changes of the windows. In fig. 5c, obtained with the Retinex color enhancement algorithm based on the color bilateral filter, the teaching building shows more detail than with MSRCR, but the originally bright windows are degraded, indicating a degree of over-enhancement. Fig. 5d, obtained with the present invention, preserves color best below the teaching building, does not over-enhance the lights in the highlighted areas of the original, and renders the floor details of the distant high-rise building clearly, the best of the three methods, illustrating the effectiveness and reliability of the fast guided filtering Retinex enhancement method.
The indoor low-illumination original, fig. 3a, has extremely low illumination and is mainly used to verify the enhancement effect of the various methods under such conditions. The MSRCR result is shown in fig. 3e and the color-bilateral-filter Retinex result in fig. 3f; fig. 3g, obtained with the present invention, excels in color preservation, brightness enhancement and denoising. A shortcoming is also found: the contrast is not as strong as with MSRCR, and bright, vividly colored areas of the original low-illumination image are prone to oversaturation after enhancement, which needs improvement.
The above compares the visual effects of the various methods subjectively. For the objective comparison, the processing times of the methods are compared first; the results are shown in table 2.
TABLE 2 Time comparison of the three nighttime enhancement methods (unit: s)

 | MSRCR algorithm | Color bilateral filter Retinex algorithm | Enhancement method of the invention |
---|---|---|---|
Fig. 4a | 0.61 | 55.43 | 0.14 |
Fig. 5a | 0.59 | 50.76 | 0.16 |
Fig. 3a | 0.99 | 57.86 | 0.25 |
As table 2 shows, the MSRCR algorithm takes somewhat longer than the proposed method because it adds a mean-and-variance based color recovery step to the original Retinex algorithm; the color bilateral filtering Retinex algorithm is by far the slowest, since replacing the original Gaussian kernel with a color bilateral filtering kernel makes the computation especially heavy; the present invention, using the sampling-accelerated guided filter, has the shortest processing time, showing that it is best in time complexity and can meet the real-time requirements of video image processing.
The enhanced image quality is evaluated quantitatively in terms of the luminance mean and Feature Similarity (FSIM). The luminance mean reflects the overall degree of brightness enhancement of the low-illumination image; FSIM is a full-reference image quality metric that measures the structural similarity of two images: the smaller the distortion, the larger the FSIM value (range [0, 1]). The FSIM values in table 3 are computed against the original image, which evaluates how well the enhanced image preserves structure independently of its brightness.
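The luminance mean used in table 3 is straightforward to compute. The sketch below assumes BT.601 luma weights, since the patent does not state which weighting it uses; FSIM is a published metric and is not re-implemented here:

```python
import numpy as np

def luminance_mean(rgb):
    """Mean luminance of an RGB image, used as the brightness score.
    BT.601 luma weights are an assumption; the patent gives no formula."""
    w = np.array([0.299, 0.587, 0.114])
    return float((rgb @ w).mean())
```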
TABLE 3 Objective performance comparison of the enhancement effects
As table 3 shows, among the three methods the MSRCR-enhanced images have the highest luminance mean, but this is mainly due to the whitening artifact, and their poor FSIM indicates serious distortion; the color bilateral filtering Retinex algorithm has the lowest mean, with insufficient overall brightness enhancement, and its results are also seriously distorted owing to noise and the shortcomings of the algorithm; the proposed enhancement method achieves good brightness enhancement together with the best FSIM, indicating the best image quality and least distortion, consistent with the subjective impression.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (4)
1. A low-illumination image color retention enhancement method based on improved Retinex is characterized by comprising the following steps:
Step 1, converting a low-illumination filtering input image I from the RGB color space to the YUV color space, wherein the filtering input image I adopts a grayscale image;
Step 2, obtaining the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image by using an improved Retinex method;
Step 3, calculating the UV component of the enhanced image;
Step 4, converting the enhanced image from the YUV color space to the RGB color space, and applying feedback-based enhancement of the color components to the border-crossing pixels to obtain the enhanced low-illumination color image.
2. The method for color-preserving enhancement of low-illumination images based on improved Retinex as claimed in claim 1, wherein in step 2 the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image is obtained by the improved Retinex method specifically as follows:
Step A, using the filtering input image I itself as the guidance image p, i.e. the filtering input I_i and the guidance p_i are equal; the filtering input image I_i and the filtering radius r are down-sampled:

I′_d = f_downsample(I_i, s)   (9)
r′ = f_downsample(r, s)   (10)
I′ d the method comprises the steps of obtaining an input image after down sampling, wherein d is an image pixel after down sampling, r' is a filtering radius after down sampling, s is a sampling multiple, and the sampling multiple s is 4 or 8; downsampled optimized Linear coefficient a' k And b' k The calculation formula of (c):
a′_k = [m(I′_d I′_d, r′) - m(I′_d, r′) m(I′_d, r′)] / [m(I′_d I′_d, r′) - m(I′_d, r′) m(I′_d, r′) + ε]   (11)
b′_k = m(I′_d, r′) - a′_k m(I′_d, r′)   (12)
where m(I′_d, r′) denotes the mean of all pixels d within the window ω_k(r′) of radius r′ centered on pixel k, ε is a regularization parameter, and |ω| is the number of pixels in the window ω_k(r′);
Because down-sampling reduces the number of image pixels, not every pixel of the filtering input image I_i has corresponding mean parameters m(a′_k, r′) and m(b′_k, r′); one bilinear-interpolation up-sampling recovery is therefore applied to the mean parameters m(a′_k, r′) and m(b′_k, r′):

ā_i = f_upsample(m(a′_k, r′), s)   (13)
b̄_i = f_upsample(m(b′_k, r′), s)   (14)

where s is the sampling multiple, i is an up-sampled image pixel, and ā_i and b̄_i are the up-sampled and restored linear coefficients, the two key parameters linking the original image and the filtered image;
The filtered output image q_i, i.e. the enhanced image, is obtained according to equation (15):

q_i = ā_i I_i + b̄_i   (15)
Step B, substituting the Y component I_Y of the filtering input image I into formula (15) to obtain the Y component L_{Y,FGF,i} of the background illumination L of the enhanced image:

L_{Y,FGF,i} = ā_i I_{Y,i} + b̄_i   (16)
Step C, obtaining the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image with the classical Retinex relation:

R_{Y,FGF,i} = log(I_{Y,i}) - log(L_{Y,FGF,i})   (17).
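Steps A-C of claim 2 can be sketched end to end. This is a hedged sketch under assumptions: plain decimation stands in for f_downsample of formulas (9)-(10), nearest-neighbour repetition stands in for the bilinear up-sampling of formulas (13)-(14), and small constants guard the logarithms of formula (17); none of these substitutions are specified by the patent.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via 2-D cumulative sums (reflect-padded)."""
    pad = np.pad(img, r, mode="reflect")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    n = 2 * r + 1
    h, w = img.shape
    s = (c[n:n + h, n:n + w] - c[:h, n:n + w]
         - c[n:n + h, :w] + c[:h, :w])
    return s / (n * n)

def fast_guided_retinex_Y(I_Y, r=4, eps=0.005, s=4):
    """Sketch of claim 2: down-sample, linear coefficients, up-sample the
    coefficient means, form the illumination, take log-domain reflectance."""
    h, w = I_Y.shape
    I_d = I_Y[::s, ::s]              # stand-in for f_downsample(I, s), eq (9)
    r_d = max(r // s, 1)             # r' = f_downsample(r, s), eq (10)
    m = box_mean(I_d, r_d)
    var = box_mean(I_d * I_d, r_d) - m * m
    a = var / (var + eps)            # self-guided linear coefficient
    b = m - a * m
    ma, mb = box_mean(a, r_d), box_mean(b, r_d)
    # nearest-neighbour stand-in for the bilinear up-sampling of eqs (13)-(14)
    a_up = np.repeat(np.repeat(ma, s, 0), s, 1)[:h, :w]
    b_up = np.repeat(np.repeat(mb, s, 0), s, 1)[:h, :w]
    L_Y = a_up * I_Y + b_up          # background illumination, eqs (15)-(16)
    return np.log(I_Y + 1e-6) - np.log(np.clip(L_Y, 1e-6, None))  # eq (17)
```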
3. The method as claimed in claim 1, wherein in step 3 the UV components of the enhanced image are calculated as follows:
In the YUV color space, the enhancement ratio of the Y component R_{Y,FGF,i} of the reflection component R of the enhanced image is calculated:

prop_i = R_{Y,FGF,i} / I_{Y,i}   (18)
where prop_i is the enhancement ratio of the Y component at pixel i, and I_{Y,i} is the Y component of the filtering input image I at pixel i;
Multiplying the UV components of the filtering input image I by the new enhancement ratio prop′ of formula (19) yields the UV components of the enhanced image, where Ī_Y in formula (19) is the mean of the Y component of the filtering input image I.
4. The method as claimed in claim 1, wherein in step 4, the enhanced image is converted from YUV color space to RGB color space, and the color components of the border-crossing pixels are enhanced in a feedback manner, specifically:
the enhanced image is color space converted from YUV to RGB using color space conversion formula (20):
where i is a pixel, I_{Y,i}, I_{U,i}, I_{V,i} are the YUV components of the filtering input image I, and R′_i, G′_i, B′_i form the RGB image after brightness and color enhancement;
pixels of the RGB image in which the value of any of the three components lies outside the normal region [0, 255] are called border-crossing pixels; the new enhancement ratio prop″ of the UV components for the border-crossing pixels is:
prop″ = 0.3 - 0.03 prop′   (21)
substituting prop″ for prop′ in formula (20) yields the feedback-corrected enhanced low-illumination color map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710944257.6A CN107527332B (en) | 2017-10-12 | 2017-10-12 | Low-illumination image color retention enhancement method based on improved Retinex |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107527332A true CN107527332A (en) | 2017-12-29 |
CN107527332B CN107527332B (en) | 2020-07-31 |
Family
ID=60684709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710944257.6A Active CN107527332B (en) | 2017-10-12 | 2017-10-12 | Low-illumination image color retention enhancement method based on improved Retinex |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107527332B (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108364270A (en) * | 2018-05-22 | 2018-08-03 | 北京理工大学 | Colour cast color of image restoring method and device |
CN109886906A (en) * | 2019-01-25 | 2019-06-14 | 武汉大学 | A kind of real-time dim light video enhancement method and system of details sensitivity |
CN109978789A (en) * | 2019-03-26 | 2019-07-05 | 电子科技大学 | A kind of image enchancing method based on Retinex algorithm and guiding filtering |
CN110009551A (en) * | 2019-04-09 | 2019-07-12 | 浙江大学 | A kind of real-time blood vessel Enhancement Method of CPUGPU collaboration processing |
CN110211070A (en) * | 2019-06-05 | 2019-09-06 | 电子科技大学 | A kind of low-luminance color image enchancing method based on local extremum |
CN110211080A (en) * | 2019-05-24 | 2019-09-06 | 南昌航空大学 | It is a kind of to dissect and functional medicine image interfusion method |
CN110232661A (en) * | 2019-05-03 | 2019-09-13 | 天津大学 | Low illumination colour-image reinforcing method based on Retinex and convolutional neural networks |
CN110298792A (en) * | 2018-03-23 | 2019-10-01 | 北京大学 | Low light image enhancing and denoising method, system and computer equipment |
CN110473152A (en) * | 2019-07-30 | 2019-11-19 | 南京理工大学 | Based on the image enchancing method for improving Retinex algorithm |
CN110570381A (en) * | 2019-09-17 | 2019-12-13 | 合肥工业大学 | semi-decoupling image decomposition dark light image enhancement method based on Gaussian total variation |
CN110675351A (en) * | 2019-09-30 | 2020-01-10 | 集美大学 | Marine image processing method based on global brightness adaptive equalization |
CN111612700A (en) * | 2019-02-26 | 2020-09-01 | 杭州海康威视数字技术股份有限公司 | Image enhancement method |
CN111918095A (en) * | 2020-08-05 | 2020-11-10 | 广州市百果园信息技术有限公司 | Dim light enhancement method and device, mobile terminal and storage medium |
CN111986120A (en) * | 2020-09-15 | 2020-11-24 | 天津师范大学 | Low-illumination image enhancement optimization method based on frame accumulation and multi-scale Retinex |
CN112132749A (en) * | 2020-09-24 | 2020-12-25 | 合肥学院 | Image processing method and device applying parameterized Thiele continuous fractional interpolation |
CN112184588A (en) * | 2020-09-29 | 2021-01-05 | 哈尔滨市科佳通用机电股份有限公司 | Image enhancement system and method for fault detection |
CN112288652A (en) * | 2020-10-30 | 2021-01-29 | 西安科技大学 | PSO optimization-based guide filtering-Retinex low-illumination image enhancement method |
CN112308803A (en) * | 2020-11-25 | 2021-02-02 | 哈尔滨工业大学 | Self-supervision low-illumination image enhancement and denoising method based on deep learning |
CN112712470A (en) * | 2019-10-25 | 2021-04-27 | 华为技术有限公司 | Image enhancement method and device |
CN113096033A (en) * | 2021-03-22 | 2021-07-09 | 北京工业大学 | Low-illumination image enhancement method based on Retinex model self-adaptive structure |
CN113256533A (en) * | 2021-06-15 | 2021-08-13 | 北方民族大学 | Self-adaptive low-illumination image enhancement method and system based on MSRCR |
WO2021218364A1 (en) * | 2020-04-27 | 2021-11-04 | 华为技术有限公司 | Image enhancement method and electronic device |
CN113947535A (en) * | 2020-07-17 | 2022-01-18 | 四川大学 | Low-illumination image enhancement method based on illumination component optimization |
CN114757854A (en) * | 2022-06-15 | 2022-07-15 | 深圳市安星数字系统有限公司 | Night vision image quality improving method, device and equipment based on multispectral analysis |
CN115830459A (en) * | 2023-02-14 | 2023-03-21 | 山东省国土空间生态修复中心(山东省地质灾害防治技术指导中心、山东省土地储备中心) | Method for detecting damage degree of mountain forest and grass life community based on neural network |
CN116012378A (en) * | 2023-03-24 | 2023-04-25 | 湖南东方钪业股份有限公司 | Quality detection method for alloy wire used for additive manufacturing |
CN116132652A (en) * | 2023-01-31 | 2023-05-16 | 格兰菲智能科技有限公司 | Text image processing method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101459782B1 (en) * | 2008-09-08 | 2014-11-10 | 현대자동차주식회사 | A system for enhancing a night time image for a vehicle camera |
CN104796682A (en) * | 2015-04-22 | 2015-07-22 | 福州瑞芯微电子有限公司 | Image signal color enhancement method and image signal color enhancement device |
CN106897981A (en) * | 2017-04-12 | 2017-06-27 | 湖南源信光电科技股份有限公司 | A kind of enhancement method of low-illumination image based on guiding filtering |
Non-Patent Citations (1)
Title |
---|
KAIMING HE: "Fast Guided Filter", 《COMPUTER SCIENCE》 * |
Also Published As
Publication number | Publication date |
---|---|
CN107527332B (en) | 2020-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107527332B (en) | Low-illumination image color retention enhancement method based on improved Retinex | |
CN109064426B (en) | Method and device for suppressing glare in low-illumination image and enhancing image | |
CN111986120A (en) | Low-illumination image enhancement optimization method based on frame accumulation and multi-scale Retinex | |
CN105976330B (en) | A kind of embedded greasy weather real time video image stabilization | |
Gao et al. | Sand-dust image restoration based on reversing the blue channel prior | |
CN108537756B (en) | Single image defogging method based on image fusion | |
Ma et al. | An effective fusion defogging approach for single sea fog image | |
Wang et al. | Variational single nighttime image haze removal with a gray haze-line prior | |
CN114331873B (en) | Non-uniform illumination color image correction method based on region division | |
CN108765336A (en) | Image defogging method based on dark bright primary colors priori with auto-adaptive parameter optimization | |
CN110298792B (en) | Low-illumination image enhancement and denoising method, system and computer equipment | |
CN112200746B (en) | Defogging method and equipment for foggy-day traffic scene image | |
CN110675351B (en) | Marine image processing method based on global brightness adaptive equalization | |
CN104318529A (en) | Method for processing low-illumination images shot in severe environment | |
CN109919859A (en) | A kind of Outdoor Scene image defogging Enhancement Method calculates equipment and its storage medium | |
CN112991222A (en) | Image haze removal processing method and system, computer equipment, terminal and application | |
Gao et al. | Sandstorm image enhancement based on YUV space | |
CN108648160B (en) | Underwater sea cucumber image defogging enhancement method and system | |
Xue et al. | Video image dehazing algorithm based on multi-scale retinex with color restoration | |
CN111127377B (en) | Weak light enhancement method based on multi-image fusion Retinex | |
CN115456905A (en) | Single image defogging method based on bright and dark region segmentation | |
CN115587945A (en) | High dynamic infrared image detail enhancement method, system and computer storage medium | |
Wen et al. | Autonomous robot navigation using Retinex algorithm for multiscale image adaptability in low-light environment | |
Xu et al. | A novel variational model for detail-preserving low-illumination image enhancement | |
CN109345479B (en) | Real-time preprocessing method and storage medium for video monitoring data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||