CN117274085A - Low-illumination image enhancement method and device - Google Patents


Info

Publication number
CN117274085A
Authority
CN
China
Prior art keywords
image
component
brightness
enhancement
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311204220.1A
Other languages
Chinese (zh)
Inventor
王文韫
舒晨洋
李寿科
李水生
贺雄英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University of Science and Technology
Original Assignee
Hunan University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University of Science and Technology filed Critical Hunan University of Science and Technology
Priority to CN202311204220.1A
Publication of CN117274085A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/90 — Image analysis; determination of colour characteristics
    • G06T5/10 — Image enhancement or restoration using non-spatial domain filtering
    • G06T5/40 — Image enhancement or restoration using histogram techniques
    • G06T7/13 — Image analysis; segmentation; edge detection
    • G06T7/44 — Image analysis; analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms


Abstract

The invention provides a low-illumination image enhancement method and device in the technical field of computer vision, comprising the following steps: separating the original image in HSV space into its hue, brightness, and saturation components; using the BM3D filtering algorithm as the center-surround function of the Retinex algorithm to estimate the illumination component and obtain the corresponding reflection component; enhancing the brightness component according to the illumination component; sharpening the reflection component using a Laplace convolution kernel; extracting the edge texture of the original image and recording the pixel coordinates corresponding to the edge texture; applying a sharpening operation with the Laplace convolution kernel at the edge positions in the reflection component; fusing the reflection component and the brightness component by weighting to form a new brightness component; and channel-fusing the hue, brightness, and saturation components into a new image that is converted back to RGB space. While enhancing the brightness of a low-illumination image, the method and device effectively mitigate detail loss, color distortion, and similar degradations.

Description

Low-illumination image enhancement method and device
Technical Field
The application relates to the technical field of computer vision, in particular to a low-illumination image enhancement method and device.
Background
With the rapid development of computer vision technology, image-based applications such as face recognition, surveillance, industrial production, and medical examination have become increasingly widespread. Images serve as the underlying data of vision research, and high-definition, high-quality image data is one of the most critical prerequisites for many algorithms. However, owing to factors such as ambient illuminance and the image acquisition equipment, captured images inevitably suffer quality problems such as blurred details, uneven illumination, and low contrast, which directly constrain subsequent computation, analysis, and application in the vision field. Research on low-illumination image enhancement therefore promotes the development of image information mining theory. Enhancing a low-illumination image, however, tends to introduce problems such as detail loss, color distortion, and increased noise.
Disclosure of Invention
The technical problem to be solved by the application is to provide a low-illumination image enhancement method and device aiming at the defects in the prior art.
A low-illumination image enhancement method, comprising:
separating the original image in HSV space into its hue component, brightness component, and saturation component;
using the BM3D filtering algorithm as the center-surround function of the Retinex algorithm to estimate the illumination component and obtain the corresponding reflection component;
enhancing the brightness component according to the illumination component;
sharpening the reflection component using a Laplace convolution kernel;
extracting the edge texture of the original image, and recording the pixel coordinates corresponding to the edge texture; applying a sharpening operation with the Laplace convolution kernel at the edge positions in the reflection component;
fusing the reflection component and the brightness component by weighting to form a new brightness component;
channel-fusing the hue component, the brightness component, and the saturation component into a new image, and converting the new image back to RGB space;
noise of the new image is eliminated.
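The claimed steps can be sketched end to end as follows. This is a minimal numpy illustration, not the patent's implementation: the BM3D center-surround is stood in for by a box blur, the Gamma enhancement by a fixed exponent, the sharpening step is omitted, and the fusion weights (0.7/0.3) and function names are assumptions.

```python
import numpy as np

def enhance_low_light(hsv):
    """Sketch of the claimed pipeline on an HSV float image in [0, 1].

    The component steps (BM3D surround, improved Gamma, Laplacian
    sharpening, guided filtering) are replaced by simple numpy
    stand-ins; names and weights are illustrative, not from the patent.
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    L = blur(v)                            # stand-in for BM3D: illumination estimate
    # Retinex model S = R * L, so R is recovered in the log domain.
    R = np.exp(np.log(v + 1e-6) - np.log(L + 1e-6))
    v_enh = np.clip(v ** 0.5, 0, 1)        # stand-in for Gamma enhancement of V
    R = np.clip(R, 0, 1)                   # sharpening of R omitted in this sketch
    v_new = np.clip(0.7 * v_enh + 0.3 * R, 0, 1)  # weighted fusion (weights assumed)
    return np.stack([h, s, v_new], axis=-1)

def blur(x, k=5):
    # Simple k-by-k mean filter with edge-replicated padding.
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    out = np.zeros_like(x)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)
```

On a uniformly dark input, the fused brightness channel rises well above the original, which is the qualitative behaviour the claims describe.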
Optionally, before the BM3D filtering algorithm is adopted as the center-surround function of the Retinex algorithm, the method further includes the step of: optimizing the brightness component using a contrast-limited adaptive histogram equalization (CLAHE) algorithm.
Optionally, using the BM3D filtering algorithm as the center-surround function of the Retinex algorithm to estimate the illumination component includes the steps of:
In the first stage, a given pixel x appears in several similar blocks; the repeated estimates of x are weighted, averaged, and aggregated to obtain the basic estimate image required by the second stage. The basic estimate formula is:

$$y^{\mathrm{basic}}(x)=\frac{\sum_{x_R}\sum_{x_m}\omega_{h}^{x_R}\,\hat{Y}_{x_m}^{x_R}(x)}{\sum_{x_R}\sum_{x_m}\omega_{h}^{x_R}\,\chi_{x_m}(x)},\qquad \hat{Y}^{x_R}=T_{3D}^{-1}\bigl(\gamma\bigl(T_{3D}(Z_{x_R})\bigr)\bigr),$$

where $y^{\mathrm{basic}}(x)$ is the basic estimate image of the basic estimation filtering stage; $\hat{Y}^{x_R}$ is the estimate of a similar group, obtained by transforming the similar 3D array found with the similar-block metric; $\gamma$ is the threshold filtering operation; $T_{3D}$ and $T_{3D}^{-1}$ are the 3D transform and 3D inverse transform, respectively; $\chi_{x_m}$ is the characteristic (indicator) function of the similar blocks; and $\omega_h$ is the estimate weight, computed from the number $N_h$ of nonzero coefficients remaining after hard-threshold shrinkage of the similar group:

$$\omega_{h}^{x_R}=\begin{cases}\dfrac{1}{\sigma^{2}N_h}, & N_h\ge 1,\\[4pt] 1, & \text{otherwise.}\end{cases}$$
In the second stage, the basic estimate image is partitioned again and estimated block by block; finally all reference blocks from the previous stage are weighted and aggregated into the final estimate image $y^{\mathrm{final}}(x)$:

$$y^{\mathrm{final}}(x)=\frac{\sum_{x_R}\sum_{x_m}\omega_{\mathrm{wie}}^{x_R}\,\hat{Y}_{\mathrm{wie},x_m}^{x_R}(x)}{\sum_{x_R}\sum_{x_m}\omega_{\mathrm{wie}}^{x_R}\,\chi_{x_m}(x)},\qquad \hat{Y}_{\mathrm{wie}}^{x_R}=T_{3D}^{-1}\bigl(W_{x_R}\cdot T_{3D}(T_h(Z_{x_R}))\bigr),$$

where $T'_h$ and $T_h$ are the 3D matrix of the basic estimate and of the original picture, respectively; $\hat{Y}_{\mathrm{wie}}^{x_R}$ is the similar-group estimate of the Wiener filtering; $T_{3D}$ and $T_{3D}^{-1}$ are the 3D linear transform and its inverse; $W_{x_R}$ is the Wiener shrinkage coefficient,

$$W_{x_R}=\frac{\bigl|T_{3D}(T'_h(Z_{x_R}))\bigr|^{2}}{\bigl|T_{3D}(T'_h(Z_{x_R}))\bigr|^{2}+\sigma^{2}},$$

and $\omega_{\mathrm{wie}}^{x_R}=\sigma^{-2}\lVert W_{x_R}\rVert_{2}^{-2}$ is the Wiener filtering weight, obtained from the noise standard deviation $\sigma$ and the Wiener shrinkage coefficient.
Optionally, enhancing the brightness component in accordance with the illumination component comprises the steps of:
judging whether a bright area and a dark area exist in the image according to the illumination component;
When the image contains both a bright region and a dark region, an improved Gamma transformation function enhances the dark part of the brightness component, while the brightness of the bright part is kept or pulled down according to the bright-part information, so that dark-part brightness is enhanced and bright-part detail is kept intact; wherein the improved Gamma transformation function is:
In the above formula, δ is the value at which f(x) = 0, and a and b are adjustment parameters of the function that jointly control the amplitude and range of the improved Gamma function's enhancement of a pixel; a and b are solved as follows:
where m is the normalized mean of the pixels in the illumination component whose values are below 97.
Optionally, the method further comprises the steps of:
when the image contains only a dark region, the brightness component is enhanced using the adaptive Gamma transformation;
the enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, where f(x) is the output image, x is the normalized input image, and γ is the image enhancement parameter;
the formula for automatically computing and selecting the γ parameter is as follows:
where N is the mean of the gray-scale version of the image to be enhanced, n is that mean after normalization, and γ is the image enhancement parameter.
Optionally, judging whether bright and dark regions exist in the image according to the illumination component specifically includes:
dividing the gray values of all pixels of the illumination component into 16 levels;
when more than 10% of the pixels fall in the first four levels, judging that the image contains a dark region;
when more than 10% of the pixels fall in the last four levels, judging that the image contains a bright region.
Optionally, the extracting the edge texture of the original image specifically includes:
the edge texture of the original image is extracted using Gabor filters and the Canny algorithm.
Optionally, eliminating the noise of the new image specifically comprises:
removing noise in the new image by guided filtering, using the Laplace-kernel-sharpened image as the guide map.
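The guided filtering named in this clause can be sketched with the classic box-filter formulation of He et al.'s guided filter; the radius and regularization values below are assumptions, and `guide` would be the Laplace-sharpened image per the claim.

```python
import numpy as np

def box(x, r):
    # Mean filter of radius r with edge-replicated padding.
    k = 2 * r + 1
    p = np.pad(x, r, mode="edge")
    out = np.zeros_like(x, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def guided_filter(guide, src, r=4, eps=1e-3):
    """Guided filter: smooth `src` while following the edges of `guide`.

    Both inputs are float images in [0, 1]; r and eps are assumed values,
    not parameters taken from the patent.
    """
    mean_I, mean_p = box(guide, r), box(src, r)
    corr_Ip, corr_II = box(guide * src, r), box(guide * guide, r)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # per-window linear coefficients
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)
```

Because the output is a local linear function of the guide, edges present in the sharpened guide survive while noise in flat regions of `src` is averaged away.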
Optionally, before channel fusing the hue component, the brightness component, and the saturation component into the new image, the method further includes:
correcting the saturation component using adaptive Gamma, with enhancement parameter γ = 0.85 + 0.2g; the enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, where f(x) is the output image, x is the normalized input image, γ is the image enhancement parameter, and g is the Gamma-transformation adjustment parameter.
On the other hand, the application also provides a low-illumination image enhancement device, which comprises:
the separation module, used for separating the original image in HSV space to obtain its hue component, brightness component, and saturation component;
the estimation module, used for estimating the illumination component with the BM3D filtering algorithm as the center-surround function of the Retinex algorithm, and obtaining the corresponding reflection component;
the enhancement module is used for enhancing the brightness component according to the illumination component;
the sharpening module, used for sharpening the reflection component using the Laplace convolution kernel;
the extraction module, used for extracting the edge texture of the original image and recording the pixel coordinates corresponding to the edge texture, and for applying a sharpening operation with the Laplace convolution kernel at the edge positions in the reflection component;
the fusion module is used for carrying out weighted fusion on the reflection component and the brightness component to form a new brightness component;
the conversion module is used for fusing the hue component, the brightness component and the saturation component into a new image through channels and converting the new image back to an RGB space;
and the elimination module is used for eliminating noise of the new image.
The low-illumination image enhancement method is based on an improved Retinex algorithm in HSV color space: the BM3D filtering algorithm serves as the center-surround function of the Retinex algorithm to extract the illumination component of the low-illumination image and derive the reflection component from it; the brightness component is enhanced according to the illumination component; the reflection component and its textures are sharpened; and the reflection and brightness components are fused by weighting. The brightness of the low-illumination image is thereby enhanced while detail loss, color distortion, and similar degradations are effectively mitigated.
Drawings
Fig. 1 is the first flowchart of a low-illumination image enhancement method in an embodiment of the present application.
Fig. 2 is the second flowchart of a low-illumination image enhancement method in an embodiment of the present application.
Fig. 3 is the third flowchart of a low-illumination image enhancement method in an embodiment of the present application.
Fig. 4 is an image component diagram in an embodiment of the present application.
Fig. 5 is an overall diagram of Gamma transformation in an embodiment of the present application.
FIG. 6 is a partial diagram of Gamma transformation in an embodiment of the present application.
Fig. 7 is a schematic diagram of an enhanced brightness component of an original image in an embodiment of the present application.
Fig. 8 is a reflection component after sharpening in an embodiment of the present application.
Fig. 9 is a brightness component diagram after edge detection and weighted fusion in an embodiment of the present application.
Fig. 10 is an overall flowchart of the modified Retinex algorithm in an embodiment of the present application.
Fig. 11 is a comparison of four sets of experimental images in the examples of the present application.
Fig. 12 is an index analysis chart of an experiment in the example of the present application.
Fig. 13 is a block diagram of a low-illumination image enhancement apparatus in an embodiment of the present application.
Detailed Description
The following are specific embodiments of the present application and the technical solutions of the present application are further described with reference to the accompanying drawings, but the present application is not limited to these embodiments. In the following description, specific details such as specific configurations and components are provided merely to facilitate a thorough understanding of embodiments of the present application. It will therefore be apparent to those skilled in the art that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
In addition, embodiments and features of embodiments in the present application may be combined with each other without conflict.
Referring to fig. 1, an embodiment of the present application provides a low-illumination image enhancement method that enhances the brightness of a low-illumination image while effectively mitigating detail loss, color distortion, and similar degradations. The method includes steps S101 to S108, described below with reference to the drawings.
Step S101, separating the original image in HSV space to obtain its hue component, brightness component, and saturation component.
Specifically, each color in the HSV space is represented by hue, saturation and brightness, and parameters of the color in the HSV space are hue H, saturation S and brightness V, respectively. The three channels of hue H, saturation S and brightness V of the original image are separated in HSV space.
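A plain-numpy sketch of this channel separation follows; in practice a library routine such as OpenCV's `cvtColor` would normally be used, and the sketch assumes a float RGB image in [0, 1].

```python
import numpy as np

def rgb_to_hsv_components(rgb):
    """Split an RGB float image in [0, 1] into H, S, V planes (all in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                       # brightness V = max(R, G, B)
    c = v - rgb.min(axis=-1)                   # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0)  # saturation S
    # Hue H, piecewise by which channel attains the maximum.
    h = np.zeros_like(v)
    safe_c = np.maximum(c, 1e-12)
    h = np.where(v == r, ((g - b) / safe_c) % 6, h)
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c == 0, 0, h) / 6.0           # normalize to [0, 1]
    return h, s, v
```

For a pure green pixel this yields H = 1/3, S = 1, V = 1, matching the usual HSV convention.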
Step S102, a BM3D filtering algorithm is adopted as a center surrounding function of the Retinex algorithm, the illumination component is estimated, and the corresponding reflection component is obtained.
In particular, Retinex theory, proposed on the basis of color constancy and illumination invariance, holds that the color and brightness of an object perceived by the human visual system (HVS) depend on the reflective properties of the object's surface, so an image can be regarded as composed of the object's own reflection component R and the illumination component L. The mathematical model constructed on this basis is as follows:
S(x,y)=R(x,y)×L(x,y)
Where S (x, y) represents the original image, R (x, y) is the reflection component, and L (x, y) is the illumination component.
log R(x,y) = log S(x,y) − log[G(x,y) * S(x,y)]
In the above formula, G(x,y) is a Gaussian function with scale parameter σ; convolving S(x,y) with G(x,y) estimates L(x,y); taking logarithms on both sides of the model separates R(x,y) from L(x,y); and the result is quantized to pixel values in the range [0, 255] according to formula (3):
R(x,y) = (Value − Min) / (Max − Min) × 255
where Value is the current pixel value, and Max and Min are the maximum and minimum pixel values in the image. The Retinex algorithm uses R(x,y) as the enhanced image, but this enhancement cannot preserve both the details and the color information of the image.
In an embodiment of the present application, a BM3D filtering algorithm is used as a center-surround function of the Retinex algorithm to estimate the illumination component, and the method includes a first stage and a second stage.
Specifically, the block-matching and 3D filtering (BM3D) algorithm is a 3D filtering algorithm built on the non-local means (NLM) filter. An image usually contains many similar, repeated structures; the BM3D algorithm stacks similar structure blocks into a 3D matrix by block matching, performs collaborative filtering, and aggregates the result back to the positions of the original image blocks. The metric for finding similar blocks is:

$$d(Z_{x_R},Z_{x})=\frac{\bigl\lVert \gamma'\bigl(T_{2D}(Z_{x_R})\bigr)-\gamma'\bigl(T_{2D}(Z_{x})\bigr)\bigr\rVert_{2}^{2}}{M^{2}},$$

where $Z_x$ is the sliding window during the search, $Z_{x_R}$ is the block at a reference point $x_R$, $\gamma'$ is the hard-threshold filtering operation that sets values below the threshold to 0, $T_{2D}$ is the two-dimensional discrete cosine transform, and $M$ is the image block size.
The BM3D implementation process mainly comprises two stages of basic estimation filtering and final estimation filtering, wherein the two stages comprise similar block matching estimation, grouping, 3D collaborative filtering, aggregation weighting and other steps, and the difference is that the first stage collaborative filtering uses 3D collaborative hard threshold filtering, and the second stage uses 3D collaborative wiener filtering.
In the first stage, a given pixel x appears in several similar blocks; the repeated estimates of x are weighted, averaged, and aggregated to obtain the basic estimate image required by the second stage. The basic estimate formula is:

$$y^{\mathrm{basic}}(x)=\frac{\sum_{x_R}\sum_{x_m}\omega_{h}^{x_R}\,\hat{Y}_{x_m}^{x_R}(x)}{\sum_{x_R}\sum_{x_m}\omega_{h}^{x_R}\,\chi_{x_m}(x)},\qquad \hat{Y}^{x_R}=T_{3D}^{-1}\bigl(\gamma\bigl(T_{3D}(Z_{x_R})\bigr)\bigr),$$

where $y^{\mathrm{basic}}(x)$ is the basic estimate image of the basic estimation filtering stage; $\hat{Y}^{x_R}$ is the estimate of a similar group, obtained by transforming the similar 3D array found with the similar-block metric; $\gamma$ is the threshold filtering operation; $T_{3D}$ and $T_{3D}^{-1}$ are the 3D transform and 3D inverse transform, respectively; $\chi_{x_m}$ is the characteristic (indicator) function of the similar blocks; and $\omega_h$ is the estimate weight, computed from the number $N_h$ of nonzero coefficients remaining after hard-threshold shrinkage of the similar group:

$$\omega_{h}^{x_R}=\begin{cases}\dfrac{1}{\sigma^{2}N_h}, & N_h\ge 1,\\[4pt] 1, & \text{otherwise.}\end{cases}$$
In the second stage, the basic estimate image is partitioned again and estimated block by block; finally all reference blocks from the previous stage are weighted and aggregated into the final estimate image $y^{\mathrm{final}}(x)$:

$$y^{\mathrm{final}}(x)=\frac{\sum_{x_R}\sum_{x_m}\omega_{\mathrm{wie}}^{x_R}\,\hat{Y}_{\mathrm{wie},x_m}^{x_R}(x)}{\sum_{x_R}\sum_{x_m}\omega_{\mathrm{wie}}^{x_R}\,\chi_{x_m}(x)},\qquad \hat{Y}_{\mathrm{wie}}^{x_R}=T_{3D}^{-1}\bigl(W_{x_R}\cdot T_{3D}(T_h(Z_{x_R}))\bigr),$$

where $T'_h$ and $T_h$ are the 3D matrix of the basic estimate and of the original picture, respectively; $\hat{Y}_{\mathrm{wie}}^{x_R}$ is the similar-group estimate of the Wiener filtering; $T_{3D}$ and $T_{3D}^{-1}$ are the 3D linear transform and its inverse; $W_{x_R}$ is the Wiener shrinkage coefficient,

$$W_{x_R}=\frac{\bigl|T_{3D}(T'_h(Z_{x_R}))\bigr|^{2}}{\bigl|T_{3D}(T'_h(Z_{x_R}))\bigr|^{2}+\sigma^{2}},$$

and $\omega_{\mathrm{wie}}^{x_R}=\sigma^{-2}\lVert W_{x_R}\rVert_{2}^{-2}$ is the Wiener filtering weight, obtained from the noise standard deviation $\sigma$ and the Wiener shrinkage coefficient.
The Gaussian filtering used by the classical Retinex algorithm operates only in the image's spatial domain. The BM3D algorithm, by contrast, also applies a 3D orthogonal transform and filters the image in the transform domain, fully exploiting the correlation inside image blocks and between image blocks, so it can denoise the image while fully retaining its distinctive structure and detail information.
In an embodiment of the present application, before the BM3D filtering algorithm is used as the center-surround function of the Retinex algorithm, the method further includes the step of: optimizing the brightness component using a contrast-limited adaptive histogram equalization (CLAHE) algorithm.
Specifically, the HSV color space model is a nonlinear transformation of the RGB model: the original RGB image is converted into HSV space, and its three channels H, S, V are separated to obtain the brightness component V of the original image. CLAHE is applied to equalize the brightness component and raise the image contrast, so that the BM3D algorithm retains more detail information when extracting the illumination component; the brightness components before and after correction are shown in fig. 4(b) and 4(c), respectively. The noise standard deviation σ of the BM3D algorithm is then set to 9, the hard threshold to 24.3, and the block-similarity thresholds of the two stages to 1500 and 800, respectively. After the illumination component is extracted, the reflection component is computed in the logarithmic domain according to the Retinex principle and quantized to the pixel range [0, 255], giving the reflection component shown in fig. 4(d).
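As a rough illustration of the CLAHE preprocessing described above, the following is a simplified, global contrast-limited equalization. Real CLAHE works on tiles with bilinear blending, so this is only a sketch, and the clip limit expressed as a pixel fraction is an assumption.

```python
import numpy as np

def clipped_equalize(v, bins=256, clip_limit=0.01):
    """Global contrast-limited histogram equalization of a uint8 plane.

    A simplified stand-in for CLAHE; clip_limit is a fraction of the
    total pixel count, an assumed convention.
    """
    hist, _ = np.histogram(v, bins=bins, range=(0, 256))
    limit = max(1, int(clip_limit * v.size))
    excess = np.maximum(hist - limit, 0).sum()       # clip histogram peaks...
    hist = np.minimum(hist, limit) + excess // bins  # ...and redistribute evenly
    cdf = hist.cumsum()
    lut = np.round(cdf / cdf[-1] * 255).astype(np.uint8)
    return lut[v]
```

Clipping the histogram before building the lookup table bounds the local contrast gain, which is what keeps CLAHE from amplifying noise the way plain histogram equalization does.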
Step S103, enhancing the brightness component according to the illumination component.
Referring to fig. 2, in a specific embodiment, step S103, enhancing the brightness component according to the illumination component includes steps S1031 to S1033.
Step S1031, judging whether a bright area and a dark area exist in the image according to the illumination component.
Referring to fig. 3, in a specific embodiment, step S1031 includes steps S1031a to S1031c.
In step S1031a, the gradation values of all pixels of the illumination component are divided into 16 levels.
In step S1031b, when more than 10% of the pixels fall in the first four levels, it is determined that the image contains a dark region.
In step S1031c, when more than 10% of the pixels fall in the last four levels, it is determined that the image contains a bright region.
Specifically, the gray values of the illumination component L are divided into 16 levels; when the first four levels of L hold more than 10% of the pixels, the image is judged to contain a dark region, and when the last four levels hold more than 10%, it is judged to contain a bright region.
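The 16-level bright/dark test just described can be written directly:

```python
import numpy as np

def classify_regions(L, levels=16, ratio=0.10):
    """Step S1031: bucket the illumination component (uint8) into 16 gray
    levels; >10% of pixels in the first four levels signals a dark region,
    >10% in the last four levels signals a bright region."""
    hist, _ = np.histogram(L, bins=levels, range=(0, 256))
    frac = hist / L.size
    has_dark = frac[:4].sum() > ratio
    has_bright = frac[-4:].sum() > ratio
    return has_dark, has_bright
```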
Step S1032, when the image contains both a bright region and a dark region, the improved Gamma transformation function is used to enhance the dark part of the brightness component, while the brightness of the bright part is kept or pulled down according to the bright-part information, so that dark-part brightness is enhanced and bright-part detail is kept intact; wherein the improved Gamma transformation function is:
In the above formula, δ is the value at which f(x) = 0, and a and b are adjustment parameters of the function that jointly control the amplitude and range of the improved Gamma function's enhancement of a pixel; a and b are solved as follows:
where m is the normalized mean of the pixels in the illumination component whose values are below 97.
Step S1033, when the image contains only a dark region, the adaptive Gamma transformation is used to enhance the brightness component.
The enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, where f(x) is the output image, x is the normalized input image, and γ is the image enhancement parameter.
The formula for automatically computing and selecting the γ parameter is as follows:
where N is the mean of the gray-scale version of the image to be enhanced, n is that mean after normalization, and γ is the image enhancement parameter.
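The automatic γ-selection formula itself did not survive extraction. The sketch below therefore uses γ = log(0.5)/log(n), one common realisation of the "mean of a well-exposed image ≈ 0.5" rationale stated later in this document; it is an assumption, not necessarily the patent's exact formula.

```python
import numpy as np

def adaptive_gamma(img):
    """Adaptive Gamma: f(x) = x ** gamma on a normalized uint8 image.

    gamma = log(0.5) / log(n), with n the normalized mean, is an assumed
    stand-in that pushes the mean brightness toward 0.5.
    """
    x = img.astype(np.float64) / 255.0
    n = x.mean()
    gamma = np.log(0.5) / np.log(max(min(n, 0.999), 1e-3))  # clamps avoid log(0)
    return np.clip(x ** gamma, 0, 1)
```

By construction a uniform image is mapped so its mean lands at 0.5, which is the behaviour the heuristic targets.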
Specifically, the category of the image is judged from the brightness information of L, and the parameter g is computed. When the image contains only a dark region, the algorithm enhances the brightness component with the adaptive Gamma, taking g as the γ value. When the image contains both bright and dark information, a and b are computed from g; if the image is not overexposed, the improved Gamma enhances the dark part of V while maintaining bright-part detail, and if it is overexposed, the brightness of the bright part is reduced.
Gamma transformation has a remarkable enhancement effect on low-illumination images. However, when backlighting leaves both highlight and lowlight regions in one image, traditional Gamma transformation must choose: either brighten the lowlights at the cost of washing out highlight detail, or preserve the highlights and leave the lowlight region darker, losing its detail. Adaptive Gamma transformation builds on the heuristic that the mean of all normalized pixels of a well-exposed image is about 0.5 and provides a method for automatically computing and selecting the Gamma parameter. Although adaptive Gamma automates the choice of the enhancement parameter, it does not solve this fundamental limitation of the Gamma transformation.
Further, the Gamma transformation method is improved so that the highlight part of the image can be adjusted while the dark part is enhanced, enhancing the lowlight and highlight parts simultaneously as far as possible. When both bright and dark information exist in the image, the improved Gamma transformation function enhances the dark part while keeping or pulling down the brightness of the brightness component according to the bright-part information, so that dark-part brightness is enhanced while bright-part detail is kept intact.
The backlit low-illumination image in the SYN data set was enhanced with both the traditional and the improved Gamma transformations; the corrections and gray histograms are shown in fig. 5. As fig. 5 shows, both methods enhance the dark part of the image markedly, distributing its pixels roughly uniformly over higher levels and shifting the main dark-region peak of the gray values from about 10 to about 50. However, because the Gamma transformation can only adjust the image in one direction, fig. 5(e, f) show that after traditional Gamma enhancement the concentration of bright-part pixels moves to an even higher pixel level, making the bright part brighter still.
To see the detail change in the bright part more clearly, a portion of the bright region is cropped out in fig. 6(a–d) and the corresponding gray histograms are computed in fig. 6(e–h). Direct inspection of the cropped bright-part pictures shows that traditional Gamma transformation loses bright-part detail: on the histogram, the concentrated gray-value region shifts from 210–255 to 230–255. The improved Gamma transformation applies only a small correction to the bright part; its concentrated gray-value region barely changes, and the detail remains intact. Comparison of fig. 5(a, c, e, g) shows that the adaptive Gamma transformation adds very little enhancement to the original image. This follows from its automatic selection of the γ value: because the dark and bright pixels are both concentrated and the pixel mean is close to 0.5, the automatically computed γ is unsuitable for this image. From fig. 5(d, h) and fig. 6(d, h), the improved Gamma transformation enhances both the dark and the bright parts: the dark part is enhanced more strongly, its brightness greatly improved, with pixels spread from 0–50 before enhancement to 50–150 after, while the bright part is only slightly enhanced, leaving the original pixel distribution better balanced. Overall, compared with traditional Gamma and adaptive Gamma, the improved Gamma transformation enhances images containing both dark and bright parts more evenly and preserves detail more completely.
Further, the brightness component of the original image is enhanced using the improved Gamma transformation according to the g value calculated from the illumination component; the enhancement result is shown in fig. 7.
Step S104, the reflection component is sharpened by using the Laplace convolution kernel.
Specifically, image sharpening compensates for image contours and can enhance edges, textures, certain linear target elements, and gray-level transitions in an image. Image sharpening can generally be divided into two steps: edge detection and edge enhancement. The edge detection result directly determines the sharpening quality; if it contains a large number of false edges, the textures in the image cannot be enhanced, many noise points appear, and image quality degrades. The Unsharp Masking (UM) algorithm is a common sharpening method that combines Gaussian blur with the Laplacian operator: a Gaussian smoother filters out the low-frequency part of the image, and a Laplacian convolution kernel then enhances the high-frequency image, achieving the sharpening effect. The present method uses the BM3D algorithm to extract the low-frequency part of the image's brightness component and computes the high-frequency image in the logarithmic domain, i.e., the reflection component solved by the improved Retinex algorithm; the reflection component is then sharpened with the Laplacian convolution kernel, and the sharpened reflection component is shown in fig. 8. As fig. 8 shows, the image edges are greatly enhanced, but because the Laplacian operator sharpens the whole image and the edge detection result contains many false edges, more noise points appear in the image; a multi-weight fusion scheme is therefore adopted to reduce the negative effects of sharpening.
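The unsharp-masking idea can be sketched in a few lines of numpy (a box blur stands in for the Gaussian/BM3D low-pass stage; the kernel size, strength, and demo image are illustrative assumptions):

```python
import numpy as np

def box_blur(img, r=2):
    """Simple mean filter via integral images -- a stand-in low-pass."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def laplacian_sharpen(img, strength=1.0):
    """Sharpen a float image in [0, 1] by subtracting its 4-neighbour Laplacian."""
    p = np.pad(img, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img
    return np.clip(img - strength * lap, 0.0, 1.0)

def unsharp_mask(img, r=2, k=1.5):
    """UM: add back amplified high-frequency detail (img minus its low-pass)."""
    low = box_blur(img, r)
    return np.clip(img + k * (img - low), 0.0, 1.0)

# Demo: a vertical step edge gains local over/undershoot after UM.
img = np.full((20, 20), 0.3)
img[:, 10:] = 0.7
sharp = unsharp_mask(img)
```

Note the characteristic overshoot on both sides of the edge; this is exactly the contour compensation described in the text, and also why false edges amplify noise.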
Step S105, extracting the edge texture of the original image and recording the pixel coordinates corresponding to the edge texture; the edge positions in the reflection component are then sharpened using the Laplacian convolution kernel.
Step S106, the reflection component and the brightness component are weighted and fused to form a new brightness component.
In an embodiment of the present application, in step S105, extracting the edge texture of the original image specifically includes: extracting the original texture using Gabor filters and the Canny algorithm.
Because a low fusion weight on the sharpened image weakens the enhancement of edge textures, the Gabor filter and the Canny edge detection algorithm are combined to extract the edges of the original image. The Gabor transform is a windowed short-time Fourier transform; it is extremely robust and can extract the edge features of an image in the frequency domain. The Canny edge detection algorithm is a multi-stage detection algorithm that performs very well on image edges. After the edge texture of the image is extracted, the Laplacian convolution kernel sharpens only the extracted edge positions, and finally the reflection component and the brightness component are fused with weights. The edge detection map and the enhanced reflection component are shown in fig. 9.
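Since the method pairs a Gabor filter bank with Canny, here is a compact numpy sketch of the Gabor side; a simple thresholded response union stands in for the Canny stage (in practice cv2.Canny would supply a second mask OR-ed in the same way), and all kernel parameters are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(ksize=11, sigma=3.0, theta=0.0, lambd=6.0, psi=0.0, gamma=0.5):
    """Real part of a Gabor filter tuned to orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd + psi)

def conv2(img, k):
    """Brute-force 2-D convolution with edge padding (fine for small kernels)."""
    r = k.shape[0] // 2
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def edge_mask(img, n_orient=4, thresh=0.5):
    """Union of thresholded Gabor responses over several orientations."""
    resp = np.zeros_like(img, dtype=np.float64)
    for t in np.linspace(0, np.pi, n_orient, endpoint=False):
        resp = np.maximum(resp, np.abs(conv2(img, gabor_kernel(theta=t))))
    return resp > thresh * resp.max()

# Demo: a vertical step edge should be flagged near the boundary only.
img = np.zeros((30, 30))
img[:, 15:] = 1.0
mask = edge_mask(img)
```

The resulting boolean mask marks the edge-pixel coordinates that the method records and later re-sharpens in the reflection component.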
Step S107, the hue component, the brightness component, and the saturation component are channel-fused into a new image, and converted back to the RGB space.
Step S108, eliminating noise of the new image.
In an embodiment of the present application, the noise of the new image is eliminated, specifically: noise in the new image is removed using the Laplacian-sharpened image as the guidance map for guided filtering.
Specifically, the enhanced reflection component and illumination component are recombined into the brightness component V, the saturation channel S and hue channel H are fused back in, and the image is converted back to the RGB channels. Subsequently, the noise of the image is eliminated using the Laplacian-sharpened image as the guidance map for guided filtering. Guided filtering is an adaptive-weight filtering method with outstanding performance in smoothing images while preserving boundaries, and it can effectively remove the noise introduced into the image by enhancement and sharpening.
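The guided-filtering step can be sketched with a compact numpy implementation of He et al.'s guided filter (the radius and eps values are assumptions; in the method, the Laplacian-sharpened image serves as the guide I and the recombined image as the input p):

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1) x (2r+1) window via integral images, edge-padded."""
    k = 2 * r + 1
    p = np.pad(a, r, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p, guided by I (per-window linear model)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)       # linear coefficient per window
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

# Demo: self-guided filtering of a noisy flat patch suppresses the noise.
rng = np.random.default_rng(1)
noisy = 0.5 + 0.1 * rng.standard_normal((32, 32))
smooth = guided_filter(noisy, noisy, r=4, eps=0.1)

flat = np.full((16, 16), 0.5)
flat_out = guided_filter(flat, flat, r=3, eps=1e-3)   # constants pass through
```

Where variance in the guide is low the filter averages aggressively; near guide edges a approaches 1 and structure is kept, which is why a sharpened guide preserves the enhanced edges while denoising.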
In an embodiment of the present application, before channel-fusing the hue component, the brightness component, and the saturation component into the new image, the method further includes: correcting the saturation component using the adaptive Gamma. The enhancement parameter γ is γ = 0.85 + 0.2g, and the enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, wherein f(x) is the output image, x is the image after normalization, γ is the image enhancement parameter, and g is the Gamma-transform adjustment parameter.
In the embodiments of the present application, the Retinex algorithm performs well at preserving color perception and is therefore favored for low-quality image enhancement; many improved variants have been developed, mainly combining S-curve functions, color-space conversion, and improved filtering. The flow of the improved algorithm for enhancing a low-illumination image is shown in fig. 10. The three channels of hue H, saturation S, and brightness V are separated in HSV space; V is equalized with CLAHE, then the illumination component L of the image is estimated with the BM3D algorithm and the reflection component R is obtained. To make full use of the improved algorithm, the image class is analyzed after the L component is extracted: the gray values of L are divided into 16 levels; when the first four levels hold more than 10% of the pixels, the image is judged to contain a dark region, and when the last four levels hold more than 10%, a bright region. The standard deviation of the pixel counts in the last four levels is then computed; when it exceeds twice that of the first four levels, the bright portion of the image is judged to be overexposed. Different enhancement methods are then selected for the different cases.
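The image-class decision described above can be sketched directly (the 16 levels and 10% thresholds follow the text; the exact form of the overexposure comparison is my reading of the translated passage and is marked as an assumption in the code):

```python
import numpy as np

def classify_illumination(L):
    """Divide the illumination component's gray values into 16 levels and flag
    dark regions, bright regions, and bright-part overexposure."""
    counts, _ = np.histogram(L, bins=16, range=(0, 256))
    frac = counts / counts.sum()
    has_dark = frac[:4].sum() > 0.10      # first four levels hold >10% of pixels
    has_bright = frac[-4:].sum() > 0.10   # last four levels hold >10% of pixels
    # Overexposure: spread of the last-four-level counts dominates the first four
    # (one reading of the translated criterion -- an assumption).
    overexposed = bool(has_bright and counts[-4:].std() > 2 * max(counts[:4].std(), 1))
    return bool(has_dark), bool(has_bright), overexposed

dark_img = np.full((64, 64), 8, dtype=np.uint8)      # every pixel in level 0
bright_img = np.full((64, 64), 250, dtype=np.uint8)  # every pixel in level 15
cls_dark = classify_illumination(dark_img)
cls_bright = classify_illumination(bright_img)
```

The returned flags then steer the choice between adaptive Gamma (dark-only images) and the improved Gamma with bright-part handling.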
The image type is judged from the brightness information of L and the parameter g is calculated. When the image has only a dark portion, the brightness component is enhanced with the adaptive Gamma, and g is the Gamma value. When the image contains both bright and dark information, the values of a and b are calculated from g; if the image is not overexposed, the dark portion of V is enhanced with the improved Gamma while the bright-portion detail is maintained, and if the image is overexposed, the bright-portion brightness is pulled down. Meanwhile, the original image texture is extracted with the Gabor filter and the Canny algorithm and its texture pixel coordinates are recorded; the reflection component is sharpened with the Laplacian operator, the pixels at the texture positions in the reflection component are sharpened again, and the enhanced V and R are fused with weights of 0.7 and 0.3 to form the new V. For the saturation component S, the application designs an adaptive Gamma enhancement whose parameter is γ = 0.85 + 0.2g, so the lower the image brightness, the stronger the saturation enhancement. Finally, the hue component H, the enhanced brightness component V, and the corrected saturation component S are channel-fused and converted back to RGB space, and image noise is eliminated with guided filtering.
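The final recombination step can be sketched as follows (the 0.7/0.3 fusion weights and the saturation rule γ = 0.85 + 0.2g come from the text; the sample arrays and the g value are assumptions):

```python
import numpy as np

def fuse_luminance(V_enh, R_sharp, w_v=0.7, w_r=0.3):
    """Weighted fusion of the enhanced brightness and sharpened reflection components."""
    return np.clip(w_v * V_enh + w_r * R_sharp, 0.0, 1.0)

def correct_saturation(S, g):
    """Adaptive Gamma on the saturation channel: gamma = 0.85 + 0.2 g, so a lower
    g gives a gamma further below 1 and hence stronger saturation enhancement."""
    gamma = 0.85 + 0.2 * g
    return np.clip(S, 0.0, 1.0) ** gamma

V = np.full((4, 4), 0.5)
R = np.full((4, 4), 0.9)
V_new = fuse_luminance(V, R)                               # 0.7*0.5 + 0.3*0.9 = 0.62
S_new = correct_saturation(np.full((4, 4), 0.25), g=0.3)   # gamma = 0.91
```

After this, H, V_new, and S_new are stacked as the HSV channels and converted back to RGB before the guided-filtering denoise.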
Thus, to address the detail loss, color distortion, noise amplification, and similar problems caused by enhancing low-illumination images, the application provides an improved Retinex algorithm based on the HSV color space. The algorithm adopts BM3D filtering as the center-surround function of the Retinex algorithm to extract the illumination component of the low-illumination image and obtain the reflection component; correction parameters are calculated from the illumination component, the brightness and saturation components are corrected with the improved Gamma transformation, and the reflection component and its textures are sharpened. The brightness of the low-illumination image is thereby enhanced while detail loss, color distortion, and similar defects are effectively reduced.
To verify the effectiveness and practicability of the method, enhancement experiments were performed on image data from the low-illumination SYN and LOL data sets, which contain rich real-scene images and fully meet the experimental requirements. The LOL data set comprises 500 low-light/normal-light image pairs; the original images were resized to 400×600 and converted to the portable network graphics format. SYN is a synthetic data set produced by a team at Peking University from 1000 images randomly extracted from the RAISE data set. 200 multi-scene low-illumination images were randomly drawn from the data sets as experimental data: one hundred images containing only dark information and one hundred in which bright and dark information coexist. The image scenes include indoor, outdoor, natural, and urban landscapes, allowing the effectiveness of the improved method to be verified from different angles. The extracted low-illumination images were enhanced with the ACE, ALTM, LIME, and MSRCR algorithms alongside the present method. The experiments were run on a test platform with PyCharm 2021, Windows 10, an Intel Core i9-10900 CPU, and 32 GB of RAM; the enhancement results on the experimental data are shown in fig. 11.
Specifically, all five algorithms improve the brightness of images across the different scenes to some extent. The ACE algorithm performs well in the first scene but poorly in the second: the dark-portion brightness is improved only to a limited extent, the bright portion is over-enhanced, and the image looks dim overall; in the fourth scene the enhanced colors show slight distortion. The ALTM and LIME algorithms behave similarly and work better on wholly dark images; when bright and dark information coexist in the image to be enhanced, ALTM's enhancement is not obvious, as in the second scene, while LIME over-enhances the bright-portion information and loses detail — in the second scene, image contrast improves but some dark-portion information is not visibly enhanced. The MSRCR algorithm improves image brightness well with little loss of detail, but its color recovery is poor and the whole image appears grayish. The present algorithm gives a stable enhancement effect: the overall brightness distribution is uniform, image detail is preserved completely, the image is clear, and the colors are comfortable and natural, matching the characteristics of human vision.
The advantage of selecting these low-illumination data sets is that they provide not only rich low-illumination images but also matched normal-illumination images, giving strong support for evaluating the enhanced results. To evaluate each method's enhancement of the low-illumination images objectively, the two hundred enhanced images were scored with information entropy (IE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), universal quality index (UQI), average gradient (AG), and root mean square error (RMSE). Larger IE, PSNR, SSIM, and AG values indicate richer enhanced-image information and greater similarity to the normal-illumination image; a larger UQI indicates higher enhanced-image quality; RMSE reflects the difference between the enhanced and normal-illumination images, with smaller values indicating smaller differences. To present the indices clearly, all are scaled into the 0-10 range: IE is kept unchanged, PSNR is divided by 2, SSIM and UQI are multiplied by 10, and AG and RMSE are divided by 10 and 1000, respectively. Meanwhile, to verify the enhancement effect of the improved Gamma and texture-enhancement methods on images with coexisting bright and dark information, a control group using the original Gamma and no texture enhancement was set up; 100 such images were used as experimental data, and the index data are shown in fig. 12.
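The plot scaling and the two error metrics can be sketched as follows (the image pair is synthetic, and treating IE/SSIM/UQI/AG as precomputed inputs is an assumption for brevity):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

def scale_for_plot(ie, psnr_v, ssim, uqi, ag, rmse_v):
    """Bring the six indices into a comparable 0-10 band, as described in the text:
    IE unchanged, PSNR / 2, SSIM and UQI x 10, AG / 10, RMSE / 1000."""
    return ie, psnr_v / 2, ssim * 10, uqi * 10, ag / 10, rmse_v / 1000

a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 10, dtype=np.uint8)
r = rmse(a, b)   # 10.0
p = psnr(a, b)   # 20*log10(25.5), about 28.1 dB
```

This makes explicit how the differently-ranged indices end up on one comparable axis in fig. 12.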
In fig. 12 (a), the solid line with circular markers is the image evaluation data computed for the method proposed in the present application, and the LOW line is the data computed between the original low-illumination images and the corresponding normal-illumination images. As fig. 12 (a) shows, the present method is significantly stronger than the other methods in PSNR, SSIM, RMSE, and UQI, indicating that the enhanced images are highly similar to the normal-illumination images, have better image quality, and accord better with human judgments of visual quality; the IE and AG data are also at a high level among the methods. The IE mean reaches 7.285; the PSNR mean is 16.175 dB, an improvement of 6.475 dB over the original-image line; the SSIM mean is 0.580; the UQI mean is 0.8318, an improvement of 0.593 over the original-image line; and the AG mean of the enhanced images reaches 91.407. In fig. 12 (b), the circular markers are the data for images enhanced with the improved method and the square markers for images enhanced with the adaptive Gamma. As the figure shows, the information entropy mean of images enhanced by the improved algorithm reaches 7.462, a 5.217% improvement over the adaptive Gamma; the PSNR mean is 16.816 dB, 0.567 dB higher; the SSIM mean is slightly lower than that of the adaptive Gamma; the UQI mean is 0.839, a 5.534% improvement; and the AG mean is 96.907, clearly better than the adaptive Gamma's 36.911. The RMSE likewise indicates that images enhanced by the improved method are closer to the normal-illumination images.
In conclusion, the algorithm provided by the application performs excellently in improving the brightness of low-illumination images, effectively suppresses phenomena such as brightness supersaturation, preserves image edge details completely, and restores colors comfortably and naturally.
In summary, to address the low quality of characteristic information such as brightness, color, and detail in low-illumination images, the application provides an improved-Retinex low-illumination image enhancement algorithm. The algorithm works in the HSV color space and corrects the saturation component S and brightness component V separately: it extracts the illumination component with the BM3D algorithm and calculates the reflection component; proposes an improved Gamma transformation function to enhance the brightness component; proposes an image texture enhancement method that sharpens the image and its textures; and finally applies multi-weight fusion to the images and denoises with guided filtering. Experimental results show that the algorithm enhances various low-illumination scene images well, with image edge details preserved completely and colors restored naturally. Compared with the original-image means, the IE mean of the enhanced images improves by 1.602, the PSNR mean by 6.470 dB, and the SSIM mean by 0.291; the UQI and AG means also improve substantially, and the RMSE mean drops markedly, a clear improvement over the other image enhancement algorithms.
Referring to fig. 13, an embodiment of the present application further provides a low-illumination image enhancement apparatus, including: separation module 1301, estimation module 1302, enhancement module 1303, sharpening module 1304, extraction module 1305, fusion module 1306, conversion module 1307, and cancellation module 1308.
The separation module 1301 is configured to separate and obtain a hue component, a brightness component, and a saturation component of the original image in HSV space.
The estimating module 1302 is configured to estimate the illumination component using the BM3D filtering algorithm as a center-surround function of the Retinex algorithm, and obtain a corresponding reflection component.
The enhancing module 1303 is configured to enhance the brightness component according to the illumination component.
A sharpening module 1304 for sharpening the reflection component using a laplacian convolution kernel.
The extracting module 1305 is configured to extract the edge texture of the original image and record the pixel coordinates corresponding to the edge texture, and to perform a sharpening operation on the edge positions in the reflection component using the Laplacian convolution kernel.
A fusion module 1306, configured to perform weighted fusion on the reflection component and the brightness component to form a new brightness component.
The conversion module 1307 is configured to channel-fuse the hue component, the brightness component, and the saturation component into a new image, and convert the new image back to the RGB space.
A cancellation module 1308 for canceling noise of the new image.
In an embodiment, the apparatus further comprises an optimization module for optimizing the brightness component using a contrast-limited adaptive histogram equalization (CLAHE) algorithm before the illumination component is estimated using the BM3D filtering algorithm as the center-surround function of the Retinex algorithm.
In an embodiment, the estimating module 1302 is further configured to estimate the illumination component using the BM3D filtering algorithm as a center-surround function of the Retinex algorithm, and includes the steps of:
in the first stage, the same pixel x may appear in multiple similar blocks; the repeated pixel points x are weighted-averaged and aggregated to obtain the basic estimate image required by the second stage, with the basic estimation formula as follows:
y_basic(x) = [ Σ_R ω_h · Ŷ_R(x) ] / [ Σ_R ω_h · χ_R(x) ]

wherein y_basic(x) is the basic estimate image of the basic estimation filtering stage; Ŷ_R = T_3D⁻¹( γ( T_3D(Z_R) ) ) is the estimated value of the similar group, obtained through a series of transformations of the similar three-dimensional array Z_R found with the similar-block measurement formula; γ is the threshold filtering operation; T_3D and T_3D⁻¹ are the three-dimensional transformation and the three-dimensional inverse transformation, respectively; χ_R is the characteristic function of the similar blocks; and ω_h is the weight of the estimated value, computed from the number N_h of nonzero coefficients of the similar group after hard-threshold shrinkage as follows:

ω_h = 1 / (σ² · N_h) if N_h ≥ 1, and ω_h = 1 otherwise.
in the second stage, the basic estimate image is partitioned again and estimated block by block, and finally all reference blocks of the previous stage are weighted and aggregated to obtain the final estimated image y_final(x), with the specific formula as follows:
y_final(x) = [ Σ_R ω_wie · Ŷ_wie(x) ] / [ Σ_R ω_wie · χ_R(x) ]

wherein T′_h and T_h are the three-dimensional matrix of the basic estimate and the three-dimensional matrix of the original picture, respectively; Ŷ_wie = T_3D⁻¹( W_wie · T_3D(T_h) ) is the similarity-group estimate of the Wiener filtering, with T_3D and T_3D⁻¹ the three-dimensional linear transformation and the three-dimensional inverse transformation; W_wie = |T_3D(T′_h)|² / ( |T_3D(T′_h)|² + σ² ) is the Wiener-filter shrinkage coefficient; and ω_wie = σ⁻² · ‖W_wie‖₂⁻² is the weight coefficient of the Wiener filtering, obtained from the noise standard deviation σ and the Wiener shrinkage coefficient.
In an embodiment, the enhancing module 1303 is further configured to:
judging whether a bright area and a dark area exist in the image according to the illumination component;
when the image contains both bright-region and dark-region information, the dark portion of the brightness component is enhanced using the improved Gamma transformation function, and the brightness of the bright portion is maintained or pulled down according to the bright-portion information, so that the dark-portion brightness is enhanced while the bright-portion detail is preserved completely; wherein the improved Gamma transformation function is:
in the above formula, δ is the value at which f(x) = 0, and a and b are the adjusting parameters of the function, which jointly adjust the enhancement amplitude and range of the improved Gamma function at each pixel point; the solving formulas of a and b are as follows:
where m is the normalized mean value of pixels with pixel values below 97 in the illumination component.
In an embodiment, the enhancing module 1303 is further configured to: when the image only has dark part areas, the brightness component is enhanced by using the adaptive Gamma transformation;
The enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, wherein f(x) is the output image, x is the image after normalization, and γ is the image enhancement parameter;
the automatic calculation and selection formula of the gamma parameter is as follows:
wherein N is the mean value of the gray-level image of the image to be enhanced, n is the mean value after normalization, and γ is the image enhancement parameter.
In an embodiment, the enhancing module 1303 is further configured to:
dividing gray values of all pixels of the illumination component into 16 levels;
when the first four levels account for more than 10% of the pixels, it is determined that a dark region exists in the image;
when the last four levels account for more than 10% of the pixels, it is determined that a bright region exists in the image.
In an embodiment, the extracting module 1305 is further configured to:
the original texture is extracted using Gabor filters and Canny algorithm.
In one embodiment, the cancellation module 1308 is further configured to:
noise in the new image is removed using the Laplacian-sharpened image as the guidance map for guided filtering.
In one embodiment, the apparatus further comprises a correction module, configured to correct the saturation component using the adaptive Gamma before the hue component, the brightness component, and the saturation component are channel-fused into the new image; the enhancement parameter γ is γ = 0.85 + 0.2g; the enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, wherein f(x) is the output image, x is the image after normalization, γ is the image enhancement parameter, and g is the Gamma-transform adjustment parameter.
The low-illumination image enhancement device provided in the embodiment of the present application may execute the technical solution shown in the embodiment of the method, and its implementation principle and beneficial effects are similar, and will not be described herein again.
In the foregoing embodiments of the present application, each embodiment has its own emphasis; for portions not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the present application. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions in a similar manner without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (10)

1. A method of low-light image enhancement, comprising:
separating, under HSV space, a hue component, a brightness component, and a saturation component of an original image;
adopting a BM3D filtering algorithm as a center surrounding function of a Retinex algorithm, estimating an illumination component, and obtaining a corresponding reflection component;
Enhancing the brightness component according to the illumination component;
sharpening the reflected component using a laplace convolution kernel;
extracting the edge texture of the original image, and recording the pixel coordinates corresponding to the edge texture; performing a sharpening operation on the edge positions in the reflection component using the Laplacian convolution kernel;
weighting and fusing the reflection component and the brightness component to form a new brightness component;
channel-fusing the hue component, the brightness component, and the saturation component into a new image, and converting the new image back to RGB space;
noise of the new image is eliminated.
2. The low-luminance image enhancement method according to claim 1, further comprising, before estimating the illumination component using the BM3D filtering algorithm as the center-surround function of the Retinex algorithm: optimizing the brightness component using a contrast-limited adaptive histogram equalization (CLAHE) algorithm.
3. The low-luminance image enhancement method according to claim 1, wherein the estimating of the illumination component using the BM3D filter algorithm as a center-surround function of the Retinex algorithm comprises the steps of:
in the first stage, the same pixel x may appear in multiple similar blocks; the repeated pixel points x are weighted-averaged and aggregated to obtain the basic estimate image required by the second stage, with the basic estimation formula as follows:
y_basic(x) = [ Σ_R ω_h · Ŷ_R(x) ] / [ Σ_R ω_h · χ_R(x) ]

wherein y_basic(x) is the basic estimate image of the basic estimation filtering stage; Ŷ_R = T_3D⁻¹( γ( T_3D(Z_R) ) ) is the estimated value of the similar group, obtained through a series of transformations of the similar three-dimensional array Z_R found with the similar-block measurement formula; γ is the threshold filtering operation; T_3D and T_3D⁻¹ are the three-dimensional transformation and the three-dimensional inverse transformation, respectively; χ_R is the characteristic function of the similar blocks; and ω_h is the weight of the estimated value, computed from the number N_h of nonzero coefficients of the similar group after hard-threshold shrinkage as follows:

ω_h = 1 / (σ² · N_h) if N_h ≥ 1, and ω_h = 1 otherwise.
in the second stage, the basic estimate image is partitioned again and estimated block by block, and finally all reference blocks of the previous stage are weighted and aggregated to obtain the final estimated image y_final(x), with the specific formula as follows:
y_final(x) = [ Σ_R ω_wie · Ŷ_wie(x) ] / [ Σ_R ω_wie · χ_R(x) ]

wherein T′_h and T_h are the three-dimensional matrix of the basic estimate and the three-dimensional matrix of the original picture, respectively; Ŷ_wie = T_3D⁻¹( W_wie · T_3D(T_h) ) is the similarity-group estimate of the Wiener filtering, with T_3D and T_3D⁻¹ the three-dimensional linear transformation and the three-dimensional inverse transformation; W_wie = |T_3D(T′_h)|² / ( |T_3D(T′_h)|² + σ² ) is the Wiener-filter shrinkage coefficient; and ω_wie = σ⁻² · ‖W_wie‖₂⁻² is the weight coefficient of the Wiener filtering, obtained from the noise standard deviation σ and the Wiener shrinkage coefficient.
4. The low-illuminance image enhancement method according to claim 1, wherein the enhancement of the brightness component according to the illumination component includes the steps of:
Judging whether a bright area and a dark area exist in the image according to the illumination component;
when the image contains both bright-region and dark-region information, the dark portion of the brightness component is enhanced using the improved Gamma transformation function, and the brightness of the bright portion is maintained or pulled down according to the bright-portion information, so that the dark-portion brightness is enhanced while the bright-portion detail is preserved completely; wherein the improved Gamma transformation function is:
in the above formula, δ is the value at which f(x) = 0, and a and b are the adjusting parameters of the function, which jointly adjust the enhancement amplitude and range of the improved Gamma function at each pixel point; the solving formulas of a and b are as follows:
where m is the normalized mean value of pixels with pixel values below 97 in the illumination component.
5. The low-light image enhancement method according to claim 4, further comprising the step of:
when the image only has dark part areas, the brightness component is enhanced by using the adaptive Gamma transformation;
the enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, wherein f(x) is the output image, x is the image after normalization, and γ is the image enhancement parameter;
the automatic calculation and selection formula of the gamma parameter is as follows:
wherein N is the mean value of the gray-level image of the image to be enhanced, n is the mean value after normalization, and γ is the image enhancement parameter.
6. The method of claim 4, wherein the determining whether the bright area and the dark area exist in the image according to the illumination component comprises:
dividing gray values of all pixels of the illumination component into 16 levels;
when the first four levels account for more than 10% of the pixels, it is determined that a dark region exists in the image;
when the last four levels account for more than 10% of the pixels, it is determined that a bright region exists in the image.
7. The low-light level image enhancement method according to claim 1, wherein the extracting the edge texture of the original image is specifically:
the original texture is extracted using Gabor filters and Canny algorithm.
8. The low-light level image enhancement method according to claim 1, wherein the noise of the new image is eliminated, specifically:
noise in the new image is removed using the Laplacian-sharpened image as the guidance map for guided filtering.
9. The method of claim 1, further comprising, prior to channel fusing the hue component, the brightness component, and the saturation component into a new image:
correcting the saturation component using the adaptive Gamma; the enhancement parameter γ is γ = 0.85 + 0.2g; the enhancement formula of the adaptive Gamma transformation is f(x) = x^γ, wherein f(x) is the output image, x is the image after normalization, γ is the image enhancement parameter, and g is the Gamma-transform adjustment parameter.
10. A low-light image enhancement device, comprising:
the separation module is used for separating and obtaining a hue component, a brightness component, and a saturation component of the original image under HSV space;
the estimating module is used for estimating the illumination component by adopting a BM3D filtering algorithm as a center surrounding function of the Retinex algorithm and obtaining a corresponding reflection component;
the enhancement module is used for enhancing the brightness component according to the illumination component;
the sharpening module is used for sharpening the reflection component with a Laplacian convolution kernel;
the extraction module is used for extracting the edge texture of the original image, recording the pixel coordinates corresponding to the edge texture, and performing the sharpening operation with the Laplacian convolution kernel at the edge positions in the reflection component;
the fusion module is used for carrying out weighted fusion on the reflection component and the brightness component to form a new brightness component;
the conversion module is used for fusing the hue component, the brightness component and the saturation component into a new image through channel fusion and converting the new image back to RGB space;
and the elimination module is used for eliminating noise of the new image.
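For orientation only, the modules of claim 10 chain together roughly as below, here reduced to the brightness (V) channel. A Gaussian blur stands in for the BM3D-based centre-surround estimate, and the enhancement curve is a placeholder; neither is the patented implementation:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur -- standing in for the BM3D-based
    centre-surround illumination estimate named in the claim."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    p = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, 'valid'), 1, p)
    return np.apply_along_axis(lambda col: np.convolve(col, k, 'valid'), 0, tmp)

def enhance_v(v, eps=1e-6):
    """Retinex split of the normalized V channel into illumination and
    reflectance, brighten the illumination, then recombine. The gamma of
    0.6 is a placeholder, not the patent's enhancement rule."""
    L = gaussian_blur(v)                      # illumination estimate
    R = np.log(v + eps) - np.log(L + eps)     # log-domain reflectance
    L_enh = L ** 0.6                          # lift dark illumination
    return np.clip(L_enh * np.exp(R), 0.0, 1.0)
```

Working in the log domain keeps the reflectance (texture) term separate from the illumination term, so brightening the latter does not wash out detail.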
CN202311204220.1A 2023-09-19 2023-09-19 Low-illumination image enhancement method and device Pending CN117274085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311204220.1A CN117274085A (en) 2023-09-19 2023-09-19 Low-illumination image enhancement method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311204220.1A CN117274085A (en) 2023-09-19 2023-09-19 Low-illumination image enhancement method and device

Publications (1)

Publication Number Publication Date
CN117274085A true CN117274085A (en) 2023-12-22

Family

ID=89220749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311204220.1A Pending CN117274085A (en) 2023-09-19 2023-09-19 Low-illumination image enhancement method and device

Country Status (1)

Country Link
CN (1) CN117274085A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117455802A (en) * 2023-12-25 2024-01-26 榆林金马巴巴网络科技有限公司 Noise reduction and enhancement method for image acquisition of intrinsic safety type miner lamp
CN117455802B (en) * 2023-12-25 2024-04-05 榆林金马巴巴网络科技有限公司 Noise reduction and enhancement method for image acquisition of intrinsic safety type miner lamp

Similar Documents

Publication Publication Date Title
Wang et al. Adaptive image enhancement method for correcting low-illumination images
US9633422B2 (en) Method for image processing using local statistics convolution
CN108765336B (en) Image defogging method based on dark and bright primary color prior and adaptive parameter optimization
WO2016206087A1 (en) Low-illumination image processing method and device
CN106846276B (en) Image enhancement method and device
CN114118144A (en) Anti-interference accurate aerial remote sensing image shadow detection method
CN111968065A (en) Self-adaptive enhancement method for image with uneven brightness
CN105678245A (en) Target position identification method based on Haar features
CN117274085A (en) Low-illumination image enhancement method and device
CN111210395A (en) Retinex underwater image enhancement method based on gray value mapping
CN111476744B (en) Underwater image enhancement method based on classification and atmospheric imaging model
CN116309152A (en) Detail enhancement method, system, equipment and storage medium for low-illumination image
CN110111280A (en) A kind of enhancement algorithm for low-illumination image of multi-scale gradient domain guiding filtering
CN113129300A (en) Drainage pipeline defect detection method, device, equipment and medium for reducing false detection rate
CN111611940A (en) Rapid video face recognition method based on big data processing
CN116630198A (en) Multi-scale fusion underwater image enhancement method combining self-adaptive gamma correction
CN116797468A (en) Low-light image enhancement method based on self-calibration depth curve estimation of soft-edge reconstruction
CN112822343B (en) Night video oriented sharpening method and storage medium
CN115908155A (en) NSST domain combined GAN and scale correlation coefficient low-illumination image enhancement and denoising method
CN115619662A (en) Image defogging method based on dark channel prior
Corchs et al. Enhancing underexposed images preserving the original mood
CN111915500A (en) Foggy day image enhancement method based on improved Retinex algorithm
Peng et al. Underwater image enhancement by rayleigh stretching in time and frequency domain
CN113160073B (en) Remote sensing image haze removal method combining rolling deep learning and Retinex theory
Srinivas et al. Spatial Information Computation-Based Low Contrast Image Enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination