CN112419210A - Underwater image enhancement method based on color correction and three-interval histogram stretching - Google Patents
Underwater image enhancement method based on color correction and three-interval histogram stretching
- Publication number: CN112419210A (application CN202011444565.0A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/40: Image enhancement or restoration using histogram techniques
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/90: Determination of colour characteristics
- G06T2207/20221: Image fusion; image merging (indexing scheme for image analysis or image enhancement)
Abstract
The invention provides an underwater image enhancement method based on color correction and three-interval histogram stretching. The method comprises the following steps: performing color correction processing on the source image; processing the R, G, B channels of the source image with a three-interval histogram equalization method, stretching the pixel values of each single channel, selecting thresholds to separate three subintervals, and completing the three-interval equalization operation to obtain a three-interval histogram-equalized enhanced image; and performing linear weighted fusion of the image after the subinterval-based linear transformation with the three-interval histogram-equalized image to reconstruct the final defogged image. The invention uses a multi-interval histogram equalization method to divide the single-channel histogram of the source image more accurately, equalizes each interval separately, and linearly fuses the result with the color-corrected image, so that dark details of the source image are displayed better, noise is reduced, and image defogging is achieved.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an underwater image enhancement method based on color correction and three-interval histogram stretching.
Background
The development and use of marine resources rely on underwater images, which are typically captured by underwater cameras and underwater robots. Due to the absorption and scattering of light, underwater images suffer from low contrast, color cast and related problems, causing image degradation and making the images difficult to analyze. Common factors affecting the attenuation rate are water temperature and salinity, and the type and amount of particles suspended in the water. Severe deterioration makes it difficult to recover image information. Finding an effective way to recover underwater image color and contrast is therefore a very challenging task.
Research and development have been directed at underwater enhancement techniques to address such problems. Underwater enhancement is a simple and fast approach, yet it is highly effective at improving the quality of underwater images: it improves the image by processing the red, green and blue channel intensity values according to specific rules.
Disclosure of Invention
In view of the above technical problem, the invention provides an underwater image enhancement method based on color correction and three-interval histogram stretching.
The technical means adopted by the invention are as follows: an underwater image enhancement method based on color correction and three-interval histogram stretching, comprising the following steps:
step S01: acquiring an original RGB dense fog image; carrying out color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain an enhanced image after color correction;
step S02: decomposing the original RGB dense fog image into R, G, B channel images, and processing the pixel values of the R, G, B channel images through the following steps;
step S03: stretching the pixel values of the R, G, B channel images to be within the range of 0-255 respectively to obtain stretched single-channel images;
step S04: calculating the average pixel value of each of the R, G, B channel images; for each pixel, subtracting the single-channel average pixel value from the pixel value to obtain an error, and squaring this error; selecting, from the squared errors of all pixels, the pixel with the maximum squared error, noting that the pixels to its left and right also differ strongly from the average; taking that pixel value as the center, adding and subtracting three times the standard deviation to determine the two thresholds required for the three-interval division, thereby dividing the whole single-channel histogram into three intervals;
step S05: equalizing the subintervals of the R, G, B channels to obtain an image after single-channel equalization;
step S06: and carrying out linear weighted fusion on the R, G, B channel image and the equalized R, G, B channel image to obtain a final defogging image.
Further, the total pixel value of each of the R, G, B channels in the color correction method based on the subinterval linear transformation is calculated as:
Sum_c = Σ_{i=1}^{M} Σ_{j=1}^{N} I_c(i, j), c ∈ {R, G, B}
where M and N denote the number of rows and columns, respectively, of the input image, and I_R(i, j), I_G(i, j), I_B(i, j) represent the pixel values of the R, G, B three-channel images at position (i, j), respectively;
meanwhile, the ratio of each of the red, green and blue channels is calculated as:
P_c = Sum_c / Max(Sum_R, Sum_G, Sum_B), c ∈ {R, G, B}
where Max denotes the maximum-value function, through which the maximum of the R, G, B channel total pixel values is obtained, and P_R, P_G, P_B represent the ratios of each channel's total pixel value to that maximum total pixel value; to divide each channel into three intervals, two cut-off ratios r_c^low and r_c^high are defined in terms of the constants α1 and α2, both between 0 and 1, and the ratio P_c; the cut-off thresholds v_c^low and v_c^high corresponding to the two cut-off ratios are then determined, as in equations (9) and (10), by the lower quantile function F applied to the pixel values I_c(x) of channel c:
v_c^low = F(I_c, r_c^low), v_c^high = F(I_c, r_c^high)
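The quantile step can be sketched in pure Python. This is a minimal illustration under the assumption that each cut-off threshold is simply the pixel value found at the given cut-off ratio within the channel's sorted pixel values; the function and parameter names are illustrative, not the patent's.

```python
def cutoff_thresholds(channel, r_low, r_high):
    """Return the pixel values at the low/high cut-off ratios of one
    channel (a stand-in for the lower quantile function F)."""
    flat = sorted(p for row in channel for p in row)
    n = len(flat)
    # index of each ratio within the sorted pixel values, clamped to range
    lo = flat[min(int(r_low * n), n - 1)]
    hi = flat[min(int(r_high * n), n - 1)]
    return lo, hi
```

For a 2x3 channel `[[0, 10, 20], [30, 40, 50]]` with ratios 0.05 and 0.95, the sketch returns the darkest and brightest surviving values, (0, 50).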
to effectively suppress shadow and highlight values, the following clipping operation is performed on each color channel:
I'_c(x) = v_c^low if I_c(x) < v_c^low; I'_c(x) = v_c^high if I_c(x) > v_c^high; I'_c(x) = I_c(x) otherwise
where I'_c(x) represents the processed pixel value of a point of any one of the R, G, B channels, v_c^low and v_c^high denote the cut-off thresholds, and I_c(x) is the pixel value of a point of any one of the R, G, B channels;
finally, the following linear operation is performed on the pixel values of the intermediate region:
O_c(x) = 255 × (I'_c(x) - v_c^low) / (v_c^high - v_c^low)
where O_c(x) represents the color-corrected image and I'_c(x) represents the processed pixel value at any point of any one of the R, G, B channels.
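The two operations above, clipping to the cut-off thresholds and then linearly rescaling the intermediate region to [0, 255], can be sketched together; `correct_channel` and its argument names are illustrative, and integer thresholds are assumed.

```python
def correct_channel(channel, v_low, v_high):
    """Clip one channel to [v_low, v_high], then linearly map that range
    onto [0, 255] (sketch of the subinterval linear transformation)."""
    span = float(v_high - v_low)
    out = []
    for row in channel:
        # suppress shadow and highlight values
        clipped = [min(max(p, v_low), v_high) for p in row]
        # linear operation on the intermediate region
        out.append([round((p - v_low) * 255.0 / span) for p in clipped])
    return out
```

With thresholds 10 and 110, a row `[10, 60, 110]` maps to `[0, 128, 255]`: the clipped extremes land on 0 and 255 and the midpoint is rescaled proportionally.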
Further, the linear stretching operation is performed on each single-channel image in step S03, ensuring that every gray value lies within [0, 255]; the expression of the linear stretch is defined as:
P_c(i, j) = 255 × (I_c(i, j) - Min_c) / (Max_c - Min_c), c ∈ {R, G, B}
where P_c(i, j) represents the corrected gray value of channel c at position (i, j); I_c(i, j) denotes the gray value of channel c at position (i, j); Min_c represents the minimum pixel value of channel c; and Max_c represents the maximum pixel value of channel c.
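The linear stretching of step S03 can be sketched as a min-max rescale of one channel; the flat-channel guard is an added assumption, since the text does not say what happens when Max_c equals Min_c.

```python
def stretch_channel(channel):
    """Min-max stretch a single channel onto the full [0, 255] range,
    mirroring the linear stretching expression of step S03."""
    flat = [p for row in channel for p in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:  # flat channel: nothing to stretch (assumed behaviour)
        return [[0 for _ in row] for row in channel]
    return [[round((p - mn) * 255.0 / (mx - mn)) for p in row]
            for row in channel]
```

A channel spanning 50..200 stretches so its minimum becomes 0 and its maximum 255, with interior values scaled linearly.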
Further, in the threshold selection and three-region division of step S04, the average pixel value of a single channel is calculated as:
Mean_c = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} I_c(i, j), c ∈ {R, G, B}
where Mean_R, Mean_G, Mean_B represent the average pixel values of the R, G, B channels respectively, M and N represent the number of rows and columns, respectively, of the input image, I_R(i, j), I_G(i, j), I_B(i, j) respectively represent the pixel values of the R, G, B three-channel images at position (i, j), and M × N is the total number of pixels in a single channel;
the error between the pixel value at any point of one of the three R, G, B channels and the average pixel value of the corresponding channel is calculated and squared, as follows:
E_c(i, j) = I_c(i, j) - Mean_c, E_c^2(i, j) = (I_c(i, j) - Mean_c)^2
where E_c(i, j) represents the error between the pixel value at any point of one of the three R, G, B channels and the average pixel value of the corresponding channel, I_c(i, j) represents the pixel value of channel c at position (i, j), Mean_c represents the average pixel value of channel c, and E_c^2(i, j) represents the square of that error;
the point with the largest squared error is selected as the center point via the Max function and, following the 3σ criterion, three times the standard deviation of the pixel values is added and subtracted to obtain the left and right thresholds, completing the three-interval division:
t1 = Maxm_c - 3σ (20)
t2 = Maxm_c + 3σ (21)
where Max_c represents the maximum squared error of one of the three R, G, B channels, Maxm_c represents the pixel value at which that maximum squared error occurs, t1 and t2 represent the left and right thresholds respectively, and σ represents the standard deviation of the pixel values of the corresponding channel.
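Step S04 can be sketched as below. Two readings are assumed, since the text is ambiguous: Maxm_c is taken to be the pixel value with the largest squared deviation from the channel mean, and σ is taken to be the standard deviation (not the variance) of the channel's pixel values, per the 3σ criterion.

```python
import statistics

def three_interval_thresholds(channel):
    """Sketch of step S04: centre a 3-sigma window on the pixel value
    whose squared deviation from the channel mean is largest."""
    flat = [p for row in channel for p in row]
    mean = sum(flat) / len(flat)
    # pixel value with the maximum squared error (the centre point)
    centre = max(flat, key=lambda p: (p - mean) ** 2)
    sigma = statistics.pstdev(flat)          # population std deviation
    t1 = max(0.0, centre - 3 * sigma)        # clamp to the gray range
    t2 = min(255.0, centre + 3 * sigma)
    return t1, t2
```

For a channel `[[100, 100], [100, 140]]` the outlier 140 becomes the centre, and the thresholds sit three standard deviations to either side of it (clamped to [0, 255]).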
Further, the histogram equalization processing procedure for the subintervals of each channel in step S05 is as follows:
first, the gray-scale ranges of the three subintervals are divided according to the thresholds:
[0, 255] = [0, t1] ∪ (t1, t2] ∪ (t2, 255] (22)
X_1 = {I(i, j) | I(i, j) ∈ [0, t1]}, X_2 = {I(i, j) | I(i, j) ∈ (t1, t2]}, X_3 = {I(i, j) | I(i, j) ∈ (t2, 255]}
where I denotes the original image, I(i, j) denotes the gray value of the pixel in the i-th row and j-th column of the image, and X_1, X_2, X_3 represent the first, second and last sub-images respectively;
first, the frequency of each pixel of the whole image is calculated, the frequencies of the three sub-histograms are computed, the normalized pixel frequency of each sub-histogram is obtained, and finally the cumulative normalized frequencies of the three sub-histograms are calculated;
letting x denote a gray value of the image, three value ranges of x follow from the interval division: when x ∈ X_1, the cumulative gray-level frequency of the first sub-image's histogram from 0 to x is calculated and denoted CDF_1(x); when x ∈ X_2, the cumulative gray-level frequency of the second sub-image's histogram from t1 to x is calculated and denoted CDF_2(x); when x ∈ X_3, the cumulative gray-level frequency of the last sub-image's histogram from t2 to x is calculated and denoted CDF_3(x);
then, the transformed gray values of the three sub-images after histogram equalization are calculated from each sub-image's normalized histogram frequencies; the sub-histogram equalization function follows the gray-level transformation function of traditional histogram equalization, which is described as:
f(x) = a + (b - a)CDF(x) (26)
where a represents the minimum output gray value, b represents the maximum output gray value, x represents the input gray value, and CDF(x) represents the cumulative density function with respect to x;
the sub-histogram equalization formula is accordingly described as:
y(x) = t1 · CDF_1(x) for x ∈ X_1; y(x) = t1 + (t2 - t1) · CDF_2(x) for x ∈ X_2; y(x) = t2 + (255 - t2) · CDF_3(x) for x ∈ X_3
where y represents the gray-value transformation function of the three-interval equalization processing, from which the processed result is obtained; t1 and t2 represent the two thresholds dividing the sub-histograms, x represents the input gray value, and CDF_1(x), CDF_2(x), CDF_3(x) respectively represent the cumulative gray-level frequencies of the first, second and third sub-histograms.
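The three-interval equalization of step S05 can be sketched as below: each interval's cumulative frequency is mapped back onto that interval's own range, following the traditional mapping f(x) = a + (b - a)CDF(x) applied per sub-histogram. Integer thresholds are assumed, and the gray-level lookup is built naively for clarity.

```python
def three_interval_equalize(channel, t1, t2):
    """Equalize the gray-level intervals [0,t1], (t1,t2], (t2,255]
    independently (pure-Python sketch of the sub-histogram mapping)."""
    flat = [p for row in channel for p in row]
    lut = {}
    for a, b in ((0, t1), (t1 + 1, t2), (t2 + 1, 255)):
        sub = sorted(p for p in flat if a <= p <= b)
        for p in set(sub):
            # cumulative normalized frequency within this sub-histogram
            cdf = sum(1 for q in sub if q <= p) / len(sub)
            lut[p] = round(a + (b - a) * cdf)
    return [[lut[p] for p in row] for row in channel]
```

Note the design choice: because each interval maps only onto its own sub-range, the ordering of gray levels across interval boundaries is preserved, which is the point of multi-interval equalization versus plain global equalization.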
Still further, the multi-scale fusion comprises the steps of:
step S071: defining an aggregation weight map and fusing the input images with the aggregation weight map; the aggregation weight map is determined by three measurement weights: a contrast weight map, a saturation weight map and an exposure weight map;
for the contrast weight map, the gray-scale image of each input image is used to estimate a global contrast weight W_La, taken in absolute value, to preserve the edge and detail texture information of the image:
W_La = |La * F| (29)
where La denotes the Laplacian operator, * denotes convolution, and F denotes the input image;
the saturation weight is the standard deviation, at each pixel, across the channels of the RGB color space:
W_sa(x, y) = sqrt([(R(x, y) - m(x, y))^2 + (G(x, y) - m(x, y))^2 + (B(x, y) - m(x, y))^2] / 3)
where R(x, y), G(x, y), B(x, y) respectively represent the R, G, B channels of the input image, m(x, y) represents the average of the R, G, B channels at position (x, y), and W_sa(x, y) represents the saturation weight at position (x, y);
the exposure weight map is required to keep pixel values close to 0.5, i.e., the midpoint; the exposure weight of each pixel is represented by a Gaussian curve with an expected value of 0.5:
W_E(x, y) = exp(-(I(x, y) - 0.5)^2 / (2σ^2))
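The exposure weight can be sketched for a single normalized pixel value; σ = 0.25 is the customary exposure-fusion choice and is an assumption here, since the text only specifies a Gaussian centred on 0.5.

```python
import math

def exposure_weight(v, sigma=0.25):
    """Gaussian weight centred on 0.5 for a pixel value v in [0, 1];
    sigma = 0.25 is an assumed default, not stated in the text."""
    return math.exp(-((v - 0.5) ** 2) / (2.0 * sigma ** 2))
```

A mid-tone pixel (v = 0.5) gets weight 1.0, while fully dark or fully bright pixels are penalized, steering the fusion toward well-exposed regions.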
the aggregation weight map in the multi-scale fusion is obtained by multiplying the three characteristic weight maps; the contrast weight map W_La, the saturation weight map W_sa and the exposure weight map W_E are multiplied pixel by pixel for each input image:
W_z = W_La,z × W_sa,z × W_E,z
where z indexes the z-th input image and W_z represents its two-dimensional weight map;
to ensure the consistency of the images, the normalized weight map W'_z is introduced:
W'_z = W_z / Σ_z W_z
step S072: fusing the input images with the aggregation weight maps; the input image I is decomposed by a Laplacian pyramid, defined as L^l{I}, and the aggregation weight map W is decomposed by a Gaussian pyramid, defined as G^l{W}, where the superscript l denotes the l-th pyramid level; the Laplacian pyramid L^l{I} and the Gaussian pyramid G^l{W} are fused pixel by pixel as follows:
L^l{F} = Σ_z G^l{W_z} · L^l{I_z}
where L^l{F} represents the Laplacian pyramid of the fused image, which is reconstructed to obtain the fused image.
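The weighted fusion at the heart of step S072 can be sketched at a single scale; the patent applies the same normalized weighted sum at every pyramid level, and all names here are illustrative.

```python
def weighted_fuse(images, weights):
    """Normalized pixel-wise weighted sum of input images with their
    aggregation weight maps (one level of the pyramid fusion)."""
    h, w = len(images[0]), len(images[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total = sum(W[i][j] for W in weights) or 1.0  # avoid /0
            fused[i][j] = sum(W[i][j] * I[i][j]
                              for I, W in zip(images, weights)) / total
    return fused
```

With two one-pixel images valued 100 and 200 and weights 1 and 3, the fused pixel is the weight-normalized blend (100 + 600) / 4 = 175.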
Compared with the prior art, the invention has the following advantages:
1. the color correction method based on the linear transformation of the subintervals better improves the visibility, achieves a good color correction effect, enables the histogram distribution of red, green and blue channels to be more uniform, better solves the color cast problem of the underwater image, and improves the details of the dark part of the underwater image.
2. The invention utilizes the histogram equalization method of the three intervals, effectively improves the contrast of the image, obtains good effect on enhancing the bright part details of the image and completes the effective stretching of the image histogram.
3. According to the invention, through multi-scale linear fusion, the image with improved color cast and dark details and the image with improved contrast and bright details through a three-interval histogram equalization method are fused, so that the underwater image is effectively enhanced.
For the above reasons, the present invention can be widely applied to the fields of image processing and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a comparison graph of the enhancement effect of the invention on underwater scene images compared with other algorithms. Wherein, FIG. 2-1-1 is a result graph after being processed by the HEEF algorithm; FIG. 2-1-2 is a graph of results after processing by the BBHE algorithm; FIGS. 2-1-3 are graphs of results after DOTHE algorithm processing; FIGS. 2-1-4 are graphs of results after being processed by the algorithm herein; FIG. 2-2-1 is a graph of results after being processed by the HEEF algorithm; FIG. 2-2-2 is a graph of results after processing by the BBHE algorithm; 2-2-3 are graphs of results after being processed by the DOTHE algorithm; FIGS. 2-2-4 are graphs of results after being processed by the algorithm herein; FIG. 2-3-1 is a graph of results after being processed by the HEEF algorithm; FIG. 2-3-2 is a graph of results after processing by the BBHE algorithm; FIGS. 2-3-3 are graphs of results after DOTHE algorithm processing; FIGS. 2-3-4 are graphs of results after being processed by the algorithm herein; FIG. 2-4-1 is a graph of results after being processed by the HEEF algorithm; 2-4-2 are graphs of results after processing by the BBHE algorithm; FIGS. 2-4-3 are graphs of results after DOTHE algorithm processing; FIGS. 2-4-4 are graphs of results after processing by the algorithm herein.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1-2, the present invention provides an underwater image enhancement method based on color correction and three-interval histogram stretching, comprising the following steps:
step S01: acquiring an original RGB dense fog image; carrying out color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain an enhanced image after color correction;
step S02: decomposing the original RGB dense fog image into R, G, B channel images, and processing the pixel values of the R, G, B channel images through the following steps;
step S03: stretching the pixel values of the R, G, B channel images to be within the range of 0-255 respectively to obtain stretched single-channel images;
step S04: calculating the average pixel value of each of the R, G, B channel images; for each pixel, subtracting the single-channel average pixel value from the pixel value to obtain an error, and squaring this error; selecting, from the squared errors of all pixels, the pixel with the maximum squared error, noting that the pixels to its left and right also differ strongly from the average; taking that pixel value as the center, adding and subtracting three times the standard deviation to determine the two thresholds required for the three-interval division, thereby dividing the whole single-channel histogram into three intervals;
step S05: equalizing the subintervals of the R, G, B channels to obtain an image after single-channel equalization;
step S06: and carrying out linear weighted fusion on the R, G, B channel image and the equalized R, G, B channel image to obtain a final defogging image.
As a preferred embodiment, the total pixel value of each of the R, G, B channels in the color correction method based on the subinterval linear transformation is calculated as:
Sum_c = Σ_{i=1}^{M} Σ_{j=1}^{N} I_c(i, j), c ∈ {R, G, B}
where M and N denote the number of rows and columns, respectively, of the input image, and I_R(i, j), I_G(i, j), I_B(i, j) represent the pixel values of the R, G, B three-channel images at position (i, j), respectively;
meanwhile, the ratio of each of the red, green and blue channels is calculated as:
P_c = Sum_c / Max(Sum_R, Sum_G, Sum_B), c ∈ {R, G, B}
where Max denotes the maximum-value function, through which the maximum of the R, G, B channel total pixel values is obtained, and P_R, P_G, P_B represent the ratios of each channel's total pixel value to that maximum total pixel value; to divide each channel into three intervals, two cut-off ratios r_c^low and r_c^high are defined in terms of the constants α1 and α2, both between 0 and 1, and the ratio P_c; the cut-off thresholds v_c^low and v_c^high corresponding to the two cut-off ratios are then determined, as in equations (9) and (10), by the lower quantile function F applied to the pixel values I_c(x) of channel c:
v_c^low = F(I_c, r_c^low), v_c^high = F(I_c, r_c^high)
to effectively suppress shadow and highlight values, the following clipping operation is performed on each color channel:
I'_c(x) = v_c^low if I_c(x) < v_c^low; I'_c(x) = v_c^high if I_c(x) > v_c^high; I'_c(x) = I_c(x) otherwise
where I'_c(x) represents the processed pixel value of a point of any one of the R, G, B channels, v_c^low and v_c^high denote the cut-off thresholds, and I_c(x) is the pixel value of a point of any one of the R, G, B channels;
finally, the following linear operation is performed on the pixel values of the intermediate region:
O_c(x) = 255 × (I'_c(x) - v_c^low) / (v_c^high - v_c^low)
where O_c(x) represents the color-corrected image and I'_c(x) represents the processed pixel value at any point of any one of the R, G, B channels.
As a preferred embodiment, in the present application, the linear stretching operation is performed on each single-channel image in step S03, ensuring that every gray value lies within [0, 255]; the expression of the linear stretch is defined as:
P_c(i, j) = 255 × (I_c(i, j) - Min_c) / (Max_c - Min_c), c ∈ {R, G, B}
where P_c(i, j) represents the corrected gray value of channel c at position (i, j); I_c(i, j) denotes the gray value of channel c at position (i, j); Min_c represents the minimum pixel value of channel c; and Max_c represents the maximum pixel value of channel c.
Further, in the threshold selection and three-region division of step S04, the average pixel value of a single channel is calculated as:
Mean_c = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} I_c(i, j), c ∈ {R, G, B}
where Mean_R, Mean_G, Mean_B represent the average pixel values of the R, G, B channels respectively, M and N represent the number of rows and columns, respectively, of the input image, I_R(i, j), I_G(i, j), I_B(i, j) respectively represent the pixel values of the R, G, B three-channel images at position (i, j), and M × N is the total number of pixels in a single channel;
the error between the pixel value at any point of one of the three R, G, B channels and the average pixel value of the corresponding channel is calculated and squared, as follows:
E_c(i, j) = I_c(i, j) - Mean_c, E_c^2(i, j) = (I_c(i, j) - Mean_c)^2
where E_c(i, j) represents the error between the pixel value at any point of one of the three R, G, B channels and the average pixel value of the corresponding channel, I_c(i, j) represents the pixel value of channel c at position (i, j), Mean_c represents the average pixel value of channel c, and E_c^2(i, j) represents the square of that error;
the point with the largest squared error is selected as the center point via the Max function and, following the 3σ criterion, three times the standard deviation of the pixel values is added and subtracted to obtain the left and right thresholds, completing the three-interval division:
t1 = Maxm_c - 3σ (20)
t2 = Maxm_c + 3σ (21)
where Max_c represents the maximum squared error of one of the three R, G, B channels, Maxm_c represents the pixel value at which that maximum squared error occurs, t1 and t2 represent the left and right thresholds respectively, and σ represents the standard deviation of the pixel values of the corresponding channel.
Further, the histogram equalization processing procedure for the subintervals of each channel in step S05 is as follows:
first, the gray-scale ranges of the three subintervals are divided according to the thresholds:
[0, 255] = [0, t1] ∪ (t1, t2] ∪ (t2, 255] (22)
X_1 = {I(i, j) | I(i, j) ∈ [0, t1]}, X_2 = {I(i, j) | I(i, j) ∈ (t1, t2]}, X_3 = {I(i, j) | I(i, j) ∈ (t2, 255]}
where I denotes the original image, I(i, j) denotes the gray value of the pixel in the i-th row and j-th column of the image, and X_1, X_2, X_3 represent the first, second and last sub-images respectively;
first, the frequency of each pixel of the whole image is calculated, the frequencies of the three sub-histograms are computed, the normalized pixel frequency of each sub-histogram is obtained, and finally the cumulative normalized frequencies of the three sub-histograms are calculated;
letting x denote a gray value of the image, three value ranges of x follow from the interval division: when x ∈ X_1, the cumulative gray-level frequency of the first sub-image's histogram from 0 to x is calculated and denoted CDF_1(x); when x ∈ X_2, the cumulative gray-level frequency of the second sub-image's histogram from t1 to x is calculated and denoted CDF_2(x); when x ∈ X_3, the cumulative gray-level frequency of the last sub-image's histogram from t2 to x is calculated and denoted CDF_3(x);
then, the transformed gray values of the three sub-images after histogram equalization are calculated from each sub-image's normalized histogram frequencies; the sub-histogram equalization function follows the gray-level transformation function of traditional histogram equalization, which is described as:
f(x) = a + (b - a)CDF(x) (26)
where a represents the minimum output gray value, b represents the maximum output gray value, x represents the input gray value, and CDF(x) represents the cumulative density function with respect to x;
the sub-histogram equalization formula is accordingly described as:
y(x) = t1 · CDF_1(x) for x ∈ X_1; y(x) = t1 + (t2 - t1) · CDF_2(x) for x ∈ X_2; y(x) = t2 + (255 - t2) · CDF_3(x) for x ∈ X_3
where y represents the gray-value transformation function of the three-interval equalization processing, from which the processed result is obtained; t1 and t2 represent the two thresholds dividing the sub-histograms, x represents the input gray value, and CDF_1(x), CDF_2(x), CDF_3(x) respectively represent the cumulative gray-level frequencies of the first, second and third sub-histograms.
Still further, the multi-scale fusion comprises the steps of:
step S071: defining an aggregation weight map and fusing the input images with the aggregation weight map; the aggregation weight map is determined by three measurement weights: a contrast weight map, a saturation weight map and an exposure weight map;
for the contrast weight map, the gray-scale image of each input image is used to estimate a global contrast weight W_La, taken in absolute value, to preserve the edge and detail texture information of the image:
W_La = |La * F| (29)
where La denotes the Laplacian operator, * denotes convolution, and F denotes the input image;
the saturation weight is the standard deviation, at each pixel, across the channels of the RGB color space:
W_sa(x, y) = sqrt([(R(x, y) - m(x, y))^2 + (G(x, y) - m(x, y))^2 + (B(x, y) - m(x, y))^2] / 3)
where R(x, y), G(x, y), B(x, y) respectively represent the R, G, B channels of the input image, m(x, y) represents the average of the R, G, B channels at position (x, y), and W_sa(x, y) represents the saturation weight at position (x, y);
the exposure weight map is required to keep pixel values close to 0.5, i.e. the midpoint; the exposure weight of each pixel point is represented by a Gaussian curve with expected value 0.5:

WE(x, y) = exp( −(I(x, y) − 0.5)² / (2σ²) )    (31)

where I(x, y) is the normalized gray value at position (x, y) and σ is the standard deviation of the Gaussian;
in multi-scale fusion, the aggregation weight map is obtained by multiplying the three characteristic weight maps; the contrast weight map WLa, the saturation weight map WSa and the exposure weight map WE are multiplied pixel by pixel for each input image:

WZ = WLaz × WSaz × WEz    (32)
where z denotes the z-th input image and WZ represents the resulting two-dimensional weight map;
in order to ensure consistency across the images, a normalized weight map W̄z is introduced:

W̄z(x, y) = Wz(x, y) / Σz Wz(x, y)    (33)
step S072: fusing the input images with the aggregation weight maps; each input image Iz is decomposed into a Laplacian pyramid, denoted L^l{Iz}, and the corresponding aggregation weight map W̄z is decomposed into a Gaussian pyramid, denoted G^l{W̄z}, where the superscript l denotes the l-th pyramid level; the Laplacian pyramid L^l{Iz} and the Gaussian pyramid G^l{W̄z} are fused pixel by pixel as follows:

L^l{F}(x, y) = Σz G^l{W̄z}(x, y) · L^l{Iz}(x, y)    (34)
where L{F} represents the Laplacian pyramid of the fused image; the fused image is obtained by reconstructing this pyramid.
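The weight-map construction of step S071 can be sketched as follows (illustrative only; the 4-neighbour Laplacian kernel and σ = 0.2 for the exposure Gaussian are assumed values not stated in the text):

```python
import numpy as np

def aggregation_weights(images, sigma=0.2):
    """Normalized aggregation weight maps for a list of RGB images in [0, 1].

    Per image: contrast weight = |Laplacian| of the gray image (29),
    saturation weight = per-pixel std of R, G, B (30), exposure weight =
    Gaussian around the mid-tone 0.5 (31); the three maps are multiplied
    (32) and then normalized across all input images (33).
    """
    weights = []
    for img in images:
        gray = img.mean(axis=2)
        # Contrast weight: absolute response of a 4-neighbour Laplacian.
        p = np.pad(gray, 1, mode="edge")
        w_la = np.abs(p[:-2, 1:-1] + p[2:, 1:-1]
                      + p[1:-1, :-2] + p[1:-1, 2:] - 4 * gray)
        # Saturation weight: standard deviation of R, G, B at each pixel.
        w_sa = img.std(axis=2)
        # Exposure weight: Gaussian centered at 0.5.
        w_e = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))
        weights.append(w_la * w_sa * w_e + 1e-12)  # epsilon avoids division by zero
    total = np.sum(weights, axis=0)
    return [w / total for w in weights]  # per-pixel weights sum to 1 across images
```

The returned maps would then feed the Gaussian pyramids of step S072, with the images themselves feeding the Laplacian pyramids.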
Example 1
As shown in fig. 2, the first greenish image is processed by the various algorithms. Fig. 2-1-1 shows the result of the HEEF algorithm: the image remains greenish overall and slightly blurred in its details, so the expected effect is not achieved. Fig. 2-1-2 shows the result of the BBHE algorithm: the image is still greenish overall with unclear details, and the effect is poor. Fig. 2-1-3 shows the result of the DOTHE algorithm: the green cast is reduced and the details are partly improved, but the image is over-exposed, so its brightness is too high and details are lost; the expected effect is not achieved. Fig. 2-1-4 shows the result of the proposed algorithm: the color cast is removed, the details are markedly improved and the contrast is enhanced, so the image enhancement is successful.
The second greenish image is then processed. Fig. 2-2-1 shows the result of the HEEF algorithm: the overall green cast is not removed, the details are slightly blurred, and the effect is poor. Fig. 2-2-2 shows the result of the BBHE algorithm: the image remains greenish overall with unclear details, and the expected effect is not achieved. Fig. 2-2-3 shows the result of the DOTHE algorithm: the color cast and the details are partly improved, but the image is over-exposed, so its brightness is too high and details are lost; the expected effect is not achieved. Fig. 2-2-4 shows the result of the proposed algorithm: the color cast is removed, the details are markedly improved and the contrast is enhanced; the improvement over the other results is substantial, and the image enhancement is successful.
The third bluish image is then processed. Fig. 2-3-1 shows the result of the HEEF algorithm: the overall blue cast is not well corrected, some details are blurred, and the effect is poor. Fig. 2-3-2 shows the result of the BBHE algorithm: the image is still bluish overall with unclear details, and the effect is poor. Fig. 2-3-3 shows the result of the DOTHE algorithm: the color cast and the details are partly improved, but the image is over-exposed, so its brightness is too high and details are lost; the expected effect is not achieved. Fig. 2-3-4 shows the result of the proposed algorithm: the color cast is removed, the details are greatly improved and the contrast is enhanced, so the image enhancement is successful.
The fourth bluish image is processed in the same way. Fig. 2-4-1 shows the result of the HEEF algorithm: the overall blue cast is not well corrected, some details are blurred, and the effect is poor. Fig. 2-4-2 shows the result of the BBHE algorithm: the image remains bluish overall with unclear details, and the effect is poor. Fig. 2-4-3 shows the result of the DOTHE algorithm: the color cast and the details are partly improved, but the image is over-exposed, so its brightness is too high and details are lost; the expected effect is not achieved. Fig. 2-4-4 shows the result of the proposed algorithm: the color cast is removed, the details are greatly improved and the contrast is enhanced, so the image enhancement is successful. It can be observed that the first three algorithms fall short in color correction, contrast and detail, whereas the proposed algorithm corrects the colors of underwater images, improves their contrast and highlights their details; the colors of the degraded images are well corrected.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments. In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (6)
1. The underwater image enhancement method based on color correction and three-interval histogram stretching is characterized by comprising the following steps of:
step S01: acquiring an original RGB dense fog image; carrying out color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain an enhanced image after color correction;
step S02: decomposing the original RGB dense fog image into R, G and B channel images, and performing the following processing on the pixel values of each channel image;
step S03: stretching the pixel values of the R, G, B channel images to be within the range of 0-255 respectively to obtain stretched single-channel images;
step S04: calculating the average pixel value of each of the R, G and B channel images; subtracting the single-channel average pixel value from the pixel value of each pixel point to obtain an error, and squaring this error; selecting the pixel point with the largest squared error, since the points to its left and right also differ strongly from the average pixel value; taking this point as the center and adding and subtracting three times the standard deviation of the pixel values to determine the two thresholds required for the three-interval division, thereby dividing the whole single-channel histogram into three intervals;
step S05: equalizing the subintervals of the R, G, B channels to obtain an image after single-channel equalization;
step S06: and carrying out linear weighted fusion on the R, G, B channel image and the equalized R, G, B channel image to obtain a final defogging image.
2. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein in the color correction method based on subinterval linear transformation, the total pixel value of each of the R, G, B channels is calculated as:

Sumc = Σ(i=1..M) Σ(j=1..N) Ic(i, j), c ∈ {R, G, B}

where M and N represent the number of rows and columns of the input image respectively, and IR(i, j), IG(i, j), IB(i, j) represent the pixel values of the R, G, B three-channel images at position (i, j) respectively;
meanwhile, the ratio of each of the red, green and blue channels is calculated as:

Pc = Sumc / Max(SumR, SumG, SumB), c ∈ {R, G, B}

where Max represents the maximum-value function, through which the maximum of the R, G, B channel total pixel values is obtained; PR, PG, PB represent the ratio of the total pixel value of each channel to the maximum total pixel value; to divide each channel into three intervals, two cut-off ratios, r_low^c and r_high^c, are defined as follows:
where c represents any one of the R, G and B channels, α1 and α2 are constants between 0 and 1, and pc represents the ratio of the total pixel value of the channel to the maximum total pixel value; the cut-off thresholds T_low^c and T_high^c corresponding to the two cut-off ratios r_low^c and r_high^c are then determined as equations (9) and (10) according to the following quantile function:

T_low^c = F(Ic, r_low^c)    (9)
T_high^c = F(Ic, r_high^c)    (10)

where T_low^c and T_high^c represent the cut-off thresholds, F is the lower quantile function, Ic(x) is the pixel value at a point of one of the R, G, B channels, and r_low^c and r_high^c are the cut-off ratios;
to effectively suppress shadow and highlight values, the following operation is performed on each color channel:

Ic'(x) = T_low^c, if Ic(x) < T_low^c
Ic'(x) = T_high^c, if Ic(x) > T_high^c
Ic'(x) = Ic(x), otherwise    (11)

where Ic'(x) represents the processed pixel value at a point of any one of the R, G, B channels, T_low^c and T_high^c represent the cut-off thresholds, and Ic(x) represents the pixel value at a point of any one of the R, G, B channels;
finally, the following linear operation is performed on the pixel values of the intermediate region:

Ic''(x) = 255 · (Ic'(x) − T_low^c) / (T_high^c − T_low^c)    (12)
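A minimal sketch of this per-channel clipping and stretching (illustrative only: `np.quantile` stands in for the lower quantile function F, and fixed cut-off ratios replace the channel-dependent ratios the claim derives from pc):

```python
import numpy as np

def subinterval_color_correct(img, r_low=0.01, r_high=0.99):
    """Quantile clipping plus linear stretch per channel (sketch of claim 2).

    r_low / r_high are fixed illustrative cut-off ratios; in the claimed
    method the ratios depend on each channel's total-pixel-value ratio pc.
    """
    out = np.empty(img.shape, dtype=np.float64)
    for c in range(3):
        ch = img[..., c].astype(np.float64)
        # Cut-off thresholds from the quantile function.
        t_low, t_high = np.quantile(ch, [r_low, r_high])
        # Suppress shadow and highlight values by clipping.
        ch = np.clip(ch, t_low, t_high)
        span = t_high - t_low
        # Linearly stretch the intermediate region to [0, 255].
        out[..., c] = 255 * (ch - t_low) / span if span > 0 else 0.0
    return np.rint(out).astype(np.uint8)
```

Clipping a small fraction of extreme pixels before stretching keeps a few outliers from compressing the useful dynamic range of each channel.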
3. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein in step S03 a linear stretching operation is performed on each single-channel image to ensure that every gray value lies within [0, 255]; the linear stretch is therefore defined as:

Pc(i, j) = 255 · (Ic(i, j) − Minc) / (Maxc − Minc)

where c ∈ {R, G, B}; Pc(i, j) represents the stretched gray value of the channel at position (i, j); Ic(i, j) represents the gray value of the channel at position (i, j); Minc represents the minimum pixel value of the channel; Maxc represents the maximum pixel value of the channel.
4. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the threshold selection and three-interval division in step S04 proceed as follows: firstly, the average pixel value of each single channel is calculated:

Meanc = (1 / (M × N)) · Σ(i=1..M) Σ(j=1..N) Ic(i, j), c ∈ {R, G, B}

where MeanR, MeanG, MeanB represent the average pixel values of the R, G, B channels respectively, M and N represent the number of rows and columns of the input image respectively, IR(i, j), IG(i, j), IB(i, j) represent the pixel values of the R, G, B channel images at position (i, j) respectively, and M × N is the total number of pixel points in a single channel;
the error between the pixel value at any point of one of the R, G, B channels and the average pixel value of that channel is calculated and squared, giving the squared error; the calculation formulas are:

ec(i, j) = Ic(i, j) − Meanc    (18)
ec²(i, j) = (Ic(i, j) − Meanc)²    (19)

where ec(i, j) represents the error between the pixel value at any point of one of the R, G, B channels and the average pixel value of that channel, Ic(i, j) represents the pixel value of the channel at position (i, j), Meanc represents the average pixel value of the channel, and ec²(i, j) represents the squared error;
the point with the largest squared error is selected as the center point through the Max function, and, according to the 3σ criterion, three times the standard deviation of the pixel values is subtracted from and added to it to obtain the left and right thresholds, completing the three-interval division:
t1 = Maxmc − 3σ    (20)
t2 = Maxmc + 3σ    (21)
where Maxc represents the maximum squared error of one of the R, G, B channels, Maxmc represents the gray value at the point with the maximum squared error of that channel, t1 and t2 represent the left and right thresholds respectively, and σ represents the standard deviation of the pixel values of that channel.
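The threshold selection of equations (20)–(21) can be sketched as follows (illustrative only; function and variable names are not from the patent, and thresholds are clipped to the valid gray range):

```python
import numpy as np

def three_sigma_thresholds(channel):
    """Select the two thresholds t1, t2 for one channel (step S04 sketch).

    The gray value with the largest squared deviation from the channel
    mean is taken as the center; three standard deviations either side
    give the interval bounds, clipped to [0, 255].
    """
    ch = channel.astype(np.float64)
    err2 = (ch - ch.mean()) ** 2               # squared error per pixel, eq. (19)
    center = ch.flat[np.argmax(err2)]          # gray value at the max squared error
    sigma = ch.std()                           # standard deviation of pixel values
    t1 = int(np.clip(center - 3 * sigma, 0, 255))   # eq. (20)
    t2 = int(np.clip(center + 3 * sigma, 0, 255))   # eq. (21)
    return min(t1, t2), max(t1, t2)
```

Because the center is an extreme gray value, one of the two bounds often saturates at 0 or 255, leaving an asymmetric middle interval.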
5. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the histogram equalization processing procedure for the sub-intervals of each channel in the step S05 is as follows:
firstly, dividing the gray scale range of three subintervals according to a threshold value:
[0,255]=[0,t1]∪(t1,t2]∪(t2,255](22)
where I denotes the original image and I(i, j) denotes the gray value of the pixel in the i-th row and j-th column of the image; X1, X2 and X3 represent the first, second and last sub-images respectively:

X1 = { I(i, j) | 0 ≤ I(i, j) ≤ t1 }    (23)
X2 = { I(i, j) | t1 < I(i, j) ≤ t2 }    (24)
X3 = { I(i, j) | t2 < I(i, j) ≤ 255 }    (25)
firstly, calculating the frequency of each pixel of the whole image, calculating the frequencies of three sub-histograms, obtaining the normalized pixel frequency of each sub-histogram, and finally calculating the cumulative normalized frequency of the three sub-histograms;
where x represents a gray value of the image, three value ranges of x are obtained from the interval division; when x belongs to X1, the cumulative gray-level frequency of the histogram of the first sub-image from 0 to x is calculated and expressed as CDF1(x); when x belongs to X2, the cumulative gray-level frequency of the histogram of the second sub-image from t1 to x is calculated and expressed as CDF2(x); when x belongs to X3, the cumulative gray-level frequency of the histogram of the last sub-image from t2 to x is calculated and expressed as CDF3(x);
Then, the transformed gray values of the three sub-images after histogram equalization are calculated from the normalized pixel frequencies of each sub-histogram; the sub-histogram equalization functions are obtained by analogy with the gray-level transformation function of conventional histogram equalization; the gray-level transformation function of conventional histogram equalization is described as:
f(x) = a + (b − a)·CDF(x)    (26)
where a represents the minimum output gray value, b represents the maximum output gray value, x represents the input gray value, and CDF(x) represents the cumulative distribution function of x;
the sub-histogram equalization formula is described as follows:

y(x) = t1·CDF1(x), 0 ≤ x ≤ t1
y(x) = t1 + (t2 − t1)·CDF2(x), t1 < x ≤ t2
y(x) = t2 + (255 − t2)·CDF3(x), t2 < x ≤ 255    (27)

where y represents the gray-value transformation function of the three-interval equalization processing, and the processed result is obtained from the function y; t1 and t2 represent the two thresholds dividing the sub-histograms, x represents the input gray value, and CDF1(x), CDF2(x), CDF3(x) represent the cumulative gray-level frequencies of the first, second and third sub-histograms, respectively.
6. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1,
the multi-scale fusion comprises the following steps:
step S071: defining an aggregation weight map to be fused with the input images; the aggregation weight map is determined by three measurement weights: a contrast weight map, a saturation weight map and an exposure weight map;
the contrast weight is given by a contrast weight map; the grayscale image of an input image is used to estimate a global contrast weight map WLa from the absolute value of the Laplacian response, so as to preserve the edge and detail texture information of the image:
WLa = |La * F|    (29)
where La represents the Laplacian operator, * represents convolution, and F represents the input image;
the saturation weight takes the standard deviation of the R, G and B values at each pixel in the RGB color space as the saturation weight:

WSa(x, y) = sqrt( ((R(x, y) − m(x, y))² + (G(x, y) − m(x, y))² + (B(x, y) − m(x, y))²) / 3 )    (30)

where R(x, y), G(x, y), B(x, y) represent the R, G, B channels of the input image respectively, m(x, y) represents the mean of the R, G, B values at position (x, y), and WSa(x, y) represents the saturation weight at position (x, y);
the exposure weight map is required to keep pixel values close to 0.5, i.e. the midpoint; the exposure weight of each pixel point is represented by a Gaussian curve with expected value 0.5:

WE(x, y) = exp( −(I(x, y) − 0.5)² / (2σ²) )    (31)

where I(x, y) is the normalized gray value at position (x, y) and σ is the standard deviation of the Gaussian;
in multi-scale fusion, the aggregation weight map is obtained by multiplying the three characteristic weight maps; the contrast weight map WLa, the saturation weight map WSa and the exposure weight map WE are multiplied pixel by pixel for each input image:
WZ = WLaz × WSaz × WEz    (32)
where z denotes the z-th input image and WZ represents the resulting two-dimensional weight map;
in order to ensure consistency across the images, a normalized weight map W̄z is introduced:

W̄z(x, y) = Wz(x, y) / Σz Wz(x, y)    (33)
step S072: fusing the input images with the aggregation weight maps; each input image Iz is decomposed into a Laplacian pyramid, denoted L^l{Iz}, and the corresponding aggregation weight map W̄z is decomposed into a Gaussian pyramid, denoted G^l{W̄z}, where the superscript l denotes the l-th pyramid level; the Laplacian pyramid L^l{Iz} and the Gaussian pyramid G^l{W̄z} are fused pixel by pixel as follows:

L^l{F}(x, y) = Σz G^l{W̄z}(x, y) · L^l{Iz}(x, y)    (34)
where L{F} represents the Laplacian pyramid of the fused image; the fused image is obtained by reconstructing this pyramid.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011444565.0A CN112419210B (en) | 2020-12-08 | 2020-12-08 | Underwater image enhancement method based on color correction and three-interval histogram stretching |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011444565.0A CN112419210B (en) | 2020-12-08 | 2020-12-08 | Underwater image enhancement method based on color correction and three-interval histogram stretching |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112419210A true CN112419210A (en) | 2021-02-26 |
CN112419210B CN112419210B (en) | 2023-09-22 |
Family
ID=74775554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011444565.0A Active CN112419210B (en) | 2020-12-08 | 2020-12-08 | Underwater image enhancement method based on color correction and three-interval histogram stretching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112419210B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114445300A (en) * | 2022-01-29 | 2022-05-06 | 赵恒 | Nonlinear underwater image gain algorithm for hyperbolic tangent deformation function transformation |
CN114494084A (en) * | 2022-04-14 | 2022-05-13 | 广东欧谱曼迪科技有限公司 | Image color homogenizing method and device, electronic equipment and storage medium |
WO2023130547A1 (en) * | 2022-01-06 | 2023-07-13 | 广东欧谱曼迪科技有限公司 | Endoscopic image dehazing method and apparatus, electronic device, and storage medium |
CN117078561A (en) * | 2023-10-13 | 2023-11-17 | 深圳市东视电子有限公司 | RGB-based self-adaptive color correction and contrast enhancement method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090169102A1 (en) * | 2007-11-29 | 2009-07-02 | Chao Zhang | Multi-scale multi-camera adaptive fusion with contrast normalization |
CN111127359A (en) * | 2019-12-19 | 2020-05-08 | 大连海事大学 | Underwater image enhancement method based on selective compensation color and three-interval balance |
- 2020-12-08 CN CN202011444565.0A patent/CN112419210B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090169102A1 (en) * | 2007-11-29 | 2009-07-02 | Chao Zhang | Multi-scale multi-camera adaptive fusion with contrast normalization |
CN111127359A (en) * | 2019-12-19 | 2020-05-08 | 大连海事大学 | Underwater image enhancement method based on selective compensation color and three-interval balance |
Non-Patent Citations (1)
Title |
---|
Yu Junxia: "Improved underwater image enhancement algorithm based on adaptive dynamic clipping", Western Leather (西部皮革), no. 22 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023130547A1 (en) * | 2022-01-06 | 2023-07-13 | 广东欧谱曼迪科技有限公司 | Endoscopic image dehazing method and apparatus, electronic device, and storage medium |
CN114445300A (en) * | 2022-01-29 | 2022-05-06 | 赵恒 | Nonlinear underwater image gain algorithm for hyperbolic tangent deformation function transformation |
CN114494084A (en) * | 2022-04-14 | 2022-05-13 | 广东欧谱曼迪科技有限公司 | Image color homogenizing method and device, electronic equipment and storage medium |
CN114494084B (en) * | 2022-04-14 | 2022-07-26 | 广东欧谱曼迪科技有限公司 | Image color homogenizing method and device, electronic equipment and storage medium |
CN117078561A (en) * | 2023-10-13 | 2023-11-17 | 深圳市东视电子有限公司 | RGB-based self-adaptive color correction and contrast enhancement method and device |
CN117078561B (en) * | 2023-10-13 | 2024-01-19 | 深圳市东视电子有限公司 | RGB-based self-adaptive color correction and contrast enhancement method and device |
Also Published As
Publication number | Publication date |
---|---|
CN112419210B (en) | 2023-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zheng et al. | Image dehazing by an artificial image fusion method based on adaptive structure decomposition | |
CN112419210A (en) | Underwater image enhancement method based on color correction and three-interval histogram stretching | |
CN110175964B (en) | Retinex image enhancement method based on Laplacian pyramid | |
CN108876743B (en) | Image rapid defogging method, system, terminal and storage medium | |
Jiang et al. | Image dehazing using adaptive bi-channel priors on superpixels | |
US9230304B2 (en) | Apparatus and method for enhancing image using color channel | |
CN112288658A (en) | Underwater image enhancement method based on multi-residual joint learning | |
Li et al. | Color correction based on cfa and enhancement based on retinex with dense pixels for underwater images | |
CN106485668A (en) | Mthods, systems and devices for overexposure correction | |
CN109064423B (en) | Intelligent image repairing method for generating antagonistic loss based on asymmetric circulation | |
CN106846263A (en) | The image defogging method being immunized based on fusion passage and to sky | |
Chen et al. | Hazy image restoration by bi-histogram modification | |
CN105894484A (en) | HDR reconstructing algorithm based on histogram normalization and superpixel segmentation | |
CN111861901A (en) | Edge generation image restoration method based on GAN network | |
CN112085673A (en) | Multi-exposure image fusion method for removing strong ghost | |
Liu et al. | Image contrast enhancement based on intensity expansion-compression | |
CN111179196B (en) | Multi-resolution depth network image highlight removing method based on divide-and-conquer | |
CN113284061B (en) | Underwater image enhancement method based on gradient network | |
Steffens et al. | Deep learning based exposure correction for image exposure correction with application in computer vision for robotics | |
CN105608683A (en) | Defogging method of single image | |
Dixit et al. | Image Contrast Optimization using Local Color Correction and Fuzzy Intensification | |
CN116563133A (en) | Low-illumination color image enhancement method based on simulated exposure and multi-scale fusion | |
Srigowri | Enhancing unpaired underwater images with cycle consistent network | |
Kaur et al. | A novel hybrid technique for low exposure image enhancement using sub-imge histogram equilization and artificial neural network | |
Filin et al. | Haze removal method based on joint transmission map estimation and atmospheric-light extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |